Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 16th May 2020, 05:39   #201  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
Nonetheless, getting the video from the VOB file to the AVS script and then deinterlacing it can be more easily done in CFR. There's no benefit to bothering with VFR during those steps. Once that's done, it can be converted to VFR, and then temporal cleanup can be applied.
Yes, but "deinterlacing" should only be used on parts that need to be deinterlaced; otherwise you degrade >90% of the footage

It should be IVTCing parts that need to be IVTCed (>90%), and additional filters for problem sections

Avisynth is technically CFR-only anyway; it's the exported timecodes (timestamps) that make the final output VFR

If it's mostly 23.976p with some 29.97p sections, it's easier. But some people said there were "interlaced" sections; if that's true (actual sections with 59.94 different moments in time represented, not fades, not text overlays), it's harder, because 2-pass TFM using mode 5 won't pick those up
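For the hybrid parts, the usual two-pass TIVTC workflow looks something like this (file names are placeholders, and the exact parameter names should be double-checked against the TIVTC readme):

Code:
# Pass 1: analyze only - write field-match data and duplicate metrics to disk
MPEG2Source("episode.d2v")
TFM(output="matches.txt")
TDecimate(mode=4, output="metrics.txt")

# Pass 2 (separate script): decimate film sections to 23.976, keep video
# sections at 29.97, and write v2 timecodes for a VFR mux
MPEG2Source("episode.d2v")
TFM(input="matches.txt")
TDecimate(mode=5, hybrid=2, input="metrics.txt", tfmIn="matches.txt", mkvOut="timecodes.txt")

You then mux the encoded output together with timecodes.txt (e.g. in mkvmerge) to get the final VFR file.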
poisondeathray is online now   Reply With Quote
Old 16th May 2020, 06:58   #202  |  Link
Katie Boundary
Registered User
 
Katie Boundary's Avatar
 
Join Date: Jan 2015
Posts: 1,048
Quote:
Originally Posted by poisondeathray View Post
Yes, but "deinterlacing" should only be used on parts that need to be deinterlaced; otherwise you degrade >90% of the footage

It should be IVTCing parts that need to be IVTCed (>90%), and additional filters for problem sections
IVTC is a form of deinterlacing. I think you mean that bob-deinterlacing should only be used on the parts that need to be bobbed, which is exactly how my method works.

Quote:
Originally Posted by poisondeathray View Post
If it's mostly 23.976p with some 29.97p sections, it's easier. But some people said there were "interlaced" sections; if that's true (actual sections with 59.94 different moments in time represented, not fades, not text overlays), it's harder, because 2-pass TFM using mode 5 won't pick those up
It's worse than that: supposedly, some sections suffer retrograde field behavior. We still haven't discussed how to handle that.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.
Old 16th May 2020, 07:37   #203  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
but if the source only has up to 30 fps parts then there is no part that needs bobbing, so a bob deint will reduce the resolution by half.
Old 16th May 2020, 08:27   #204  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
I'm experimenting with different denoisers for the first season of DS9. I have seen other people advocating for QTGMC as a denoiser run in progressive mode (InputType=1). Is QTGMC run in this fashion still destructive to the resolution of the footage?

Code:
QTGMC(InputType=1, Preset="Medium", EzDenoise=0.1)
Old 16th May 2020, 08:46   #205  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,646
From the wiki.
Quote:
Generally mode 1 will retain more detail, but repair less artefacts than modes 2,3. You may consider setting TR2 to a higher value (e.g. 2 or 3) when repairing progressive material.
I quite like the output of FluxSmooth. Didée came up with a simple command that I use in real time for noisy and shimmery video; see what you think.
Code:
FluxsmoothST(12,4).FluxsmoothT(4).Merge(Last,0.49)
feel free to play with the values, I haven't done any hardcore non real-time denoising so others might have some better options.

Last edited by ryrynz; 16th May 2020 at 09:03.
Old 16th May 2020, 09:25   #206  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by ryrynz View Post
I quite like the output of FluxSmooth. Didée came up with a simple command that I use in real time for noisy and shimmery video; see what you think.
Code:
FluxsmoothST(12,4).FluxsmoothT(4).Merge(Last,0.49)
feel free to play with the values, I haven't done any hardcore non real-time denoising so others might have some better options.
Thanks, I'll try it out. I've tested a few filters so far... MCDegrainSharp, TemporalDegrain2, QTGMC, but none really give me good results. One issue is that some will look good for the intro but not for film sections with characters.

Starting to feel burnout and diminishing returns at this point. I might just leave the footage as is, since the differences are subtle. (to my eyes, anyway)
Old 16th May 2020, 11:03   #207  |  Link
Groucho2004
 
Join Date: Mar 2006
Location: Barcelona
Posts: 5,034
Quote:
Originally Posted by zapp7 View Post
Seems like avsr64 is similar speed to AVSmeter, however I can run multiple instances of avsr64 and run several episodes in parallel!
It should be about the same speed. As for multiple instances of AVSMeter - RTFM (there's a setting in AVSMeter.ini called "AllowOnlyOneInstance", set it to "0").
__________________
Groucho's Avisynth Stuff

Last edited by Groucho2004; 16th May 2020 at 11:16.
Old 16th May 2020, 15:43   #208  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
IVTC is a form of deinterlacing. I think you mean that bob-deinterlacing should only be used on the parts that need to be bobbed, which is exactly how my method works.
No, that's not what I mean.

Deinterlacing (any kind) primarily involves resizing a single field (+/- temporal filtering). If you use it on progressive content, you lose ~1/2 the resolution of a full progressive frame. A full progressive frame consists of 2 fields from the same moment in time - they just need to be matched and weaved.

Double rate deinterlacing should only be used for the 59.94 sections

Single rate deinterlacing of any form (or QTGMC in progressive mode for temporal antialiasing) should only be used for problem sections, orphan fields

IVTC is used for progressive content. This means primarily field matching to get back the full progressive frames (+ decimation if 23.976p), +/- post-processing for residual combing (e.g. an orphan field; that single field can get deinterlaced).

If you deinterlace (single or double) with any method, you will degrade >90% of the content. The main distinction is interlaced vs. progressive content
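In its simplest form that's just (source file name is a placeholder):

Code:
# Field matching restores the full progressive frames; decimation removes
# the 1-in-5 duplicate frame left by 3:2 pulldown (29.97 -> 23.976)
MPEG2Source("episode.d2v")
TFM()
TDecimate()

TFM's default post-processing only deinterlaces frames it still detects as combed after matching, so the >90% progressive material passes through untouched.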


Quote:
It's worse than that: supposedly, some sections suffer retrograde field behavior. We still haven't discussed how to handle that.
Maybe someone should post a sample, and verify whether there really is 59.94 content. Some people say "interlaced" when it's really only combing from 3:2 hard telecine, or really only an orphan field.

Last edited by poisondeathray; 16th May 2020 at 15:56.
Old 16th May 2020, 15:45   #209  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by zapp7 View Post
I'm experimenting with different denoisers for the first season of DS9. I have seen other people advocating for QTGMC as a denoiser run in progressive mode (InputType=1). Is QTGMC run in this fashion still destructive to the resolution of the footage?

Code:
QTGMC(InputType=1, Preset="Medium", EzDenoise=0.1)
Yes, it's destructive

It might be appropriate for the 2-3% problem sections, but you will destroy details in >90% of the other scenes
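If you know where the problem sections are, you can restrict it with Trim and splice (the frame numbers here are hypothetical):

Code:
source = MPEG2Source("episode.d2v").TFM().TDecimate()
# QTGMC progressive-mode repair on a known problem range only
fixed = source.Trim(1200, 1499).QTGMC(InputType=1, Preset="Medium")
source.Trim(0, 1199) ++ fixed ++ source.Trim(1500, 0)

That way the other 97-98% of the frames are left alone.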


Quote:
Originally Posted by zapp7 View Post
I've tested a few filters so far... MCDegrainSharp, TemporalDegrain2, QTGMC, but none really give me good results. One issue is that some will look good for the intro but not for film sections with characters.
Exactly. You degrade the other 90-95%. No amount of neural-net processing can bring back the lost details
Old 16th May 2020, 18:25   #210  |  Link
JoelHruska
Registered User
 
Join Date: May 2020
Posts: 77
Katie,

I feel like you still don't understand what I'm trying to do here.

Quote:
That is unfortunate, because your approach of decimating an entire episode down to 23.976 FPS will create problems in 30 FPS or 60 FPS sections that are much worse than judder.
So. Let me try to be even clearer.

I don't care about the output frame rate.
I don't care about the output container.
I don't care about the file extension.

Why, then, am I mucking around with 23.976 fps footage? Because it exists. Why have I converted to every other frame rate? To see what they look like. To see which gives the best output.

Why did I output to 119.88 fps? Because the AviSynth Wiki declares that 119.88 fps output is the most-compatible way to fix judder between 23.976 and 29.97 fps content.

I want good-looking video. Coming into this project, I had no idea what I needed to address in order to get it.

You keep talking about this project like I have declared: "I am going to create the best 23.976 fps encode of DS9 because it MUST BE IN 23.976."

I couldn't give two s**** and a whistle whether the final output frame rate is 23.976, 29.97, 35, 42, 49, 60, or 119.88 fps. I wouldn't even care if the output were 1 fps played back 23.976x faster than normal, as long as the end result was good-looking footage.

I will admit to one practical boundary: I just spent eight days waiting for an RTX 2080 to finish upscaling "Emissary" after I did a 119.88 fps conversion on it but *before* I applied any other filters or processing. I did it once, so I'd be able to compare the output to future attempts to process the footage, but there's no way I'm waiting four days to process each and every episode of the television show.

I'd like to keep the total processing time below the 24-hour mark per episode, keeping in mind that VEAI imposes a 10 hour encode time all of its own. The DaVinci Studio runs currently take about two hours, so my current process is a minimum of 12 hours of processing per episode.

But that's it. That's all I care about. I'm not wedded to 23.976 fps. I have worked on a 23.976 fps version because my intent is to publish instructions for upscaling the show depending, in part, on what frame rate you want to target. Do you want 23.976 fps? Then I want a workflow for it. I was creating simultaneous comparison shots for 23.976, 29.97, 48, 60, and 119.88 so that people could choose what worked best for them.

I'd like nothing more than to be able to tell people: "Use this process to arrive at the best final project frame rate that will look better than [list of options previously enumerated]. Don't bother screwing around with this other stuff. Just do this."

But I can't possibly tell people that if I don't know what all of the other speeds look like. If I can find one solution that looks better than anything else, I'll recommend it. If I can't, I'll recommend multiple solutions and show people the outputs so they can choose for themselves.

I haven't just explored one frame rate. I've explored like, six frame rates simultaneously. I don't care which one of them the final project uses. I care which one of them makes the final project look the best.

Last edited by JoelHruska; 16th May 2020 at 18:29.
Old 16th May 2020, 19:00   #211  |  Link
Katie Boundary
Registered User
 
Katie Boundary's Avatar
 
Join Date: Jan 2015
Posts: 1,048
I just want to be clear. By retrograde field behavior, we're talking about content that, when run through separatefields().doubleweave() to simulate what it looks like on an NTSC TV, looks perfectly fine, aside from the fact that it's interlaced:

https://imgur.com/a/b62ndpK

But, when you look through it one field at a time using a dumb, temporally naive, double-rate bob-deinterlacer like bob(), it's obvious that the fields are NOT in the right order:

https://imgur.com/a/JqHkaHL
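A quick way to sanity-check the field order is to bob under both assumptions side by side; the correct assumption plays forward smoothly, while the wrong one judders back and forth (source name is a placeholder):

Code:
src = MPEG2Source("episode.d2v")
StackHorizontal(src.AssumeTFF().Bob().Subtitle("TFF"), \
                src.AssumeBFF().Bob().Subtitle("BFF"))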

Joel, can you confirm that this kind of pathological content exists in DS9?

Quote:
Originally Posted by poisondeathray View Post
Deinterlacing (any kind) primarily involves resizing a single field (+/- temporal filtering). If you use it on progressive content, you lose ~1/2 the resolution of a full progressive frame.
Deinterlacing is literally any process that converts interlaced content into progressive content. The specific form of deinterlacing that you're talking about is called bob-deinterlacing. It got that name from the dumb, double-rate, spatial-only filters, which would cause horizontal edges to bob up and down (as seen in the second gif that I posted above).

Quote:
Originally Posted by JoelHruska View Post
You keep talking about this project like I have declared: "I am going to create the best 23.976 fps encode of DS9 because it MUST BE IN 23.976.
No, I keep talking about this project like you're willing to pay any price and introduce any new problem to get rid of judder, and I just pointed out that decimating to 23.976 is the least ideal solution you've experimented with.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.

Last edited by Katie Boundary; 16th May 2020 at 19:03.
Old 16th May 2020, 19:11   #212  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
Deinterlacing is literally any process that converts interlaced content into progressive content. The specific form of deinterlacing that you're talking about is called bob-deinterlacing. It got that name from the dumb, double-rate, spatial-only filters, which would cause horizontal edges to bob up and down (as seen in the second gif that I posted above).
Not necessarily spatial only; deinterlacing can use temporal algorithms (e.g. QTGMC does)

Interlaced content means each field represents a different moment in time. 59.94 different moments in time represented

The problem is you're mixing up Inverse telecine or pulldown removal with Deinterlacing. These are different things. IVTC is NOT a form of deinterlacing as you said, because it's used when underlying content is progressive.

So if you want to use your own definition, you are wrong ("Deinterlacing is literally any process that converts interlaced content into progressive content")

Since >90% of this is progressive content, you should not deinterlace as you suggested
Old 16th May 2020, 19:15   #213  |  Link
Katie Boundary
Registered User
 
Katie Boundary's Avatar
 
Join Date: Jan 2015
Posts: 1,048
Quote:
Originally Posted by poisondeathray View Post
Not necessarily spatial only; deinterlacing can use temporal algorithms (e.g. QTGMC does)
Bobbing got its name from what the spatial-only versions did. This does NOT imply that all bob-deinterlacing is spatial-only, nor did I say that it did.

Quote:
Originally Posted by poisondeathray View Post
Interlaced content means each field represents a different moment in time. 59.94 different moments in time represented
No, interlaced content means that the content has interlacing in it. How that interlacing got there is irrelevant. If a VOB file is encoded as 24 fps with soft pulldown, and I index it in DGindex with "honor pulldown flags", bam, it's interlaced now.

Quote:
Originally Posted by poisondeathray View Post
The problem is you're mixing up Inverse telecine or pulldown removal with Deinterlacing. These are different things.
I didn't get them mixed up. I know they're different things. But one is a subset of the other.

Quote:
Originally Posted by poisondeathray View Post
So if you want to use your own definition, you are wrong
I'm sorry but your definitions are wrong. Deinterlacing is any process that converts interlaced content to progressive. Bobbing is a form of deinterlacing. Field-matching (which is the first step of IVTC) is also deinterlacing. Blur(1.0) is a form of deinterlacing.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.

Last edited by Katie Boundary; 16th May 2020 at 19:23.
Old 16th May 2020, 19:22   #214  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
Bobbing got its name from what the spatial-only versions did. This does NOT imply that all bob-deinterlacing is spatial-only.
I never said it was. I included temporal and gave an example



Quote:
Deinterlacing is any process that converts interlaced content to progressive.
That's one definition

Quote:
Bobbing is a form of deinterlacing.
Yes

Quote:
Field-matching (which is the first step of IVTC) is also deinterlacing.
No. These are different by definition

Field matching, as the name suggests, is for progressive content: 2 fields from the same moment in time.

Interlaced content has only 1/2 the spatial resolution in motion. Single fields.

Quote:
Blur(1.0) is a form of deinterlacing.
It can be

Quote:
I didn't get them mixed up. I know they're different things. But one is a subset of the other.
No. You got them mixed up.
Old 16th May 2020, 19:27   #215  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post

No, interlaced content means that the content has interlacing in it.
Well, that's an illuminating statement. Circular definition?

Quote:
How that interlacing got there is irrelevant. If a VOB file is encoded as 24 fps with soft pulldown, and I index it in DGindex with "honor pulldown flags", bam, it's interlaced now.
Of course it's relevant

24 fps with repeat field flags honored will show combing, but it's not interlaced content. The underlying CONTENT does not change; it's just reorganized as fields. The content is still progressive.

"interlacing" as a term - it's is a terrible description of what you see. You mean "combing"
Old 16th May 2020, 19:31   #216  |  Link
Katie Boundary
Registered User
 
Katie Boundary's Avatar
 
Join Date: Jan 2015
Posts: 1,048
https://en.wikipedia.org/wiki/Deinterlacing

Game over. I win.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.
Old 16th May 2020, 19:50   #217  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
https://en.wikipedia.org/wiki/Deinterlacing

Game over. I win.
LOL. What are you, like 13?

If you use that definition, you're wrong again

The film content is progressive. The underlying content does not change when you add pulldown or telecine, i.e. you're not starting with interlaced content.

Interlaced content means 59.94 different moments in time represented (or even and odd scan lines come from different moments in time). Do you have that? No. You only have 23.976. It's progressive content.
Old 16th May 2020, 20:22   #218  |  Link
Katie Boundary
Registered User
 
Katie Boundary's Avatar
 
Join Date: Jan 2015
Posts: 1,048
You are literally arguing with wikipedia at this point. If you want to do that, I can't stop you, but you'll get very little support from anyone else.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.
Old 16th May 2020, 20:51   #219  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Katie Boundary View Post
You are literally arguing with wikipedia at this point. If you want to do that, I can't stop you, but you'll get very little support from anyone else.
Because wikipedia is the ultimate reference and has no errors


I think that definition is not very useful, because it lumps everything under the umbrella of "deinterlacing"

Ok, put it another way: is it better to be more specific and clear in communication, or vague?

Since IVTC is a very specific subset if you use that definition, why not use the specific term when you mean it?

You could call it "video processing", too; that's a pretty big umbrella.

Last edited by poisondeathray; 16th May 2020 at 21:11.
Old 16th May 2020, 21:27   #220  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by Katie Boundary View Post
IVTC is a form of deinterlacing.
That is simply not true (i.e., wrong). IVTC involves the removal of redundant fields and, if the video is viewed directly without re-encoding, IVTC involves zero loss and produces zero artifacts.

By contrast, deinterlacing always involves degradation of the video because you must manufacture fields that were not there in the original. You do this either through duplication, blending, motion estimation, or some other technique.

I am bothering to post this in what has become a thread that is increasingly filled with strange statements because, unfortunately, a lot of people new to dealing with video make the mistake of conflating IVTC and deinterlacing and will often use a deinterlacer on telecined footage and then wonder why they get such horrible results.

So, deinterlacing and IVTC are two completely different things, and you cannot use the tool for one of them to solve the other one's problems. They are orthogonal.

One thing that was correctly stated in this thread by one of the doom9.org experts is that you must do IVTC before applying any temporal filter.

That is 100% correct, and should be obvious.

Why is it obvious? Because if you have a filter that looks at adjacent frames (or, sometimes, a bunch of nearby frames), the total lack of any change whatsoever between some nearby frames or fields, but not others, will completely blow up the algorithms.

As a corollary -- one I found out first-hand when I started encoding VCDs and SVCDs 20+ years ago -- encoders have the same problem as temporal filters when you try to encode telecined material without first doing IVTC. In fact, if you try to encode telecined footage, you will need several times the bitrate to get the same quality. I still remember spending half a day trying to encode the Elton John music video "I'm Still Standing" onto a VCD back in the late 90s. It was shot on film, and the capture I made off satellite was telecined. I encoded that telecined video to SVCD and all I could see was "mosquito noise." It was awful. After lots of research and dozens of encodes, I discovered the IVTC built into TMPGEnc. I used it, and the results were a hundred times better (they still look good, even by today's SD standards).
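In script terms, the order matters (the denoiser choice here is just an example):

Code:
MPEG2Source("movie.d2v")
TFM()                # field matching first...
TDecimate()          # ...then remove the pulldown duplicates
TemporalDegrain2()   # only now run the temporal filter, on clean 23.976p frames

Running the temporal filter before TFM/TDecimate would feed it the 3:2 pattern of duplicate fields and blow up its motion estimation, for exactly the reasons described above.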