16th May 2020, 05:39 | #201 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
It should be IVTCing the parts that need to be IVTC'd (>90%), with additional filters for the problem sections. Avisynth is technically CFR-only anyway; it's the exported timecodes (timestamps) that make the final output VFR. If it's mostly 23.976p with some 29.97p sections, it's easier. But some people said there were "interlaced" sections; if that's true (actual sections with 59.94 different moments in time represented, not fades, not text overlays), it's harder, because 2-pass TFM using mode 5 won't pick those up |
|
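For reference, the two-pass TFM workflow mentioned above looks roughly like this in Avisynth. This is only a sketch: the source filename and stats filenames are placeholders, and the exact mode numbers and parameter names should be checked against the TIVTC documentation for your version.

```avisynth
# Pass 1: 3-way field matching (TFM mode 5), writing match decisions
# and decimation metrics to stats files for a second pass
MPEG2Source("ds9_episode.d2v")            # hypothetical source
TFM(mode=5, output="matches.txt")
TDecimate(mode=4, output="metrics.txt")   # metrics-collection pass, no decimation yet

# Pass 2 (run as a separate script): reuse the collected stats
# TFM(mode=5, input="matches.txt")
# TDecimate(mode=5, hybrid=2, input="metrics.txt", mkvOut="timecodes.txt")
```

As noted above, this two-pass approach will not catch true interlaced sections (59.94 distinct moments per second); those have to be found and handled separately.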
16th May 2020, 06:58 | #202 | Link | ||
Registered User
Join Date: Jan 2015
Posts: 1,056
|
Quote:
Quote:
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers. |
||
16th May 2020, 08:27 | #204 | Link |
Registered User
Join Date: May 2020
Location: Canada
Posts: 49
|
I'm experimenting with different denoisers for the first season of DS9. I have seen other people advocating for QTGMC as a denoiser run in progressive mode (InputType=1). Is QTGMC run in this fashion still destructive to the resolution of the footage?
Code:
QTGMC(InputType=1, Preset="Medium", EzDenoise=0.1) |
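One way to judge for yourself whether InputType=1 is eating detail is to interleave the filtered and unfiltered clips and single-step through frames in an editor. A quick sketch, assuming a hypothetical source file and that the QTGMC plugin chain is installed:

```avisynth
src = MPEG2Source("ds9_episode.d2v")      # hypothetical source
flt = src.QTGMC(InputType=1, Preset="Medium", EzDenoise=0.1)

# Alternate original/filtered frames so each single-frame step
# shows exactly what the filter changed
Interleave(src.Subtitle("original"), flt.Subtitle("QTGMC InputType=1"))
```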
16th May 2020, 08:46 | #205 | Link | |
Registered User
Join Date: Mar 2009
Posts: 3,650
|
From the wiki.
Quote:
Code:
FluxSmoothST(12,4).FluxSmoothT(4).Merge(Last,0.49)

Last edited by ryrynz; 16th May 2020 at 09:03. |
|
16th May 2020, 09:25 | #206 | Link | |
Registered User
Join Date: May 2020
Location: Canada
Posts: 49
|
Quote:
Starting to feel burnout and diminishing returns at this point. I might just leave the footage as-is, since the differences are subtle (to my eyes, anyway). |
|
16th May 2020, 11:03 | #207 | Link |
Join Date: Mar 2006
Location: Barcelona
Posts: 5,034
|
It should be about the same speed. As for multiple instances of AVSMeter - RTFM (there's a setting in AVSMeter.ini called "AllowOnlyOneInstance", set it to "0").
__________________
Groucho's Avisynth Stuff Last edited by Groucho2004; 16th May 2020 at 11:16. |
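For anyone searching later, the change described above is a single line in AVSMeter.ini (presumably set to 1 by default, which is what blocks the second instance):

```ini
AllowOnlyOneInstance=0
```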
16th May 2020, 15:43 | #208 | Link | ||
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
Deinterlacing (any kind) primarily involves resizing a single field (+/- temporal filtering). If you use it on progressive content, you lose roughly half the resolution of a full progressive frame. A full progressive frame consists of 2 fields from the same moment in time; they just need to be matched and weaved.

Double-rate deinterlacing should only be used for the 59.94 sections. Single-rate deinterlacing of any form (or QTGMC in progressive mode for temporal antialiasing) should only be used for problem sections and orphan fields.

IVTC is used for progressive content. This means primarily field matching to get back the full progressive frames (+ decimation if 23.976p), +/- post-processing for residual combing (e.g. an orphan field; that single field can get deinterlaced). If you deinterlace (single- or double-rate) with any method, you will degrade >90% of the content. The main distinction is interlaced vs. progressive content.

Quote:
Last edited by poisondeathray; 16th May 2020 at 15:56. |
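Put together, the per-section approach described above might look like this in Avisynth. This is a sketch only: the source filename and Trim ranges are made-up placeholders you would replace with the actual section boundaries found by scrubbing the source, and TIVTC/QTGMC must be installed.

```avisynth
src = MPEG2Source("ds9_episode.d2v")      # hypothetical source

# Default path: field match + decimate the telecined film content (>90% of the show)
film = src.TFM().TDecimate()              # telecined 29.97 -> 23.976p

# Problem path: a true interlaced section gets double-rate deinterlaced,
# keeping all 59.94 moments in time (frame numbers are placeholders)
video = src.Trim(1000, 1500).QTGMC(Preset="Medium")

# The pieces then get spliced back together and VFR timecodes exported.
# Note: clips at different frame rates cannot be spliced directly;
# a real script needs per-section trims plus timecode bookkeeping, e.g.:
# final = film.Trim(0, 799) ++ video ++ film.Trim(1200, 0)
film
```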
||
16th May 2020, 15:45 | #209 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
It might be appropriate for the 2-3% problem sections, but you will destroy details in >90% of the other scenes

Exactly. You degrade the 90-95%. No amount of neural-net processing can return the lost details |
|
16th May 2020, 18:25 | #210 | Link | |
Registered User
Join Date: May 2020
Posts: 77
|
Katie,
I feel like you still don't understand what I'm trying to do here. Quote:
I don't care about the output frame rate. I don't care about the output container. I don't care about the file extension.

Why, then, am I mucking around with 23.976 fps footage? Because it exists. Why have I converted to every other frame rate? To see what they look like. To see which gives the best output. Why did I output to 119.88 fps? Because the AviSynth Wiki declares that 119.88 fps output is the most-compatible way to fix judder between 23.976 and 29.97 fps content.

I want good-looking video. Coming into this project, I had no idea what I needed to address in order to get it. You keep talking about this project like I have declared: "I am going to create the best 23.976 fps encode of DS9 because it MUST BE IN 23.976." I couldn't give two s**** and a whistle if the final output frame rate is 23.976, 29.97, 35, 42, 49, 60, or 119.88 fps. I wouldn't even care if the output were 1 fps and we played the content back 23.976x faster than normal, if the end result were good-looking footage.

I will admit to one practical boundary: I just spent eight days waiting for an RTX 2080 to finish upscaling "Emissary" after I did a 119.88 fps conversion on it, but *before* I applied any other filters or processing. I did it once, so I'd be able to compare the output to future attempts to process the footage, but there's no way I'm waiting four days to process each and every episode of the television show. I'd like to keep the total processing time below the 24-hour mark per episode, keeping in mind that VEAI imposes a 10-hour encode time all of its own. The DaVinci Studio runs currently take about two hours, so my current process is a minimum of 12 hours of processing per episode.

But that's it. That's all I care about. I'm not wedded to 23.976 fps. I have worked on a 23.976 fps version because my intent is to publish instructions for upscaling the show depending, in part, on what frame rate you want to target. Do you want 23.976 fps? Then I want a workflow for it.
I was creating simultaneous comparison shots for 23.976, 29.97, 48, 60, and 119.88 so that people could choose what worked best for them. I'd like nothing more than to be able to tell people: "Use this process to arrive at the best final project frame rate, one that will look better than [list of options previously enumerated]. Don't bother screwing around with this other stuff. Just do this." But I can't possibly tell people that if I don't know what all of the other speeds look like. If I can find one solution that looks better than anything else, I'll recommend it. If I can't, I'll recommend multiple solutions and show people the outputs so they can choose for themselves.

I haven't just explored one frame rate. I've explored, like, six frame rates simultaneously. I don't care which one of them the final project uses. I care which one of them makes the final project look the best.

Last edited by JoelHruska; 16th May 2020 at 18:29. |
|
16th May 2020, 19:00 | #211 | Link | |
Registered User
Join Date: Jan 2015
Posts: 1,056
|
I just want to be clear. By retrograde field behavior, we're talking about content that, when run through separatefields().doubleweave() to simulate what it looks like on an NTSC TV, looks perfectly fine, aside from the fact that it's interlaced:
https://imgur.com/a/b62ndpK

But, when you look through it one field at a time using a dumb, temporally naive, double-rate bob-deinterlacer like bob(), it's obvious that the fields are NOT in the right order:

https://imgur.com/a/JqHkaHL

Joel, can you confirm that this kind of pathological content exists in DS9?

Quote:
No, I keep talking about this project like you're willing to pay any price and introduce any new problem to get rid of judder, and I just pointed out that decimating to 23.976 is the least ideal solution you've experimented with.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers. Last edited by Katie Boundary; 16th May 2020 at 19:03. |
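The two inspection views described above can be set up like this in Avisynth (a sketch; the source filename is a placeholder, and you would comment in one view at a time):

```avisynth
src = MPEG2Source("ds9_episode.d2v")      # hypothetical source

# View 1: re-weave the fields at field rate to simulate what an NTSC set displays
tv = src.SeparateFields().DoubleWeave()

# View 2: dumb double-rate bob; stepping through one field at a time
# exposes fields that are out of order
fields = src.Bob()

tv    # or: fields
```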
|
16th May 2020, 19:11 | #212 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
Interlaced content means each field represents a different moment in time: 59.94 different moments in time represented.

The problem is that you're mixing up inverse telecine (pulldown removal) with deinterlacing. These are different things. IVTC is NOT a form of deinterlacing as you said, because it's used when the underlying content is progressive. So even by your own definition ("Deinterlacing is literally any process that converts interlaced content into progressive content"), you are wrong.

Since >90% of this is progressive content, you should not deinterlace it as you suggested |
|
16th May 2020, 19:15 | #213 | Link | |||
Registered User
Join Date: Jan 2015
Posts: 1,056
|
Quote:
Quote:
Quote:
I'm sorry, but your definitions are wrong. Deinterlacing is any process that converts interlaced content to progressive. Bobbing is a form of deinterlacing. Field matching (which is the first step of IVTC) is also deinterlacing. Blur(1.0) is a form of deinterlacing.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers. Last edited by Katie Boundary; 16th May 2020 at 19:23. |
|||
16th May 2020, 19:22 | #214 | Link | ||||||
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
Quote:
Quote:
Quote:
Field matching, as the name suggests, is for progressive content: 2 fields from the same moment in time. Interlaced content has only 1/2 the spatial resolution in motion: single fields. Quote:
Quote:
|
||||||
16th May 2020, 19:27 | #215 | Link | ||
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
Quote:
24 fps content with repeat-field flags honored will show combing, but it's not interlaced content. The underlying CONTENT does not change; it's reorganized as fields, that's all. The content is still progressive; it did not change. "Interlacing" as a term is a terrible description of what you see here. You mean "combing" |
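The point that pulldown merely reorganizes fields can be demonstrated with a toy Avisynth script: a 3:2 hard telecine built from progressive frames can be undone by field matching and decimation. This is a sketch, assuming TIVTC is installed; ColorBars stands in for real 23.976p footage, and the SelectEvery pattern is the standard 3:2 pulldown idiom.

```avisynth
# Clean progressive frames at film rate
film = ColorBars(640, 480, pixel_type="YUY2").AssumeFPS(24000, 1001)

# 3:2 hard telecine: 4 progressive frames -> 5 frames (AA BB BC CD DD).
# Every output field still comes from an original progressive frame;
# no new content is created, the fields are only reorganized.
telecined = film.AssumeTFF().SeparateFields() \
                .SelectEvery(8, 0,1, 2,3, 2,5, 4,7, 6,7) \
                .Weave()

# Field matching + decimation reassembles the original 23.976p frames
restored = telecined.AssumeTFF().TFM().TDecimate()
restored
```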
||
16th May 2020, 19:50 | #217 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
If you use that definition, you're wrong again. The film content is progressive. The underlying content does not change when you add pulldown or telecine, i.e. you're not starting with interlaced content. Interlaced content means 59.94 different moments in time represented (or: even and odd scan lines come from different moments in time). Do you have that? No. You only have 23.976. It's progressive content. |
|
16th May 2020, 20:22 | #218 | Link |
Registered User
Join Date: Jan 2015
Posts: 1,056
|
You are literally arguing with wikipedia at this point. If you want to do that, I can't stop you, but you'll get very little support from anyone else.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers. |
16th May 2020, 20:51 | #219 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,377
|
Quote:
I think that definition is not very useful, because it lumps everything under the umbrella of "deinterlacing". Ok, put it another way: is it better to be more specific and clear in communication, or vague? Since IVTC is a very specific subset under that definition, why not say IVTC when you mean it? You could call it all "video processing"; that's a pretty big umbrella too. Last edited by poisondeathray; 16th May 2020 at 21:11. |
|
16th May 2020, 21:27 | #220 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
That is simply not true (i.e., wrong). IVTC involves the removal of redundant fields and, if the video is viewed directly without re-encoding, IVTC involves zero loss and produces zero artifacts.
By contrast, deinterlacing always involves degradation of the video, because you must manufacture fields that were not there in the original. You do this through duplication, blending, motion estimation, or some other technique.

I am bothering to post this in what has become a thread increasingly filled with strange statements because, unfortunately, a lot of people new to dealing with video make the mistake of conflating IVTC and deinterlacing, and will often use a deinterlacer on telecined footage and then wonder why they get such horrible results. So, deinterlacing and IVTC are two completely different things, and you cannot use the tool for one of them to solve the other one's problems. They are orthogonal.

One thing that was correctly stated in this thread by one of the doom9.org experts is that you must do IVTC before applying any temporal filter. That is 100% correct, and should be obvious. Why is it obvious? Because if you have a filter that looks at adjacent frames (or, sometimes, a bunch of nearby frames), the total lack of any change whatsoever between some nearby frames or fields, but not others, will completely blow up the algorithms.

As a corollary, one I found out first-hand when I started encoding VCDs and SVCDs 20+ years ago: encoders have the same problem as temporal filters when you try to encode telecined material without first doing IVTC. In fact, to encode telecined footage at the same quality, you will need an integer multiple of the bitrate.

I still remember spending half a day trying to encode the Elton John music video "I'm Still Standing" onto a VCD back in the late 90s. It was shot on film, and the capture I made off satellite was telecined. I encoded that telecined video to SVCD and all I could see was "mosquito noise". It was awful. After lots of research and dozens of encodes, I discovered the IVTC built into TMPGEnc.
I used it, and the results were a hundred times better (they still look good, even by today's SD standards). |
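The ordering rule endorsed above (IVTC first, temporal filtering after) looks like this in practice. A sketch, assuming TIVTC and QTGMC are installed; the source filename and denoiser choice are only examples.

```avisynth
MPEG2Source("ds9_episode.d2v")    # hypothetical telecined source

# 1) Remove the pulldown first, so every remaining frame is a distinct moment in time
TFM()
TDecimate()                       # telecined 29.97 -> 23.976 progressive

# 2) Only now apply temporal filtering: the 3:2 duplicate cadence is gone,
#    so the filter's motion analysis sees real motion, not repeated fields
QTGMC(InputType=1, Preset="Medium", EzDenoise=0.1)
```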
|
|