Old 29th May 2020, 21:22   #481  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by Stereodude View Post
z_ConvertFormat(width=2880, height=2160, resample_filter="bicubic", pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")
Just curious, would there be any advantage to using YUV444P16 instead of YUV420P16 in this workflow?
Old 29th May 2020, 22:39   #482  |  Link
Stereodude
Registered User
 
Join Date: Dec 2002
Location: Region 0
Posts: 1,436
Quote:
Originally Posted by zapp7 View Post
Just curious, would there be any advantage to using YUV444P16 instead of YUV420P16 in this workflow?
You've got to get to 420 at some point, and this is presumably at the end of the processing chain anyway, aside from going to 10 bits for the HEVC encode.

I'd argue it's best to scale it only once instead of in multiple steps.
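As a rough sketch of what that tail end of the chain could look like (the z_ConvertFormat line just mirrors the call quoted above; ConvertBits is AviSynth+'s built-in bit-depth converter, and the dither setting here is only an illustration, not a recommendation):

Code:
# one combined step: downscale to 2880x2160, RGB -> Rec.709 matrix and 4:2:0 subsampling
z_ConvertFormat(width=2880, height=2160, resample_filter="bicubic", pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")
# then a single dithered drop from 16 bits to 10 bits right before the HEVC encoder
ConvertBits(10, dither=1)  # dither=1 = Floyd-Steinberg error diffusion in AviSynth+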
Old 30th May 2020, 18:05   #483  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by ReinerSchweinlin View Post
It was present in an early version and there was a help file about it, then they removed the feature and re-introduced it some versions ago - but the help file is missing now, I just looked it up. I will recover some of the old betas, search for it and post it here; give me some time, got work to do...
Here you go. It's from an older version, so selecting new models isn't included yet. I've asked Topaz for a newer version and am waiting for a response.
BTW, they released a new beta with a new model yesterday.
Old 30th May 2020, 18:07   #484  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by Stereodude View Post
I'd argue it's best to scale it only once instead of in multiple steps.
I am not sure if we are talking about the same "steps", but in early versions of VEAI, sometimes processing the video first with a 100% "cleanup" pass helped quite a lot.
Old 30th May 2020, 19:29   #485  |  Link
JoelHruska
Registered User
 
Join Date: May 2020
Posts: 77
John,

For decades, the idea of "Enhance" -- as in, the recovery of detail levels over and above what was originally encoded in the source -- was a joke. Completely farcical. You're not wrong.

AI-based upscalers have turned science fiction into reality. There are different models that process the final image in different ways. Topaz Video Enhance AI is very much a work in progress.

If you want to see an example of the same video scene run through an upscaler without using YouTube as an intermediary, here:

This link is to samples Hello_Hello provided of some DVD clips I gave him:

https://www.sendspace.com/file/gwxxi0

This link is where you can download upscaled versions of those samples:

https://www.sendspace.com/filegroup/...8pntkJJg0KsgtR

The SFE-1 and SFE-2 videos are upscaled using two different models -- Gaia-CG and Gaia-HQ, both available in Topaz Video Enhance AI.

You can see the significant level of improvement for yourself. You may not like it, of course, but you will not fail to see the difference.
Old 30th May 2020, 21:01   #486  |  Link
Stereodude
Registered User
 
Join Date: Dec 2002
Location: Region 0
Posts: 1,436
Quote:
Originally Posted by ReinerSchweinlin View Post
I am not sure if we are talking about the same "steps", but in early versions of VEAI, sometimes processing the video first with a 100% "cleanup" pass helped quite a lot.
I mean taking the output from Topaz and getting it into 2880x2160 YUV420. 444 to 420 requires scaling the chroma, and you've also got to turn 3840x2160 into 2880x2160. It's probably best to do that in a single operation instead of compounding multiple separate steps that can each degrade the image.
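To make the "compounding" point concrete, here's a rough sketch of the two routes in avsresize terms (the parameter values simply mirror the call quoted earlier in the thread; the two-step variant is only there for contrast):

Code:
# two separate steps - the chroma ends up being resampled twice
# (once in the resize, once more in the 4:2:0 subsampling):
# z_ConvertFormat(width=2880, height=2160, resample_filter="bicubic")
# z_ConvertFormat(pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")

# single step - the resize, the matrix conversion and the 4:2:0 subsampling all happen in one call:
z_ConvertFormat(width=2880, height=2160, resample_filter="bicubic", pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")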
Old 30th May 2020, 21:18   #487  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by Stereodude View Post
I mean taking the output from Topaz and getting it into 2880x2160 YUV420. 444 to 420 requires scaling the chroma, and you've also got to turn 3840x2160 into 2880x2160. It's probably best to do that in a single operation instead of compounding multiple separate steps that can each degrade the image.
Here is my code for handling the 16-bit TIFF output from Topaz VEAI. Should TemporalDegrain2 come before or after the z_ConvertFormat step?

Code:
# outfolder is the Topaz output directory, FileCnt is the number of images
ImageSource(file=outfolder+"\%06d.tiff", start=0, end=FileCnt-1, fps=23.976, pixel_type="RGB48")
ConvertToPlanarRGB()
# one pass: downscale to 2880x2160 and convert full-range sRGB RGB to full-range Rec.709 YUV 4:2:0 (16-bit)
z_ConvertFormat(width=2880, height=2160, resample_filter="bicubic", pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")
# temporal denoising
TemporalDegrain2(grainLevel=false)
# deband, add a light grain layer and dither down to 10 bits for the HEVC encode
neo_f3kdb(range=31, grainY=15, grainC=10, sample_mode=2, dither_algo=3, dynamic_grain=true, keep_tv_range=false, output_depth=10)
Old 30th May 2020, 21:45   #488  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by Stereodude View Post
I mean taking the output from Topaz and getting it into 2880x2160 YUV420. 444 to 420 requires scaling the chroma, and you've also got to turn 3840x2160 into 2880x2160. It's probably best to do that in a single operation instead of compounding multiple separate steps that can each degrade the image.
Ah OK, I get what you mean. VEAI internally can only upscale 2x, 4x or 8x - so it's probably best in terms of quality not to use the "1080p" preset, but rather to output 2x or 4x as 16-bit TIFF, then do one downscale/chroma/encode pass... Internally, if a preset with a fixed resolution is set in VEAI, it uses ffmpeg to downscale from the AI output.
Old 30th May 2020, 21:50   #489  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by JoelHruska View Post
For decades, the idea of "Enhance" -- as in, the recovery of detail levels over and above what was originally encoded in the source -- was a joke. Completely farcical. You're not wrong.
I'd like to add that it's important to understand that most upscaling techniques actually "make stuff up" - so what we were used to seeing in movies ("real" details recovered by some magic) is actually a little different. There are some approaches that get a little detail back (also see my other post), like this one:
https://github.com/jiangsutx/SPMC_VideoSR
https://www.youtube.com/watch?v=0WnwS1EOx3M

But these have to be differentiated from the "repainting" models...
Old 31st May 2020, 00:34   #490  |  Link
JoelHruska
Registered User
 
Join Date: May 2020
Posts: 77
Reiner,

What would you call an approach like ESRGAN? According to the description of the model:

"how do we recover the finer texture details when we super-resolve at large upscaling factors? ... Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks."

They use phrases like "recover" as opposed to "approximate" or "repaint." Are they obfuscating their own approach, or are they one of the models that actually recovers data?
Old 31st May 2020, 01:17   #491  |  Link
Stereodude
Registered User
 
Join Date: Dec 2002
Location: Region 0
Posts: 1,436
Quote:
Originally Posted by ReinerSchweinlin View Post
Ah OK, I get what you mean. VEAI internally can only upscale 2x, 4x or 8x - so it's probably best in terms of quality not to use the "1080p" preset, but rather to output 2x or 4x as 16-bit TIFF, then do one downscale/chroma/encode pass... Internally, if a preset with a fixed resolution is set in VEAI, it uses ffmpeg to downscale from the AI output.
Oh, I didn't realize that. They would definitely want to do a 4x or 8x then (depending on whether 1080p or UHD is the output target). Perhaps a sharper downsampling kernel that avoids ringing should be used with it. SSIM 2D in madVR looks very good, but I don't know if there's an Avisynth equivalent of that.
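For anyone who wants to experiment, a rough sketch of trying different kernels within the same avsresize call - spline36 is only a stand-in here for "sharper than bicubic with fairly well-controlled ringing", not an SSIM-downscale equivalent (I'm not aware of one for Avisynth):

Code:
# same single-pass conversion as before, only the resample kernel changes;
# avsresize also accepts e.g. "spline16", "spline64" and "lanczos" for comparison on a ringing-prone frame
z_ConvertFormat(width=2880, height=2160, resample_filter="spline36", pixel_type="YUV420P16", colorspace_op="rgb:srgb:170m:f=>709:709:709:f")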
Old 31st May 2020, 04:58   #492  |  Link
hello_hello
Registered User
 
Join Date: Mar 2011
Posts: 4,823
Well I'll confess after trying some upscaling with Avisynth, I'm now starting to wonder just how clever the clever upscaling is by comparison.

I took the original lossless h264 DVD samples and upscaled them to 4k in Avisynth. I had to do it using 2 processes and hobble x264 a little, otherwise my poor old XP machine would run out of memory, but here are the files in the zip file and how they were created etc.

SFE-1 nnedi3 4k rec.709.mkv
SFE-2 nnedi3 4k rec.709.mkv

Both upscaled using the following script. I forgot the original upscales weren't color converted so the colors are a little different, but that's not what the comparisons were about.
There may be better ways to sharpen than with LSFMod, but without a lot of experimenting, it's the only sharpening method I trust not to look horrible.
There's only a frame or two of live action in each sample, but it looks pretty obvious that using the method below would require the live sections either to be denoised first or sharpened far less, so the result isn't sharpened noise. However, this was primarily to look at how nnedi3 would compare for the CGI.

Quote:
MP_Pipeline("""
LoadPlugin("C:\Program Files\MeGUI\tools\lsmash\LSMASHSource.dll")
LWLibavVideoSource("D:\SFE-1.mkv.lwi")
AssumeTFF()
# convert the DVD's Rec.601 colors to Rec.709 for the HD/UHD upscale
ColorMatrix(mode="Rec.601->Rec.709", clamp=0)
Crop(8,0,-8,0)
# IVTC (field matching + decimation)
TFM(pp=5, micmatching=0).TDecimate()
# QTGMC in progressive-input mode as a de-shimmer/anti-aliasing pass
QTGMC(InputType=1,TR2=3,Preset="Slower",ShutterBlur=3,ShutterAngleSrc=180,ShutterAngleOut=180,SBlurLimit=8)
LSFMod(Strength=200)
### prefetch: 16, 0
### ###
""")
# 8x nnedi3 upscale, with the correction-shift resize bringing it to 2880x2160
nnedi3_rpow2(rfactor=8, cshift="Spline64Resize", fwidth=2880, fheight=2160)
LSFMod(Strength=200)
JoelHruska's upscales
SFE-1_4.00x_2560x1920_Gaia-CG.mp4
SFE-2_4.00x_2560x1920_Gaia-CG.mp4

The included screenshots are the same frame from SFE-1. The nnedi3 screenshots are taken directly from the Avisynth output (no re-encoding), so any advantage they have in terms of accuracy is hereby declared, and the versions upscaled by JoelHruska are upscaled from my IVTC'd SD encodes of the original source, so that's their disadvantage declared too.

The screenshots with 4K in the name are the full upscaled frame. The screenshots not labelled 4k were taken using the PrintScreen button with MPC-HC only displaying 1080p worth of the frame on my 1080p monitor. That way the 1080p screenshots can be compared without the need for resizing on a 1080p monitor. Because I forgot about the color correction thing when encoding, the nnedi3 screenshots include a version that wasn't color converted to rec.709.

The samples upscaled by JoelHruska are also included because I was starting to get confused with the multiple sample uploads myself, so the nnedi3 and "Gaia-CG" upscales are both included.

The clever upscaling does clean up the line wobbling/aliasing more, but as for the rest, I'm not so sure it's better. Maybe there's a better anti-aliasing filter than QTGMC that could be used first, before upscaling with Avisynth?

nnedi3 comparison.zip
(79.8 MB)

Last edited by hello_hello; 31st May 2020 at 05:18.
Old 31st May 2020, 09:02   #493  |  Link
Katie Boundary
Registered User
 
Join Date: Jan 2015
Posts: 1,048
Quote:
Originally Posted by scharfis_brain View Post
Is 3840x2160 really necessary?
No, and in fact it's the wrong aspect ratio - a 4:3 source at 2160 high works out to 2880x2160, not 3840x2160.
__________________
I ask unusual questions but always give proper thanks to those who give correct and useful answers.
Old 1st June 2020, 01:47   #494  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by Stereodude View Post
Oh, I didn't realize that. They would definitely want to do a 4x or 8x then (depending on whether 1080p or UHD is the output target). Perhaps a sharper downsampling kernel that avoids ringing should be used with it. SSIM 2D in madVR looks very good, but I don't know if there's an Avisynth equivalent of that.
It seems that VEAI can't actually upscale by 8x. The maximum it will accept for me is 6x. So it looks like to do 4K with this workflow, there is a minimum of 2 resampling steps to the VEAI output.
Old 1st June 2020, 02:57   #495  |  Link
JoelHruska
Registered User
 
Join Date: May 2020
Posts: 77
Zapp7,

Are you going to crop it for 16:9 aspect ratios?
Old 1st June 2020, 04:09   #496  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by JoelHruska View Post
Zapp7,

Are you going to crop it for 16:9 aspect ratios?
No, I'm sticking with the 4:3 ratio.
Old 1st June 2020, 11:42   #497  |  Link
Stereodude
Registered User
 
Join Date: Dec 2002
Location: Region 0
Posts: 1,436
Quote:
Originally Posted by zapp7 View Post
It seems that VEAI can't actually upscale by 8x. The maximum it will accept for me is 6x. So it looks like to do 4K with this workflow, there is a minimum of 2 resampling steps to the VEAI output.
I'm not sure I'm following you. 6x would give you 4320x2880 RGB 16-bit tiff files. You can turn that into 2880x2160 YUV420 in a single scaling step.
Old 1st June 2020, 16:41   #498  |  Link
zapp7
Registered User
 
Join Date: May 2020
Location: Canada
Posts: 49
Quote:
Originally Posted by Stereodude View Post
I'm not sure I'm following you. 6x would give you 4320x2880 RGB 16-bit tiff files. You can turn that into 2880x2160 YUV420 in a single scaling step.
Reiner mentioned up-thread that VEAI internally upscales only to 2x, 4x or 8x. Based on that it's my understanding that if I were to upscale by 6x, Topaz would internally upscale 8x and use ffmpeg to downscale to 6x. I would then have to downscale again to 2160p in a second resampling step.

Maybe I misunderstood and it actually can scale by 6x internally?

Also, I just found out that Topaz VEAI can be invoked from the command line. If invoked with -? or -h, it will show a list of all available arguments. I haven't tested it yet, but this looks promising for incorporating VEAI into a batch script!
Old 1st June 2020, 22:03   #499  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by zapp7 View Post
Reiner mentioned up-thread that VEAI internally upscales only to 2x, 4x or 8x. Based on that it's my understanding that if I were to upscale by 6x, Topaz would internally upscale 8x and use ffmpeg to downscale to 6x. I would then have to downscale again to 2160p in a second resampling step.

Maybe I misunderstood and it actually can scale by 6x internally?

Also, I just found out that Topaz VEAI can be invoked from the command line. If invoked with -? or -h, it will show a list of all available arguments. I haven't tested it yet, but this looks promising for incorporating VEAI into a batch script!
I'd have to check whether the newest model can do scale factors other than 2x, 4x and 8x - not sure at the moment. The Gaia and Artemis models are "factor of 2".
The CLI can be used for batches, but it's incomplete at the moment; some stuff is missing. It's on Topaz's roadmap, so hopefully they'll stick with it and clean this feature up.
Old 1st June 2020, 22:34   #500  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
Quote:
Originally Posted by JoelHruska View Post
Reiner,

What would you call an approach like ESRGAN? According to the description of the model:

"how do we recover the finer texture details when we super-resolve at large upscaling factors? ... Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks."

They use phrases like "recover" as opposed to "approximate" or "repaint." Are they obfuscating their own approach, or are they one of the models that actually recovers data?
Joel, I'll write a more thorough text when I get back from a project, but I don't want to leave without a short comment:
"Recover" is a fuzzy term - ESRGAN is actually one of the "we make stuff up" methods (it looks good, but it's not original, just pleasing to the viewer - in the case of video, more about this later). There are some methods which CAN reveal some detail by looking at multiple frames. It's best - in my opinion - to combine both (VEAI actually does combine both methods to a certain extent - but I'd love to see a selection of what to turn on or off)...
More when I come back...