#1
Registered User
Join Date: May 2025
Posts: 5
How optimised is my script for batch encoding?
I'm about to batch-filter many Japanese broadcast tapes; there will be slight tweaks with regard to TFF/BFF and how much each edge gets trimmed.
I'm worried that I may be missing a step. How do my scripts look? What I'm trying to achieve:

1. Frame rate doubled to 59.94 fps for smooth playback (retaining both live-action and animation clarity)
2. Slight noise reduction
3. Remove junk noise and empty space through cropping
4. Replace the cropped pixels with black pixels, retaining the original dimensions of 720x480
5. Center the picture after cropping, for image symmetry
6. Encode at a faster speed (prefetch)
7. Encode lossy at high quality, with no visible difference from lossless
8. Encode at the correct aspect ratio of 4:3

I'm also going to archive the raw captures as FFV1 and FLAC. Does this seem sensible, and would it be safe to convert these raw captures first, and THEN encode, without loss of quality? I'm also looking for recommendations on techniques for upscaling correctly to 4K for YouTube, again without losing quality. I'm not sure how much of a difference it would make to upscale from the raw source vs. the encode.

My filtering script:
Code:
Import("TemporalDegrain-v2.6.6.avsi") # Load source video_audio = FFmpegSource2("1_New_Raw_Sample_2.avi", atrack=-1, fpsnum=30000, fpsden=1001).AssumeTFF() # QTGMC at 59.94 fps processed = QTGMC(video_audio, preset="Slower") # Crop after deinterlacing crop_left = 4 crop_top = 4 crop_right = 12 crop_bottom = 4 cropped = processed.crop(crop_left, crop_top, -crop_right, -crop_bottom) # Denoise and sharpen in YV12 (this is to avoid unnecessary format switching) denoised = cropped.TemporalDegrain2(degrainTR=1) # Reconvert to YV12 due to TempDegrain output sharpened = denoised.ConvertToYV12().LSFmod(defaults="slow") # Restore borders border_left = Round((crop_left + crop_right) / 2) border_right = (crop_left + crop_right) - border_left border_top = Round((crop_top + crop_bottom) / 2) border_bottom = (crop_top + crop_bottom) - border_top restored = sharpened.AddBorders(border_left, border_top, border_right, border_bottom) # Output with prefetch return restored.Prefetch(16) Quote:
#2
Registered User
Join Date: Oct 2024
Location: Nebula 71 Star
Posts: 53
I recommend looking into TemporalDegrain2_Fast by Arx1meD: https://forum.doom9.org/showthread.p...94#post1982594
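As a rough illustration, it could replace the TemporalDegrain2 call in the script above; the .avsi filename here is an assumption, and the defaults are simply the ones listed in the function header quoted further down the thread:
Code:
# hypothetical drop-in for the "denoised = ..." line above (filename assumed)
Import("TemporalDegrain2_fast.avsi")
denoised = cropped.TemporalDegrain2_fast()  # defaults: Strength=3, RadT=1, BlkSz=16, Sharp=0.4, PostDeHalo=false, PostMix=0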
#3
Registered User
Join Date: May 2025
Posts: 5
The only thing is that the scripting is now quite different (Degrain2Fast doesn't understand degrainTR=1), so I've just had to go with the default filtering settings. I can see that the filtering is too heavy on the standard settings of TDegrain2_Fast; notice the rounding of the edges on the PlayStation logo: https://imgsli.com/Mzc3NzQ4

Any filter settings you would recommend?
#4
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 11,164
Code:
function TemporalDegrain2_fast (clip input, int "Strength", int "Y", int "U", int "V", int "RadT", int "BlkSz", int "Olap", \
                                float "Sharp", bool "PostDeHalo", float "PostMix")
{
    Str     = Default(Strength, 3)        # Noise/grain suppression strength.
                                          # Strength for depth noise (< 2), low (3 ... 5), medium (6 ... 9), high (10 ... 14), veryhigh (> 15)
    Y       = Default(Y, 3)               # Luma plane to process. Value: 2 - copy from input, 3 - process
    U       = Default(U, Y)               # Chroma plane to process. Value: 2 - copy from input, 3 - process
    V       = Default(V, U)               # Chroma plane to process. Value: 2 - copy from input, 3 - process
    RadT    = Default(RadT, 1)            # Temporal Radius of frame analysis. Value 1 or 2
    BlkSz   = Default(BlkSz, 16)          # Block size for motion analysis. Bigger BlkSz quicker. Recommended values: 8, 16, 32
    OLap    = Default(OLap, BlkSz/2)      # The value of overlapping blocks on each other
    Sharp   = Default(Sharp, 0.4)         # Sharpening strength. Range: 0 ... 1
    DeHalo  = Default(PostDeHalo, false)  # Remove halo after sharpening. Value: true or false
    PostMix = Default(PostMix, 0)         # How much noise/grain will be returned. Range: 0 ... 1
Code:
# ...
STRENGTH   = 2     # Default is 3, reduce a bit
RADT       = 1     # Default is 1 anyway, cannot reduce, is minimum radius.
POSTMIX    = 0.0   # Default 0.0, How much noise/grain will be returned. Range: 0 ... 1
# extra stuff
SHARP      = 0.4   # Default 0.4, maybe reduce to 0.3 if too sharp
POSTDEHALO = false # Default false, make true if dehalo-ing.
# ...
denoised = cropped.TemporalDegrain2_fast(Strength=STRENGTH, RadT=RADT, Sharp=SHARP, PostDehalo=POSTDEHALO, PostMix=POSTMIX)
# ...
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 7th May 2025 at 19:49.
#5
Registered User
Join Date: May 2025
Posts: 5
Would you have any other recommendations outside of noise removal? I'm also thinking about upscaling to an HD/4K resolution for a YouTube encode. Is there any benefit to upscaling from the raw capture instead, or would there be no difference in quality if the SD encode were upscaled correctly?
#7
Registered User
Join Date: May 2025
Posts: 5
I'm aware that the actual encode wouldn't be any more detailed in HD/4K than in SD, and would likely look identical.
#8
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 11,164
I've never uploaded anything to YT, hence I am staying out of this one.
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
#9
Registered User
Join Date: Feb 2021
Posts: 133
waxstone, if you have a good video card, try the following settings for TemporalDegrain2:
Code:
TemporalDegrain2(grainLevel=1, limitFFT=2, postFFT=4, degrainPlane=4, postDither=0) # for GPU
Code:
function TemporalDegrain2_fast (clip input, int "Strength", int "Y", int "U", int "V", int "RadT", int "BlkSz", int "Olap", \
                                float "Sharp", bool "PostDeHalo", float "PostMix", bool "GPU")
{
    Str     = Default(Strength, 3)        # Noise/grain suppression strength.
                                          # Strength for depth noise (< 2), low (3 ... 5), medium (6 ... 9), high (10 ... 14), veryhigh (> 15)
    Y       = Default(Y, 3)               # Luma plane to process. Value: 2 - copy from input, 3 - process
    U       = Default(U, Y)               # Chroma plane to process. Value: 2 - copy from input, 3 - process
    V       = Default(V, U)               # Chroma plane to process. Value: 2 - copy from input, 3 - process
    RadT    = Default(RadT, 1)            # Temporal Radius of frame analysis. Value 1 or 2
    BlkSz   = Default(BlkSz, 16)          # Block size for motion analysis. Bigger BlkSz quicker. Recommended values: 8, 16, 32
    OLap    = Default(OLap, BlkSz/2)      # The value of overlapping blocks on each other
    Sharp   = Default(Sharp, 0.4)         # Sharpening strength. Range: 0 ... 1
    DeHalo  = Default(PostDeHalo, false)  # Remove halo after sharpening. Value: true or false
    PostMix = Default(PostMix, 0)         # How much noise/grain will be returned. Range: 0 ... 1
    GPU     = Default(GPU, false)

    func_name = "TemporalDegrain2_fast: "
    Assert(Y == 2 || Y == 3, func_name+"Luma Y plane must be 2 or 3")
    Assert(U == 2 || U == 3, func_name+"Chroma U plane must be 2 or 3")
    Assert(V == 2 || V == 3, func_name+"Chroma V plane must be 2 or 3")
    Assert(Y == 3 || U == 3 || V == 3, func_name+"One of the planes Y, U, V must be 3")
    Assert(RadT == 1 || RadT == 2, func_name+"Temporal Radius of frame analysis must be 1 or 2")
    Assert(Sharp >= 0 && Sharp <= 1, func_name+"Sharpening strength must be between 0 and 1.0")
    Assert(PostMix >= 0 && PostMix <= 1, func_name+"The noise return value must be between 0 and 1.0")

    dPlane = Y==3 && (U==3 || V==3) ? 4 \
           : Y==3 && U==2 && V==2 ? 0 \
           : Y==2 && U==3 && V==2 ? 1 \
           : Y==2 && U==2 && V==3 ? 2 \
           : Y==2 && U==3 && V==3 ? 3 : 4
    pad = Max(Blksz, 8)
    pel = 1
    chr = U==3 || V==3 ? true : false

    # denoising 1st way
    dgLimit = GPU ? input.FFT3DGPU(sigma=Str, sigma2=Str*0.625, sigma3=Str*0.375, sigma4=Str*0.250, \
                        bt=RadT==1?3:5, plane=0, bw=BlkSz*2, bh=BlkSz*2, ow=OLap, oh=OLap) \
                  : input.neo_fft3d(sigma=Str, sigma2=Str*0.625, sigma3=Str*0.375, sigma4=Str*0.250, \
                        bt=RadT==1?3:5, Y=Y, U=U, V=V, bw=BlkSz*2, bh=BlkSz*2, ow=OLap, oh=OLap)
    dgSpatD = mt_makediff(input, dgLimit, Y=Y, U=U, V=V)

    # denoising 2nd way
    dgNR1 = GPU ? dgLimit.KNLMeansCL(h=0.1+Str/5.0, d=RadT, a=2, s=4, channels="Y", device_type="GPU") \
                : dgLimit.vsDeGrainMedian(modeY=0, limitY=Str, limitU=U==3?Str+2:0, limitV=V==3?Str+2:0)
    dgNR1D = mt_makediff(input, dgNR1, Y=Y, U=U, V=V)

    # combine 1st and 2nd ways
    dgDD = mt_lutxy(dgSpatD, dgNR1D, "x range_half - abs y range_half - abs < x y ?", Y=Y, U=U, V=V, use_expr=2)
    dgNR1x = mt_makediff(input, dgDD, Y=Y, U=U, V=V)

    # sharpen the edges only
    dgNR1x = Sharp > 0 ? mt_merge(dgNR1x, \
                 dgNR1x.RemoveGrain(17).Sharpen(Sharp), \
                 dgNR1x.RemoveGrain(12).mt_edge("prewitt", Y=3, U=2, V=2).mt_inpand(chroma="-128").Blur(1.58), \
                 Y=3, U=2, V=2) \
             : dgNR1x

    # denoising 3rd way
    sup = dgNR1x.Blur(1.58).Blur(1.58).MSuper(hpad=pad, vpad=pad, pel=pel, chroma=chr)
    MultiVec = MAnalyse(sup, multi=true, delta=RadT, blksize=BlkSz, overlap=OLap, search=5, dct=7, chroma=chr, truemotion=false, global=true)
    dgNR1xS = MSuper(dgNR1x, hpad=pad, vpad=pad, pel=pel, levels=1, chroma=chr)
    dgNR2 = MDegrainN(dgNR1x, dgNR1xS, MultiVec, RadT, plane=dPlane)

    # combine 1st, 2nd and 3rd ways
    dgDD2 = mt_lutxy(dgNR1x, dgNR2, "x range_half - abs y range_half - abs < x y ?", Y=Y, U=U, V=V, use_expr=2)

    # sharpening
    allD = Sharp > 0 ? mt_makediff(input.Sharpen(1), dgDD2.Blur(1.58).Blur(1.58)) : NOP()
    ssD  = Sharp > 0 ? mt_makediff(dgDD2, dgDD2.RemoveGrain(20)) : NOP()
    ssDD = Sharp > 0 ? mt_lutxy(ssD.Repair(allD, 12), ssD, "x range_half - abs y range_half - abs < x y ?", Y=Y, U=U, V=V, scale_inputs="allf") : NOP()
    out  = Sharp > 0 ? mt_lutxy(dgDD2, ssDD, "x range_half y - "+String(Sharp)+" * -", Y=3, U=2, V=2, scale_inputs="allf") : dgDD2

    # dehaloing
    m0  = DeHalo ? dgNR1x.mt_edge("prewitt", Y=3, U=2, V=2).mt_inpand(chroma="-128") : NOP()
    m1  = DeHalo ? mt_lutxy(out, m0, "y range_half > x y ?", Y=3, chroma="-128", scale_inputs="allf").mt_binarize(threshold=128).Blur(0.5) : NOP()
    out = DeHalo ? mt_merge(out, dgNR1x.Blur(0.1), m1, Y=3, U=2, V=2) : out

    PostMix > 0 ? mt_lutxy(out, input, "x x y - "+String(PostMix)+" * -", Y=Y, U=U, V=V, use_expr=2) : out
}
Code:
TemporalDegrain2_fast(Strength=3, Y=3, U=3, V=3, RadT=1, BlkSz=16, Sharp=0.4, PostDeHalo=false, PostMix=0, GPU=true)
#10
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 11,164
Arx1meD, your GPU arg is new compared to the script we were looking at.
Does your script have its own thread? If so, where? (It really should have its own thread.)
I did a D9 search on "TemporalDegrain2_fast", both as a thread title only and anywhere in any post, and found NOTHING!!! (Not even in this thread; what's wrong with D9 search?)
EDIT: Although we also did a Google search on
Code:
"TemporalDegrain2_fast" site:forum.doom9.org https://www.google.co.uk/search?q=%2...client=gws-wiz .
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 8th May 2025 at 21:31.
#12
Registered User
Join Date: Jul 2018
Posts: 1,324
After FFT3D and KNLMeansCL are executed on the GPU, the mvtools part can also be tested with motion estimation (ME) on the GPU, using this version of mvtools2: https://forum.doom9.org/showthread.php?t=183517
#13
Acid fr0g
Join Date: May 2002
Location: Italy
Posts: 2,944
If you have an Nvidia card, unless you have some specific requirement, I strongly suggest you use modern filters such as BM3D_CUDA.
If you carefully tune the parameters, it can outperform MVTools in both speed and quality, and it produces fewer artifacts too.
__________________
@turment on Telegram
#14
Registered User
Join Date: May 2025
Posts: 5
I was also confused as to why this doesn't have its own thread. Although I'm happy with the increased speed of my encodes, when you say it cleans "worse" than the original, that doesn't really fill me with confidence. Why this over the above-mentioned BM3D_CUDA method?

Last edited by waxstone; 9th May 2025 at 22:52.
#15
Acid fr0g
Join Date: May 2002
Location: Italy
Posts: 2,944
Code:
[Your source]
z_ConvertFormat(resample_filter="Spline64", pixel_type="yuv444ps")
BM3D_CUDA(sigma=6, radius=3, chroma=true, block_step=6, bm_range=12, ps_range=6)
BM3D_VAggregate(radius=3)
z_ConvertFormat(resample_filter="Spline64", dither_type="error_diffusion", pixel_type="YUV420P16")
fmtc_bitdepth(bits=10, dmode=8)
Prefetch(2,6)

But you have to manually tune the parameters according to your needs and GPU memory. Read the AviSynth+ wiki entry on BM3D_CUDA for details.
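As a rough, untuned illustration only, a chain like this could replace the TemporalDegrain2 line in the post #1 script while reusing that script's variable names; the BM3D_CUDA values are the untuned ones above, and converting back to 8-bit YV12 so the existing LSFmod/AddBorders steps stay unchanged is an assumption:
Code:
# hypothetical integration into the post #1 script (assumes avsresize, BM3DCUDA and LSFmod are installed)
denoised  = cropped.z_ConvertFormat(resample_filter="Spline64", pixel_type="yuv444ps")  # 4:4:4 float so chroma=true works
denoised  = denoised.BM3D_CUDA(sigma=6, radius=3, chroma=true, block_step=6, bm_range=12, ps_range=6)
denoised  = denoised.BM3D_VAggregate(radius=3)                                          # collapse the temporal radius back to single frames
denoised  = denoised.z_ConvertFormat(resample_filter="Spline64", dither_type="error_diffusion", pixel_type="YV12")  # back to 8-bit for LSFmod
sharpened = denoised.LSFmod(defaults="slow")                                            # then AddBorders / Prefetch as in post #1
Dropping back to 8-bit here is only so the rest of the post #1 chain can stay as-is; staying at higher bit depth, as in the chain above, preserves more precision if the later filters and the encoder can accept it.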
__________________
@turment on Telegram
#16
Registered User
Join Date: Oct 2024
Location: Nebula 71 Star
Posts: 53
Another solution would be to upload the SD clip and link to a download of the high-quality version.
Tags: 59.94, batch, broadcast, encoding, qtgmc