Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.


Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 15th August 2017, 05:15   #1  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
mClean spatio/temporal denoiser v3.2 (01 March 2018)

mClean by burfadel

Changelog: https://forum.doom9.org/showpost.php...46&postcount=3
Dependencies: https://forum.doom9.org/showpost.php...&postcount=334

Code:
# mClean spatio/temporal denoiser
# Version: 3.2 (01 March 2018)
# By burfadel

#  +++ Description +++
# Typical spatial filters work by removing large variations in the image on a small scale, reducing noise but also making the image less
# sharp or temporally stable. mClean removes noise whilst retaining as much detail as possible, as well as providing optional image enhancement.

# mClean works primarily in the temporal domain, although there is some spatial limiting
# Chroma is processed a little differently to luma for optimal results
# Input must be 8-bit Planar type (YV12, YV16, YV24) or their equivalents in 10, 12, 14, or 16 bits
# Chroma processing can be disabled with chroma=false

#  +++ Artifacts +++
# Spatial picture artifacts may remain, as removing them is a fine balance between removing the unwanted artifact whilst not removing detail
# Additional dering/dehalo/deblock filters may be required, but should ONLY be used if required due to the detail loss/artifact removal balance

#  +++ Sharpening +++
# Applies a modified unsharp mask to edges and major detected detail. Range of normal sharpening is 0-20, the default is 10. There are 4 additional
# settings, 21-24, that provide 'overboost' sharpening. Overboost sharpening is typically only suitable for high definition, high quality sources.
# Actual sharpening calculation is scaled based on resolution.

# +++ ReNoise +++
# ReNoise adds back some of the removed luma noise. Re-adding the original noise would be counterproductive, therefore ReNoise modifies this noise
# both spatially and temporally. The result of this modification is that the noise becomes much nicer and its impact on compressibility is greatly
# reduced. It is not applied to areas where sharpening occurs, as that would be counterproductive. Settings range from 1 to 20, default
# value is 14. The strength of renoise is affected by the amount of original noise removed and how this noise varies between frames. Its
# main purpose is to reduce the 'flatness' that occurs with any form of effective denoising.

# +++ Deband +++
# This will perceptibly improve the quality of the image by reducing the banding effect and adding a small amount of temporally stabilised grain to
# both luma and chroma. The settings are not adjustable as the default settings are suitable for most cases without having a large effect on
# compressibility. Auto balance uses AutoAdjust: it calculates statistics of the clip, stabilises temporally and adjusts luminance gain & colour
# balance of the noise reduced clip.
# 0=disabled, 1=deband only, 2=auto balance only, 3=both deband and auto balance, 4=deband and veed, 5=all

# +++ Depth +++
# This applies a modified warp sharpening to the image that may be useful for certain things, and can improve the perception of image depth. Default
# is 0 (disabled), and ranges up to 5. This function will distort the image; for animation a setting of 1 or 2 can be beneficial to improve lines.

# +++ Strength +++
# The strength of the denoising effect can be adjusted using this parameter. It ranges from a 20 percent denoising effect at strength 1 up to
# 100 percent of the denoising at strength 20 (default). This function works by blending a scaled percentage of the original image with the
# processed image.

# +++ Outbits +++
# Specifies the bits per component (bpc) for the output for processing by additional filters. It will also be the bpc that mClean will process.
# By default, mClean processes as 12 bits if the input is 8 bit, and converts back to 8 bit. If the input is 10 bits or higher no conversion is
# done unless outbits is specified and is different to the input bpc. If you output at a higher bpc, keep in mind that there may be limitations
# to what subsequent filters and the encoder may support.

#  +++ Required plugins +++
# Latest RGTools, MVTools2, Masktools2, f3kdb, Modplus, AutoAdjust
# Refer to https://forum.doom9.org/showpost.php?p=1834698&postcount=334

function mClean(clip c, int "thSAD", bool "chroma", int "sharp", int "rn", int "deband", int "depth", float "strength", int "outbits")
{

    defH        = Max (C.Height, C.Width/4*3)   # Resolution calculation for auto blksize settings
    thSAD       = Default (thSAD, 400)   # Denoising threshold
    chroma      = Default (chroma, true)   # Process chroma
    sharp       = Default (sharp, 10)   # Sharp multiplier
    rn          = Default (rn, 14)   # Luma ReNoise strength from 0 (disabled) to 20
    deband      = Default (deband, 4)   # Apply deband/veed and/or auto balance
    depth       = Default (depth, 0)   # Depth enhancement
    strength    = Default (strength, 20)   # Strength of denoising.
    outbits     = Default (outbits, BitsPerComponent(c))   # Output bits, default input depth
    calcbits    = BitsPerComponent(c) == 8 ? 12 : outbits

    Assert(isYUV(c)==true, """mClean: Supports only YUV formats (YV12, YV16, YV24)""")
    Assert(isYUY2(c)==false, """mClean: Supports only YUV formats (YV12, YV16, YV24)""")
    Assert(isYV411(c)==false, """mClean: Supports only YUV formats (YV12, YV16, YV24)""")
    Assert(sharp>=0 && sharp<=24, """mClean: "sharp" ranges from 0 to 24""")
    Assert(rn>=0 && rn<=20, """mClean: "rn" ranges from 0 to 20""")
    Assert(deband>=0 && deband<=5, """mClean: deband options 0 (disabled) to 5. Refer to description""")
    Assert(depth>=0 && depth<=5, """mClean: depth ranges from 0 (disabled) to 5""")
    Assert(strength>0 && strength<=20, """mClean: strength ranges from 1 (20%) to 20 (100%, default)""")
    Assert(outbits>=8 && outbits<=16, """mClean: "outbits" ranges from 8 to 16""")

padX       =  c.width%8 == 0 ? 0 : (16 - c.width%8)
padY       =  c.height%8 == 0 ? 0 : (16 - c.height%8)
c          =  padX+padY<>0 ? c.addborders(0, 0, padX, padY) : c
cy         =  ExtractY(c)
sc         =  defH>2800 ? 8 : defH>1400 ? 4 : defH>720 ? 2 : 1
blksize    =  sc==8 ? 8 : ((defH/sc)/360)>1.5 ? 16 : ((defH/sc)/360)>0.8 ? 12 : 8
overlap    =  blksize>=12 ? 6 : 2
lambda     =  775*(blksize*blksize)/64
sharp      =  sharp>20 ? sharp+30 : DefH<=2600 ? 16+round(defH*(34/2600)*sharp/20) : 50
depth      =  depth*2
depth2     =  -(depth+(depth/2))


# Denoise preparation
c           =  chroma ? Median (c, yy=false, uu=true, vv=true) : c

# Temporal luma noise filter
fvec1       =  bitspercomponent(c)>8 ? convertbits(c, 8) : undefined()
bvec1       =  bitspercomponent(cy)>8 ? convertbits(cy, 8) : undefined()
super       =  MSuper (BicubicResize(chroma ? defined(fvec1) ? fvec1 : c : defined(bvec1) ? bvec1 : cy, c.Width/sc, c.Height/sc),
            \  hpad=16/sc, vpad=16/sc, rfilter=4)
super2      =  MSuper (chroma ? defined(fvec1) ? fvec1 : c : defined(bvec1) ? bvec1 : cy, hpad=16, vpad=16, levels=1)

# --> Analysis
bvec4       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = true, delta = 4, blksize=blksize, overlap=overlap), sc),
            \  blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)
bvec3       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = true, delta = 3, blksize=blksize, overlap=overlap), sc),
            \  blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)
bvec2       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = true, delta = 2, blksize=blksize, overlap=overlap,
            \  badSAD=1100, lsad=1120), sc), searchparam=3, blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)
bvec1       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = true, delta = 1, blksize=blksize, overlap=overlap, badSAD=1500, badrange=27,
            \  search=5, lsad=980), sc), blksize=blksize, overlap=overlap, search=5, searchparam=3, lambda=lambda, thSAD=180)
fvec1       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = false, delta = 1, blksize=blksize, overlap=overlap, badSAD=1500, badrange=27,
            \  search=5, lsad=980), sc), blksize=blksize, overlap=overlap, search=5, searchparam=3, lambda=lambda, thSAD=180)
fvec2       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = false, delta = 2, blksize=blksize, overlap=overlap,
            \  badSAD=1100, lsad=1120), sc), searchparam=3, blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)
fvec3       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = false, delta = 3, blksize=blksize, overlap=overlap), sc),
            \  blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)
fvec4       =  MRecalculate(super2, MscaleVect (MAnalyse (super, isb = false, delta = 4, blksize=blksize, overlap=overlap), sc),
            \  blksize=blksize, overlap=overlap, lambda=lambda, thSAD=180)

# --> Bit depth conversion
c           =  chroma ? calcbits != BitsPerComponent(c) ? ConvertBits(c, calcbits) : c : c
super2      =  calcbits != BitsPerComponent(super2) ? ConvertBits(super2, calcbits) : super2
cy          =  calcbits != BitsPerComponent(cy) ? ConvertBits(cy, calcbits) : cy

# --> Applying cleaning
clean       =  MDegrain4(chroma ? c : cy, super2, bvec1, fvec1, bvec2, fvec2, bvec3, fvec3, bvec4, fvec4, thSAD=thSAD)
u           =  chroma ? ExtractU(clean) : nop ()
v           =  chroma ? ExtractV(clean) : nop ()
filt_chroma =  chroma ? CombinePlanes(c, mt_adddiff(u, clense(mt_makediff(ExtractU(c), u), reduceflicker=true)), mt_adddiff(v,
            \  clense(mt_makediff(ExtractV(c), v), reduceflicker=true)), planes="yuv", source_planes="yyy", sample_clip=c) : c
clean       =  chroma ? ExtractY(clean) : clean

# Post clean, pre-process deband
filt_chroma_bits =  BitsPerComponent(filt_chroma)
clean2           =  deband==0 ? nop() : ConvertBits(clean, 8)
noise_diff       =  deband==0 ? nop() : BitsPerComponent(c)==8 ? nop() : mt_makediff(convertbits(clean2, calcbits), clean)
depth_calc       =  deband==0 ? nop() : CombinePlanes (clean2, filt_chroma_bits>8 ? ConvertBits(filt_chroma, 8) : filt_chroma, planes="YUV",
                 \  source_planes="YUV", pixel_type="YV12")
depth_calc       =  deband==0 ? nop() : deband>1 ? deband==4 ? depth_calc : AutoAdjust (depth_calc, auto_gain=true, bright_limit=1.09, dark_limit=1.11,
                 \  gamma_limit=1.045, auto_balance=true, chroma_limit=1.13, chroma_process=115, balance_str=0.85) : depth_calc
depth_calc       =  deband==0 ? undefined() : deband<>2 ? f3kdb (depth_calc, preset=chroma?"high":"luma", range=16, grainY=38*(defH/540),
                 \  grainC=chroma?37*(defH/540):0) :depth_calc
clean            =  deband==0 ? clean : BitsPerComponent(c)==8 ? ExtractY (depth_calc) : mt_adddiff(ConvertBits(ExtractY
                 \  (depth_calc), calcbits), noise_diff)
depth_calc       =  deband==0 ? nop() : BitsPerComponent(depth_calc)<>filt_chroma_bits ? ConvertBits(depth_calc, filt_chroma_bits) : depth_calc
filt_chroma      =  deband==0 ? filt_chroma : deband>4 ? veed(depth_calc) : depth_calc

# Spatial luma denoising
clean2      =  removegrain(clean, 18)

# Unsharp filter for spatial detail enhancement
clsharp     =  sharp>0 ? (sharp>=51 && sharp<=54) ? mt_makediff(clean, gblur(clean2, (sharp-50), sd=3)) :
            \  mt_makediff(clean, blur(clean2, 1.58*(0.03+(0.97/50)*sharp))) : nop()
clsharp     =  mt_adddiff(clean2, repair(clense(clsharp), clsharp, 12))

# If selected, combining ReNoise
noise_diff  =  mt_makediff (clean2, cy)
clean2      =  rn>0 ? mt_merge(clean2, mergeluma (clean2, mt_adddiff(clean2, tweak(clense(noise_diff, reduceflicker=true), cont=1.008+(0.0032*(rn/20)))),
            \  0.3+(rn*0.035)), mt_lut (overlay(clean, invert(clean), mode="darken"), "x 32 scaleb < 0 x 45 scaleb > range_max 0 x 35 scaleb - range_max 32
            \  scaleb 65 scaleb - / * - ? ?")) : clean2

# Combining spatial detail enhancement with spatial noise reduction using prepared mask
noise_diff  =  mt_invert(mt_binarize(noise_diff))
clean2      =  sharp>0 ? mt_merge (clean2, clsharp, overlay(noise_diff, mt_edge(clean, "prewitt"), mode="lighten")) :
            \  mt_merge (clean2, clean, overlay(noise_diff, mt_edge(clean, "prewitt"), mode="lighten"))

# Converting bits per channel and luma format
filt_chroma =  outbits < BitsPerComponent(filt_chroma) ? ConvertBits(filt_chroma, outbits, dither=1) : ConvertBits(filt_chroma, outbits)
clean2      =  outbits < BitsPerComponent(clean2) ? ConvertBits(clean2, outbits, dither=1) : ConvertBits(clean2, outbits)
c           =  BitsPerComponent(c) <> BitsPerComponent(clean2) ? ConvertBits(c, BitsPerComponent(clean2)) : c

# Combining result of luma and chroma cleaning
output      =  CombinePlanes(clean2, filt_chroma, planes="YUV", source_planes="YUV", sample_clip=c)
output      =  strength<20 ? Merge(c, output, 0.2+(0.04*strength)) : output
depth_calc  =  depth>0 ? defh>640 ? bicubicresize(output, 720, 480) : output : nop()
output      =  depth>0 ? mt_adddiff(output, spline36resize(mt_makediff(awarpsharp2(depth_calc, depth=depth2, blur=3),
            \  awarpsharp2(depth_calc, depth=depth, blur=2)), output.width, output.height)) : output
output      =  padX+padY<>0 ? output.crop(0, 0, -padX, -padY) : output

return output
}
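For readers unfamiliar with the technique mentioned in the Sharpening notes: an unsharp mask amounts to original + strength * (original - blurred). mClean's actual version is mask-limited and resolution-scaled; the toy 1-D Python sketch below (names and the 3-tap blur are my own, purely for illustration) just shows the principle, including the overshoot it creates at edges.

```python
def unsharp_1d(pixels, strength=1.0):
    """Plain unsharp mask: sharpened = original + strength * (original - blurred).
    mClean's 'modified' version differs (edge/detail masking); this is only the core idea."""
    n = len(pixels)
    # 3-tap box blur with edge clamping
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    return [p + strength * (p - b) for p, b in zip(pixels, blurred)]

edge = [10, 10, 10, 200, 200, 200]
print(unsharp_1d(edge))  # overshoot/undershoot appears on both sides of the edge
```

The over/undershoot around the edge is what makes the edge look crisper, and also why oversharpening produces halos, as discussed later in the thread.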

Last edited by burfadel; 1st March 2018 at 10:18. Reason: Updated - 3.2 - 01 March 2018
Old 15th August 2017, 05:27   #2  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
Besides smart block size selection, is this different than the first version?

As I said, I'd be very interested in a script that includes optional correction of other types of defects. I don't know what order of execution gives best results.

Here's a question for you.

How would you define, in technical terms:
- noise
- ringing
- blocking
- banding

... and where do you draw the line between what is and isn't each of the above?
Old 15th August 2017, 13:31   #3  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Quote:
Originally Posted by MysteryX View Post
How would you define, in technical terms:
- noise
- ringing
- blocking
- banding

... and where do you draw the line between what is and isn't each of the above?
noise: any unwanted component in the signal. Concretely, noise is generally modeled as a random signal that follows a Gaussian distribution in most denoising algorithms, and this random signal can be canceled out by various approaches. Bilateral assumes that pixel-wise weighted averaging can cancel out the signal. DFT assumes that if you extract a piece of pattern from the image, you will notice something fishy in that pattern; it's an intra-pattern based approach. Pixel and block matching assume that noise can be canceled by averaging similar patterns; it's an inter-pattern based approach.
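The inter-pattern point above, that averaging similar patterns cancels zero-mean Gaussian noise, is easy to demonstrate numerically. A toy Python sketch (a simulation of my own, not any of the actual algorithms):

```python
import random

# Simulate 64 'matched blocks' of the same true pixel value plus Gaussian noise;
# averaging them shrinks the error roughly as 1/sqrt(n).
random.seed(1)
true_value, sigma, n = 100.0, 10.0, 64
samples = [true_value + random.gauss(0.0, sigma) for _ in range(n)]
estimate = sum(samples) / n
print(abs(estimate - true_value))  # far smaller than sigma
```

This is the statistical basis behind MDegrain-style temporal averaging: the more well-matched blocks you can average, the more of the zero-mean noise cancels.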

ringing: https://en.wikipedia.org/wiki/Gibbs_phenomenon
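The Gibbs phenomenon linked above can be shown in a few lines: truncating a step's Fourier series leaves an overshoot near the edge (about 9% of the jump in the limit), which is exactly the ringing seen after lossy DCT coding or sinc-based resizing. A toy Python sketch (my own, just to visualise the effect):

```python
import math

def square_partial(x, harmonics=50):
    # Partial Fourier series of a unit square wave, truncated to `harmonics` terms.
    return sum(4 / math.pi * math.sin((2 * k + 1) * x) / (2 * k + 1)
               for k in range(harmonics))

# The true signal is 1.0 everywhere on (0, pi), but the truncated sum
# overshoots near the discontinuity at x = 0 - and the overshoot does
# not vanish as more harmonics are added.
peak = max(square_partial(i / 10000 * math.pi) for i in range(1, 5000))
print(peak)  # ~1.18, i.e. roughly a 9% overshoot of the jump
```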

blocking: definitely has nothing to do with low bitrate; the real reason it happens with some obsolete codecs is that macroblocks in those codecs do not share any overlap

banding: lack of quantizing precision. 8-bit sucks; it won't happen if the entire processing chain has a higher precision, say, 32-bit float
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
Old 15th August 2017, 06:21   #4  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
Changes:

v3.2
- small update to make use of changes in MVTools2 2.7.25 *** Please update dependencies ***
- analysis will always be done in 8 bit regardless of input depth. This will give a small speed bump with no quality loss for videos with input bitdepth greater than 8
- fixed deband=0 issue

v3.1
- minor tweaks to analysis
- minor tweaks to luma and chroma renoise

v3.0
- considerable amount changed:
- reverted to MDegrain chroma denoising with different handling of chroma
- heavily revised luma cleaning
- sharp scales to resolution based on the multiplier (now 0-24)
- renoise tweaked
- new masks
- remove cstr setting
- added 'depth' function (post-processing)
- added strength function; remixes a portion of the original image back with the processed image, scaled from 20 percent to 100 percent at strength 20 (default)
- no longer requires FFT3DFilter or fftw dependency
- requires modplus for chroma processing (in addition to existing features) http://www.avisynth.nl/users/vcmohan...s/modPlus.html
- some other tweaks and changes

v2.3
- modified deband features so that luma bit depth difference is only applied when source material is greater than 8 bit
- resolved minor artifacts that were occasionally produced when there were high contrast differences and flat surfaces
- repurposed dctfilter

v2.2
- corrected an oversight regarding chroma processing of deband features, it was applied to luma but not to chroma
- resolved an associated script bug that didn't affect anything since chroma deband wasn't applied
- luma bit depth detail now retained even when using deband features

v2.1
- processing of denoising now undertaken in 12 bits (or whatever is specified for outbits if greater than 8), analysis still processed at source depth.
- added the use of veed by VCMohan. Deband option 4 is now default (deband+veed); deband option 5 is to use deband, veed, and level adjustment (autoadjust)
- modified renoise
- slight adjustment to the motion mask
- made changes to the temporal stabilisation of renoise and sharpening (and made a correction to the sharpening stabilisation)
- added non-8 bit workaround for modern high bit depth incompatible filters (deband from f3kdb and autoadjust)
- tweaked several parameters
- resolved bit depth issue when using different combinations of input depths and outbits

v2.0
- improved debanding feature, now ranges 0-3. 0=disabled, 1=deband (default), 2=levels/saturation auto balance adjustment only, 3=both
- levels/saturation is automatically adjusted using the Autolevels plugin, only required if manually enabled - https://forum.doom9.org/showthread.php?t=167573
- changed sharpening setting from 'enh' to 'sharp' to better distinguish what it is. 'enh' will be used later for another name appropriate feature
- refactored sharpening; it now increases a little less with higher resolutions
- added chroma renoise when chroma is enabled (default), non-adjustable
- fixed issue with block sizes on higher resolutions
- fixed issue with the passmask used as part of processing; appears Masktool2 may have a bug with the value scaling feature
- tweaked noise processing parameters

v1.9
- added option to disable chroma processing, default is to process chroma
- added an option to change the strength of chroma processing
- added debanding, default is enabled
- adjusted blocksize parameters
- tweaked some other settings

v1.8
- speed increase and reduced memory use for all but the lowest resolutions
- improved quality
- removed cpu option for FFT3DFilter, as any more than 4 threads proves no faster and, for high thread counts, appears to run slower

v1.7c
- slight adjustments and slightly better speed

v1.7b
- modified analysis for MDegrain for performance and quality

Last edited by burfadel; 1st March 2018 at 10:18. Reason: Added v3.2 changelog info
Old 15th August 2017, 07:14   #5  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
mt_lut evaluates an expression on pixels. It basically allows implementing algorithms without having to write a DLL nor write assembly code. It uses a LUT table for optimization, but that doesn't work for 16-bit videos. Because noise reduction algorithms deal with subtleties and then affect the rest of the script, I'd recommend running it in 16-bit, and mt_lut then isn't a good option. If you need custom algorithms, creating a DLL is always a good option, like I did with FrameRateConverter to detect stripe patterns.
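For context on why the LUT optimisation stops scaling: mt_lut precomputes the expression over every possible pixel value, so per-pixel work becomes a table lookup. A hypothetical Python sketch of the idea (names are mine, not masktools API):

```python
def make_lut(fn, bits=8):
    # Evaluate fn once per possible value: 256 entries at 8 bits,
    # 65536 at 16 bits - and a two-clip lut needs 65536**2 entries,
    # which is the scaling problem described above.
    return [fn(v) for v in range(1 << bits)]

def apply_lut(lut, plane):
    # Per-pixel application is then a plain table lookup.
    return [lut[v] for v in plane]

invert = make_lut(lambda v: 255 - v)
print(apply_lut(invert, [0, 128, 255]))  # -> [255, 127, 0]
```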

Banding is not related to blocking. Banding is due to rounding where each value appears as a distinct band. To avoid banding, we normally use dithering. No dithering leads to banding.
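A toy Python demonstration of that mechanism, quantizing a smooth gradient with and without dithering (all parameters here are arbitrary choices of mine):

```python
import random

def quantize(values, levels=8, dither=False):
    # Quantize a smooth 0..1 ramp to a few levels. Without dither,
    # neighbouring inputs collapse into flat bands with hard edges;
    # adding small random noise before rounding breaks those band
    # edges up - classic dithering.
    random.seed(0)
    out = []
    for v in values:
        if dither:
            v += (random.random() - 0.5) / levels
        v = max(0.0, min(1.0, v))
        out.append(round(v * (levels - 1)) / (levels - 1))
    return out

ramp = [i / 99 for i in range(100)]
banded = quantize(ramp)
dithered = quantize(ramp, dither=True)
print(len(set(banded)))  # the smooth ramp survives as only 8 distinct values
```

The dithered output uses the same 8 levels, but the transitions between them are scattered instead of forming clean vertical band edges, which is why the eye no longer sees bands.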
Old 15th August 2017, 12:08   #6  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Quote:
Originally Posted by MysteryX View Post
mt_lut evaluates an expression on pixels. It basically allows implementing algorithms without having to write a DLL nor write assembly code. It uses a LUT table for optimization, but that doesn't work for 16-bit videos. Because noise reduction algorithms deal with subtleties and then affect the rest of the script, I'd recommend running it in 16-bit, and mt_lut then isn't a good option. If you need custom algorithms, creating a DLL is always a good option, like I did with FrameRateConverter to detect stripe patterns.

Banding is not related to blocking. Banding is due to rounding where each value appears as a distinct band. To avoid banding, we normally use dithering. No dithering leads to banding.
no, pixel-wise evaluations are just literally, "pixel-wise", u got no access to the neighbor pixels and that renders it much less useful than a dynamic library

a Gaussian blur with a radius of 1 is simply like
Code:
dstp[y][x] = (    srcp[y-1][x-1] + 2 * srcp[y-1][x] +     srcp[y-1][x+1]
            + 2 * srcp[y][x-1]   + 4 * srcp[y][x]   + 2 * srcp[y][x+1]
            +     srcp[y+1][x-1] + 2 * srcp[y+1][x] +     srcp[y+1][x+1])
            / (1 + 2 + 1 + 2 + 4 + 2 + 1 + 2 + 1);
for a c++ plugin

now how is that gonna work for ur fancy LUT or whatever?

well, another fun fact is that it's actually possible in vaporsynth with Expr
Code:
topleft = core.std.AddBorders(core.std.CropRel(clp, 0, 1, 0, 1), 1, 0, 1, 0)
topcenter = ...
topright = ...
adjacentleft = ...
center = clp
adjacentright = ...
bottomleft = ...
bottomcenter = ...
bottomright = ...

clp = core.std.Expr([topleft, topcenter, topright, adjacentleft, center, adjacentright, bottomleft, bottomcenter, bottomright],
"x y 2 * + z + a 2 * + b 4 * + c 2 * + d + e 2 * + f + 1 2 + 1 + 2 + 4 + 2 + 1 + 2 + 1 + /")
ain't that pretty, eh? that's why it's only possible but not practical

and I'm damn sure it's not even possible in avisynth
edit:
or maybe possible with y8rpn, but you see the point, it's nasty
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated

Last edited by feisty2; 15th August 2017 at 12:19.
Old 15th August 2017, 12:58   #7  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Quote:
Originally Posted by WolframRhodium View Post
why not use mt_convolution() / mt_luts() / core.std.Convolution()?
mt_lut is a pixel-wise evaluator, std.Convolution is NOT
the toy in vaporsynth corresponding to mt_lut(xyz) should be std.Expr (function-wise, they do things differently tho)
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
Old 15th August 2017, 13:05   #8  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Quote:
Originally Posted by WolframRhodium View Post
Your code is the same as
Code:
core.std.Convolution(matrix=[1, 2, 1, 2, 4, 2, 1, 2, 1])
or simply
Code:
core.rgvs.RemoveGrain(11)
right?

So why you write such complicated code?
to show that a pixel-wise evaluator is far from enough to code any sophisticated algorithm

std.Convolution is NOT A PIXEL-WISE EVALUATOR, it is NOT CORRESPONDING TO mt_lut, stop distracting me

the point is there're limitations for pixel-wise evaluators, not how to perform a Gaussian blur quick and fast, that Gaussian blur thing is just a demonstration of the point
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated

Last edited by feisty2; 15th August 2017 at 13:11.
Old 15th August 2017, 13:14   #9  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Quote:
Originally Posted by WolframRhodium View Post
I never said that. Maybe I should stop speaking because this is an avs thread and it seems that we both misunderstand each other.
Quote:
the point is there're limitations for pixel-wise evaluators, not how to perform a Gaussian blur quick and fast, that Gaussian blur thing is just a demonstration of the point
did you even read my previous posts?
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
Old 15th August 2017, 08:52   #10  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
Original third post:
The basic noise filtering is similar to the original script, although I did make a small mistake that limited part of the effectiveness of that original script. The difference is what this version of the script does with the outputs of the different filtering. It could also potentially allow for deringing and dehalo reutilising some of the calculations, and this to some extent can be done for deblocking as well, but I'd have to work out the best way of making it effective. Banding is the hard one though, there would be probably no benefit to include that in the script over running a separate filter.

I would describe temporal noise as small variations between each frame on the scale of a few pixels. Spatial noise is small variations on the scale of a few pixels compared to adjacent pixels, which don't change too much between frames. This is much harder to remove without affecting actual detail, because it's a maths-based solution, not a perceptual one where we look at it and determine that it shouldn't be there. Temporal denoising is therefore IMO potentially the more useful of the two; historically, though, temporal denoising was considerably slower and not practical. A small amount of spatial denoising can be beneficial though, if you can work out how it should be applied.

Ringing is the small 'ring-like' artifacts typically next to areas of large contrast difference, caused by resizing or compression. Halos are a brightening of an edge, typically the outside edge, due to contrast changes of an object, and can be the result of oversharpening. Blocking is the visible edges of a block, typically caused by not enough bandwidth or the use of an inefficient (by modern standards) codec; in effect it is large-scale pixellation. Banding is related to blocking: it's the visualisation of the block boundaries on a flat area that has a gradient, again caused by encoder or transfer inefficiencies.

No promises on a timeframe for the ringing, halo, or deblocking filter, or whether it can be done, as it relates to a concept I have in mind. I think for best results I might have to use mt_lut functions, and to be honest I don't know how to use those. The documentation for masktools is a little lacking regarding most of its power features. There's a function called mt_gradient(), but I'm not sure whether it actually does what it sounds like... and again, no idea how to use it!


-------------------

Ah ok. Makes sense about the banding; I thought that rounding occurs on a per-block basis, causing the edges of the blocks to become pronounced over flat areas that have a gradient. Does the banding occur mid block? As for blocking, do you know if DCTFilter could be used for that, to ascertain block boundaries etc?

Updated version here:
https://github.com/chikuzen/DCTFilter/releases

I do have an idea for deringing and dehalo, however it would have to wait until the weekend to even contemplate sitting down and nutting it out. I'd make it an option to enable, likewise with any deblocking.

Last edited by burfadel; 29th October 2017 at 01:16.
Old 15th August 2017, 20:24   #11  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
Quote:
Originally Posted by Mounir View Post
that's what i get, any idea?
i can't find the plugin veed anywhere
nevermind, i found modplus(which contain veed i think)

now i get:
manalyse blocks must be 4x4, 8x4, 16x2 blabla
Support for additional block sizes was added in one of Pinterf's latest versions of MVTools2.

Quote:
Originally Posted by feisty2 View Post
no, pixel-wise evaluations are just literally, "pixel-wise", u got no access to the neighbor pixels and that renders it much less useful than a dynamic library
I didn't say you could write C++ plugins with mt_lut. I said that anything you write with mt_lut can be written as a plugin.

Quote:
Originally Posted by burfadel View Post
Ah ok. Makes sense about the banding, I thought that rounding occurs on a per block case causing the edges of the blocks to become pronounced over flat areas that have a gradient. Does the banding occur mid block? As for blocking, do you know if DCTFilter could be used for that, to ascertain block boundaries etc?
I just realized banding mostly occurs for dark and bright scenes because of the 2.2 gamma curve. To preserve details, you would only work on Luma and only on value above/below a certain threshold where the difference between adjacent values is visible to the naked eye, leaving all mid-range values intact.

Banding happens only in specific scenarios:
- dark or bright scenes
- flat areas

If I was to implement a debander, I'd scan Luma horizontally line by line for flat areas that degrade by 1 or 2, and mark the division point between 2 flat areas of adjacent values, and mark the flat areas themselves. Repeat vertically.

Then I'd transform that pattern grid to detect significant zones, discarding detections on single lines. Similar to what I did with stripe detection.

Then, I could apply blurring/dithering/something to soften these edges. Since we're talking about flat Luma areas, there's not really any loss of data. In terms of order of execution, this should happen after denoising.
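A hypothetical Python sketch of the horizontal scan just described (all names, the run-length minimum, and the value-difference threshold are my own assumptions, not an implementation):

```python
def flat_runs(row, min_len=4):
    # Maximal runs of identical values in one luma row: the 'flat areas'.
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            if i - start >= min_len:
                runs.append((start, i, row[start]))
            start = i
    return runs

def band_edges(row, min_len=4):
    # A candidate band boundary: two adjacent flat runs whose values
    # differ by only 1 or 2 (threshold chosen arbitrarily here).
    runs = flat_runs(row, min_len)
    return [runs[i + 1][0] for i in range(len(runs) - 1)
            if runs[i][1] == runs[i + 1][0]
            and abs(runs[i][2] - runs[i + 1][2]) <= 2]

row = [16] * 10 + [17] * 10 + [18] * 10
print(band_edges(row))  # -> [10, 20]
```

The vertical pass and the 2-D "significant zone" grouping would sit on top of this; a large value step (a real edge) is deliberately not reported, so detail is left alone.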

How does this compare to other debanding methods?


So what are the recommended plugins for each type of defect? In which order should they be run?
- Denoise: MClean is doing good so far
- Dering: I got best results with HQDeringmod.avsi (very complex script)
- Deblock: DCTFilter
- Deband: ?

Last edited by MysteryX; 15th August 2017 at 20:35.
Old 15th August 2017, 12:49   #12  |  Link
Mounir
Registered User
 
Join Date: Nov 2006
Posts: 718
Quote:
there is no function named veed
that's what i get, any idea?
i can't find the plugin veed anywhere
nevermind, i found modplus(which contain veed i think)

now i get:
manalyse blocks must be 4x4, 8x4, 16x2 blabla

Last edited by Mounir; 15th August 2017 at 12:56.
Old 15th August 2017, 14:42   #13  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
Quote:
Originally Posted by Mounir View Post
that's what i get, any idea?
i can't find the plugin veed anywhere
nevermind, i found modplus(which contain veed i think)

now i get:
manalyse blocks must be 4x4, 8x4, 16x2 blabla
What version of Avisynth and MVTools are you using? The script was written under AviSynth+ r2508, and Pinterf's latest updated MVTools and MaskTools. Are you using any custom options for blocksize? The auto blocksize calculation was from Mysteryx's FrameRateConverter.

Last edited by burfadel; 15th August 2017 at 14:47.
Old 15th August 2017, 21:01   #14  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
I just tried your script. The first version was good; perhaps it was a lucky shot.

This version is too sharp for me.

The other version you had me try was too blurry.
Old 16th August 2017, 02:26   #15  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,198
I tried it, got the same "veed" error as everyone else, then downloaded ModPlus, but the script crashed right away with the error message: "An out-of-bounds memory access (access violation) occurred in module 'fftw3'...reading address FFFFFFFF."

So, no go here.

Even if I could get it to work, it looks to me like most of the noise reduction simply comes from MDegrain2 in this line from the script:
Code:
clean     =    c.MDegrain2 (super, bvec1, fvec1, bvec2, fvec2, thSAD=thSAD, plane = 0)
There is also some selective (masked) sharpening, which may or may not be a good thing.

So, while I read what you said about your objectives, I am not sure you have created anything that is much different from what already existed.

Last edited by johnmeyer; 16th August 2017 at 02:35. Reason: added error message
Old 16th August 2017, 03:35   #16  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
Here's the original version that I got good results with. Less loss of details than with KNLMeansCL.
Code:
# MClean basic script
# Mask from bennynihon https://forum.doom9.org/showthread.php?p=1689444#post1689444
# Remaining script by burfadel altered from generic information
# Basics for this script is to remove grain whilst retaining as much information as possible
# The script should also be relatively fast, even without Masktools2 multithreading (disabled due to possible MT bug)
# Chroma is processed via a different method to luma for optimal results
# Requires RGTools, Modplus (Veed, for part of chroma filter), MVTools2, Masktools2, FFT3DFilter
 

function MClean(clip c, int "thSAD", int "blksize", int "blksizeV", int "overlap", int "overlapV", int "cblksize", int "cblksizeV", int "coverlap", int "coverlapV", int "cpu")
{
thSAD     = Default(thSAD, 350) # Denoising threshold
blksize   = Default(blksize, 16) # Horizontal block size for luma
blksizeV  = Default(blksizeV, blksize) # Vertical block size for luma, default same as horizontal
overlap   = Default(overlap, 4) # Block overlap
overlapV  = Default(overlapV, overlap) # Overlap for vertical luma blocks, default same as horizontal

cblksize  = Default(cblksize, 16) # Horizontal block size for chroma
cblksizeV = Default(cblksizeV, cblksize) # Vertical block size for chroma, default same as horizontal
coverlap  = Default(coverlap, cblksize/4) # Overlap for horizontal chroma blocks, default quarter cblksize
coverlapV = Default(coverlapV, cblksizeV/4) # Overlap for vertical chroma blocks, default quarter cblksizeV
cpu       = Default(cpu, 4) # Threads for FFT3DFilter


# Masks
LumaMask=mt_binarize(c, threshold=64, upper=true).greyscale().BilinearResize((c.width/16)*2, (c.height/16)*2).BilinearResize(c.width,c.height).mt_binarize(threshold=254)
EdgeMask=mt_edge(c, mode="prewitt",thy1=0,thy2=16).greyscale().mt_binarize(threshold=16, upper=true).BilinearResize((c.width/16)*2, (c.height/16)*2).BilinearResize(c.width,c.height).mt_binarize(threshold=254)
GrainMask=mt_logic(LumaMask,EdgeMask,mode="and")
DegrainMask=GrainMask.mt_invert()

# Chroma filter
filt_chroma=fft3dfilter(veed(c), plane=3, bw=cblksize, bh=cblksizeV, ow=coverlap, oh=coverlapV, bt=5, sharpen=0.5, ncpu=cpu, dehalo=0.2, sigma=2.35)

# Luma Filter
super = c.MSuper(rfilter=4, chroma=false,hpad=16, vpad=16)
bvec2 = MAnalyse(super, chroma=false, isb = true, delta = 2, blksize=blksize, blksizeV=blksizeV, overlap=overlap, overlapV=overlapV, search=5, searchparam=7)
bvec1 = MAnalyse(super, chroma=false, isb = true, delta = 1, blksize=blksize, blksizeV=blksizeV, overlap=overlap, overlapV=overlapV, search=5, searchparam=4)
fvec1 = MAnalyse(super, chroma=false, isb = false, delta = 1, blksize=blksize, blksizeV=blksizeV, overlap=overlap, overlapV=overlapV, search=5, searchparam=4)
fvec2 = MAnalyse(super, chroma=false, isb = false, delta = 2, blksize=blksize, blksizeV=blksizeV, overlap=overlap, overlapV=overlapV, search=5, searchparam=7)
Clean = c.MDegrain2(super, bvec1, fvec1, bvec2, fvec2, thSAD=thSAD, plane = 0)

#Luma mask merge
filt_luma = c.mt_merge(Clean, DegrainMask, U=1, V=1)

# Combining result of luma and chroma cleaning
output = mergechroma(filt_luma,filt_chroma)

return output
}
Now that I look at it, though, it looks like SMDegrain, except that it uses FFT3DFilter for chroma.
Old 16th August 2017, 06:40   #17  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
Inevitably all scripts will look similar; the difference is how the results are treated afterwards. MDegrain is purely temporal; I have added detail-independent spatial noise reduction, as well as noise-independent sharpening to recover detail, both temporally stabilised. A type of derainbow function could easily be added as an option at very little performance cost, once ideal preset settings are worked out. Deringing and dehalo could also be implemented utilising the existing masks.

@MysteryX, I'll add an adjustment parameter for the detail sharpening strength.

Do people find the sharpening too strong on other sources? I'll reduce the defaults to make the sharpening more neutral.

I'll remove veed, seeing as it's an AviSynth+ only filter; the AviSynth equivalent is deveed. These can still be run separately. I'll also update the info regarding the need for fftw.
Old 16th August 2017, 09:03   #18  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,173
There's nothing I hate more than over-sharpened videos. Much better when it is sharp but neutral. It's often a fine line though. Also, sharpening amplifies noise and artifacts.
Old 16th August 2017, 11:25   #19  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,234
Quote:
Originally Posted by MysteryX View Post
There's nothing I hate more than over-sharpened videos. Much better when it is sharp but neutral. It's often a fine line though. Also, sharpening amplifies noise and artifacts.
That's true. The sharpening shouldn't really affect noise, although it could make some forms of artifact stand out more until that part is sorted out. At the moment the script focuses almost entirely on noise removal with the intention of not removing detail. I'll ease back the sharpening by half and make it adjustable with a parameter, scaled from roughly a tenth of the current strength up to a bit more at 100, with the default at, say, 40. I'll take a look at it shortly and update the script in the first post.
Old 16th August 2017, 11:47   #20  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,134
Sharpening is not how you magically resurrect lost details, especially with a cheap USM like that.
I'd say you're better off trying some fancy new toys like a denoising autoencoder and seeing how it goes.
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated