Doom9's Forum > Capturing and Editing Video > Avisynth Development
Old 23rd September 2018, 01:33   #41  |  Link
Groucho2004
 
 
Join Date: Mar 2006
Location: A wretched hive of scum and villainy
Posts: 4,357
Quote:
Originally Posted by StainlessS View Post
q) Is the registry always automatically cleared, upon uninstall ? (I doubt it)
I'm pretty sure that this is the case for all current installers (AVS 2.6, AVS+). The Universal Installer also cleans up properly after uninstall.

If the registry entry is just an orphan and avisynth.dll is not present, the application loading Avisynth will (well, should) throw an appropriate error.
__________________
Groucho's Avisynth Stuff

Last edited by Groucho2004; 23rd September 2018 at 09:25.
Old 23rd September 2018, 02:40   #42  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 6,949
OK, I was just going by the fact that many other apps leave their keys in situ on uninstall.
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
Old 23rd September 2018, 09:10   #43  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,562
Quote:
Originally Posted by zorr View Post
That certainly could be done. The only difficulty in this would be that Avisynth would have to simulate the upscaling of the playback device (or is it upscaled with Avisynth during the playback?).
It would be simplest to upscale with Avisynth during playback, maybe just use a common LanczosResize or Spline to make sure the upscale remains as sharp as possible without introducing other artifacts.

Quote:
I haven't looked at Vapoursynth so I can't tell for sure. But basically what's needed for the optimizer to work is:
1) a way to run a script programmatically from the optimizer
2) a way to specify which values the optimizer is changing in the script
3) the script has to calculate a result: quality / runtime / any other interesting value
4) the script has to write the result into a specific file

I'm very much interested in making this happen but hopefully someone who knows Vapoursynth can help me figure out the steps above.
Myrsloik, the author of Vapoursynth, is very knowledgeable, so I'll post in the Vapoursynth thread to point people here. At least point 1 works out of the box: vspipe can be used to pipe the output of a script, and it's commonly used with encoders. The others should also be possible, as VS is Python-based and thus quite flexible in itself.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Old 23rd September 2018, 23:29   #44  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
I wanted to get rid of the polling, so instead I tried waiting for the avsr process to finish before reading the output file. That works, but it is about 10% slower than the polling method (when running a validation with a fast script).

Another thing I'm working on is improved error handling. The latest avsr writes script error messages to the standard error stream, which I can read from the optimizer. However, the same 10% penalty appears when I read avsr's output streams fully before checking the output file. That makes sense, since reading the streams until nothing more is coming is pretty much the same as waiting for the process to finish. So what I'll try next is reading the streams while polling the output file; that way I can hopefully keep both the improved error handling and the faster execution speed.

Apologies for the slow progress. I recently bought a house and now I have to plan the moving and renovations...

Last edited by zorr; 23rd September 2018 at 23:31. Reason: wrong word used
Old 24th September 2018, 13:11   #45  |  Link
Myrsloik
Professional Code Monkey
 
 
Join Date: Jun 2003
Location: Ikea Chair
Posts: 1,956
Quote:
Originally Posted by zorr View Post
That certainly could be done. The only difficulty in this would be that Avisynth would have to simulate the upscaling of the playback device (or is it upscaled with Avisynth during the playback?).



I haven't looked at Vapoursynth so I can't tell for sure. But basically what's needed for the optimizer to work is:
1) a way to run a script programmatically from the optimizer
2) a way to specify which values the optimizer is changing in the script
3) the script has to calculate a result: quality / runtime / any other interesting value
4) the script has to write the result into a specific file

I'm very much interested in making this happen but hopefully someone who knows Vapoursynth can help me figure out the steps above.
1. Use vspipe if all you need is to calculate a frame statistic
2. vspipe has --arg key=value which can set variables inside the script
3. per frame or for the whole thing? not sure what you need here
4. depends on the answer in 3, I plan to add the possibility to dump all frame properties as json in a future vspipe release
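On the optimizer side, invoking vspipe with the chosen parameter values could look roughly like this sketch (`build_vspipe_cmd` and `run_script` are hypothetical helper names; the repeated `--arg key=value` option is the mechanism described above, and `-` as the output file sends the clip to stdout):

```python
import subprocess

def build_vspipe_cmd(script, out_file, args=None, vspipe="vspipe"):
    """Build a vspipe command line, passing optimizer-chosen values
    into the script via repeated --arg key=value options."""
    cmd = [vspipe]
    for key, value in (args or {}).items():
        cmd += ["--arg", f"{key}={value}"]
    cmd += [script, out_file]
    return cmd

def run_script(script, out_file, args=None):
    # the optimizer would launch this, then read the statistics the
    # script (or a future vspipe version) wrote out
    return subprocess.run(build_vspipe_cmd(script, out_file, args)).returncode
```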
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet
Old 25th September 2018, 21:31   #46  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Quote:
Originally Posted by Myrsloik View Post
1. Use vspipe if all you need is to calculate a frame statistic
Ok, I'll take a look at vspipe. Is there a simple vapoursynth script example I could use to test it with?

Quote:
Originally Posted by Myrsloik View Post
2. vspipe has --arg key=value which can set variables inside the script
That sounds much more elegant than the way it's done with Avisynth. But I forgot to mention that usually you also need to define dependencies between variables (for example this value cannot be larger than that other value). Those definitions would still have to be inside the script. In Avisynth the definitions are inside comment blocks on the same line as the variable.

Quote:
Originally Posted by Myrsloik View Post
3. per frame or for the whole thing? not sure what you need here
Per-frame is not necessary, just a nice feature (the per-frame values are summed by the optimizer). But you can do the summing easily in the script and output a single final result. The most common operation is to calculate an SSIM similarity metric and/or the frame's runtime for each frame and sum those.
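The summing itself is trivial; here is a sketch assuming a hypothetical per-frame result file with "frame; ssim; time" lines, similar to the WriteFile output used on the Avisynth side:

```python
def sum_frame_results(lines, delimiter=";"):
    """Sum per-frame SSIM and runtime from lines like '12; 0.9876; 20.0'.
    Lines that don't have exactly three fields (headers, a final 'stop'
    marker, etc.) are skipped."""
    ssim_total = 0.0
    time_total = 0.0
    for line in lines:
        parts = [p.strip() for p in line.split(delimiter)]
        if len(parts) != 3:
            continue
        _frame, ssim, ms = parts
        ssim_total += float(ssim)
        time_total += float(ms)
    return ssim_total, time_total
```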

Quote:
Originally Posted by Myrsloik View Post
4. depends on the answer in 3, I plan to add the possibility to dump all frame properties as json in a future vspipe release
It doesn't need to happen within vspipe, the script could write the file. At least that's how it works with Avisynth.
Old 25th September 2018, 23:14   #47  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Quote:
Originally Posted by Groucho2004 View Post
Checking the registry is probably the best way. Here are the keys...
I considered this, but reading the Windows registry from Java probably isn't worth the hassle (it requires external libraries). I think I will settle for a simple OS check: on 32-bit Windows, Avisynth is 32-bit too. On 64-bit Windows I will ask for the Avisynth architecture on the first run (or whenever the information is no longer present in the .ini file). The default value can be overridden with the -arch parameter (thanks davidhorman for the suggestion).
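A minimal Python sketch of that decision logic (the real tool is Java; the environment-variable check is a standard way to detect Windows OS bitness, and the helper name and "ask-user" fallback are assumptions):

```python
def default_avisynth_arch(env, override=None, stored=None):
    """Pick a default Avisynth architecture, as described above:
    an explicit -arch override wins; on 32-bit Windows Avisynth must be
    32-bit; on 64-bit Windows use the value remembered from the first run
    (stored), else the user must be asked. PROCESSOR_ARCHITECTURE and
    PROCESSOR_ARCHITEW6432 are the usual Windows environment variables
    for detecting OS bitness (even from a 32-bit process)."""
    if override in ("x86", "x64"):
        return override
    is_64bit_os = (env.get("PROCESSOR_ARCHITECTURE", "x86") != "x86"
                   or "PROCESSOR_ARCHITEW6432" in env)
    if not is_64bit_os:
        return "x86"
    return stored if stored in ("x86", "x64") else "ask-user"
```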
Old 29th September 2018, 22:43   #48  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
New version 0.9.2-beta is released. I have changed the download links to point to the new version.

This version no longer uses VirtualDub to run the scripts; instead it uses Groucho2004's excellent avsr utility, which is included.

The error handling is improved and any script errors will be displayed on the console window.

I will have to change the tutorial to reflect these changes. Should I just edit the original messages or post a completely new version of the tutorial? If I change the original it will be easier for people who start reading from the beginning but it will be difficult to understand the discussion that follows.
Old 29th September 2018, 23:43   #49  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 6,949
Just update the original posts, and post an advisory that they have been updated.
It is your thread to do with as you please, within reason.
Old 30th September 2018, 22:34   #50  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
I have updated the Hands-on tutorial to match the features of the latest AvisynthOptimizer version.
Old 1st October 2018, 21:18   #51  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Version 0.9.3-beta released. The only change is that avsr was upgraded to the latest version, 0.1.7.
Old 2nd October 2018, 23:59   #52  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Optimizer arguments

It's time to take a closer look at how to adjust the optimization process. Let's run the optimizer using the same script and settings used in the last tutorial:

Code:
optimizer <path_to_your_script> -iters 100
The program displays:

Code:
Arguments
  iters = 100

Running optimization for script d:/optimizer/test/flower/denoise.avs
Using these settings:
ARGUMENT      DESCRIPTION             VALUE
-runs         runs                    5
-alg          algorithm               spea2
-pop          population              8
-iters        iterations              100
-mutamount    mutation amount         0.3 0.01
-mutcount     mutation count          60% 1
-crossprob    crossover probability   0.1
-crossdist    crossover distribution  20
-sensitivity  sensitivity estimation  true
-dynphases    dynamic phases          N/A
-dyniters     iterations per phase    N/A
You can stop the optimization after this text is displayed.

The "Arguments" section lists the arguments and their values as they were understood by the optimizer.

The next section is a handy cheat sheet on which arguments are available and their current values. The first column, ARGUMENT, gives the name you use to specify the setting. The DESCRIPTION column contains a short description of what the argument does, and VALUE is the current value of the argument. Most of these are the default values; we only specified the -iters argument. If you run the optimizer in another mode (like "evaluate"), the listed arguments are specific to that mode.

I spent quite a while figuring out good default values, so they should work reasonably well, but I have only tested them on a few different optimization tasks, so they might not be good for every case. It takes a lot of effort to test these settings because, to determine whether one value is better than another, one should run the optimization task many times with each value in order to gain statistical significance. I mostly used 20 runs per parameter value.

Let's take a look at the arguments one by one.

-runs specifies the number of optimization runs. A "run" is one complete optimization cycle, whose length is itself specified with the -iters argument. I talked about the need for multiple runs earlier, but I will repeat the points here: since the optimization process depends on random numbers, the outcome is not always the same and there can be large differences in the final result. If you only run the optimization once, you cannot really be sure whether the results are good or bad. Another useful aspect of multiple runs is that the variance of the best result can tell us how easy or hard the optimization task is: large variance means a difficult task, and if the task is difficult we can try increasing the iterations. I don't have a good answer on how many runs are enough. If you can only run N iterations, should you do, for example, three runs with N/3 iterations or eight runs with N/8 iterations? More iterations is better, but more runs is also better, up to a point.
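The "variance tells you how hard the task is" idea can be made concrete with a small sketch (hypothetical helper, using placeholder scores; this is not part of the optimizer itself):

```python
from statistics import mean, stdev

def summarize_runs(best_scores):
    """Summarize the best result of each optimization run; a large spread
    (stdev) suggests a difficult task that may need more iterations."""
    return {
        "best": max(best_scores),
        "mean": mean(best_scores),
        "stdev": stdev(best_scores) if len(best_scores) > 1 else 0.0,
    }
```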

-alg specifies the metaheuristic algorithm used in the optimization. Currently there are four options: "nsga-ii", "spea2", "mutation" and "exhaustive". NSGA-II and SPEA2 are very good and well known algorithms. I got slightly better results with spea2, so it's the default. If you're interested in how these algorithms work, check out the free ebook Essentials of Metaheuristics. The third option, "mutation", is a very simple algorithm I wrote which uses only mutation. It can find a reasonably good result faster than the other algorithms, but it will lose with large iteration counts. Finally there is the "exhaustive" option, which simply tries all possible (and valid) parameter combinations. It can be useful if you only have a few parameters and can limit the number of values per parameter so that the number of combinations doesn't get too high. I have tried some other metaheuristic algorithms, like CMA-ES, BFGS (Broyden–Fletcher–Goldfarb–Shanno) and SMPSO (a particle swarm algorithm), but I didn't get as good results with them. SMPSO is still waiting for a more thorough examination; it is promising. I should also note that the algorithms I'm using are not the basic variations: I have changed the way the mutations work and got better results that way.

-pop specifies the population size, a term often used with genetic algorithms. It's basically how many individual results are kept in memory during the optimization. The genetic algorithms (like NSGA-II and SPEA2) work by doing crossovers between two individuals and then mutating (randomizing) the results slightly. The crossover operation takes some values from one individual and some from the other. The new individuals are rated, and finally the best ones are selected as the new "generation". The default population size of 8 seems very small, and maybe you're wondering why it should be small at all; after all, it's not a problem to keep thousands or even millions of results in memory. In theory you should get better results with a larger population size, but it has a drawback: it makes progress slower. If the population size is much larger than the size of the pareto front, many less-than-optimal results are kept around and used in the crossovers. Combining two bad results might create a very good individual, but that is more likely to happen when combining two good results. If you are going to run with a large iteration count, then increasing the population size may help; a larger population may also be needed for a difficult optimization task. If you want a reasonably good result fast, use the "mutation" algorithm with a population size of 1.
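The generation step just described can be sketched generically (this uses plain uniform crossover plus a single mutation, not the optimizer's exact simulated-binary-crossover operator; all names are hypothetical):

```python
import random

def make_child(parent_a, parent_b, ranges, cross_prob=0.1, mut_amount=0.3,
               rng=random):
    """Produce one child: with probability cross_prob, take some values
    from each parent (uniform crossover), then mutate one parameter by a
    random amount proportional to its allowed range. parent_a, parent_b
    and ranges are dicts keyed by parameter name; ranges maps to (lo, hi)."""
    child = dict(parent_a)
    if rng.random() < cross_prob:            # crossover step
        for name in child:
            if rng.random() < 0.5:
                child[name] = parent_b[name]
    name = rng.choice(sorted(child))         # mutate one parameter
    lo, hi = ranges[name]
    span = (hi - lo + 1) * mut_amount        # largest allowed change
    child[name] = min(hi, max(lo, child[name] + rng.uniform(-span, span)))
    return child
```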

-iters specifies the number of iterations. One iteration means one execution of the script we're trying to optimize. You can give the iteration count as a number (for example 1000), but there are also indirect ways: you can give a time limit in days, hours and minutes. For example, 5h30m would run for 5 hours and 30 minutes, and 1d12h for one day and 12 hours. You can use spaces if you put quotes around the value, for example "2h 45m". A time limit can be useful if you have a specific deadline for the results, or if you want to see what the optimizer can find overnight while you sleep. Just remember that the time limit applies to a single run, so if you start an optimization with 3 runs and 1h iterations it will take a total of 3 hours. During the optimization the maximum iteration count is still displayed on each result line, but it is only an estimate.

Code:
  20 / 345 : 4.772059 20ms sigma=349 blockTemporal=1 blockSize=50 overlap=15
  21 / 347 : 4.815294 20ms sigma=477 blockTemporal=2 blockSize=64 overlap=15
  22 / 349 : 4.875304 20ms sigma=591 blockTemporal=2 blockSize=30 overlap=6
  23 / 350 : 4.880693 110ms sigma=800 blockTemporal=5 blockSize=61 overlap=7
  24 / 343 : 4.909643 150ms sigma=800 blockTemporal=5 blockSize=61 overlap=21
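The time-limit forms described above ("5h30m", "1d12h", "2h 45m") could be parsed with a sketch like this (a hypothetical helper, not the optimizer's actual code):

```python
import re

def parse_iters(value):
    """Parse an -iters value: a plain count ('1000') or a time limit such
    as '5h30m', '1d12h' or '2h 45m'. Returns ('iterations', n) or
    ('time', seconds)."""
    value = value.strip().strip('"')
    if value.isdigit():
        return ("iterations", int(value))
    units = {"d": 86400, "h": 3600, "m": 60}
    seconds = 0
    for amount, unit in re.findall(r"(\d+)\s*([dhm])", value):
        seconds += int(amount) * units[unit]
    if seconds == 0:
        raise ValueError(f"cannot parse iteration count: {value!r}")
    return ("time", seconds)
```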
There are also two keywords that trigger a special dynamic iteration mode: "dyn" and "dynbk". "dyn" stands for "dynamic" and means the iteration count depends on how the optimization is progressing. Two additional arguments define how the dynamic iteration behaves: -dynphases and -dyniters. -dynphases defines how many distinct "phases" the algorithm uses (the default is 10, but it's displayed as "N/A" in the example above since it does not apply to the chosen iteration method). The phase goes from 0.0 to 1.0 during the iteration and affects how large and how common mutations are. In the beginning (phase 0.0) the mutations are large and applied to many parameters; in the end (phase 1.0) they are small and applied to few parameters (in general; you can also change that). If dynphases has a value of 10, the phase range is divided into 10 steps and the phases used are 0.0, 0.1, 0.2, ..., 0.9 and 1.0. The dynamic iteration stays at the current phase step as long as it's still making progress. "Not making progress" is triggered when there have been no new pareto front results in the last -dyniters iterations; in that case the algorithm moves to the next phase step and resets the counter. The default value of -dyniters is also 10. "dynbk" is much like "dyn" but it can also move backwards (hence the name, "dynamic backtracking") to the previous phase step; that happens whenever a new pareto front result is found. I have not done a comprehensive study on which of these gives better results. The dynamic iteration modes are useful when you are not limited by a specific deadline and just want to find the best result. When you're running a dynamic iteration the phase step changes are displayed in the console, for example:

Code:
6 iterations remaining is this generation
No improvement in 10 iterations - moving to phase 1/10
Also, instead of the maximum iteration number, the current phase is displayed (there's no reliable way to estimate the maximum iteration count):

Code:
  43 / 0,10 : 4.470519 20ms sigma=474 blockTemporal=-1 blockSize=22 overlap=0
  44 / 0,10 : 4.90395 60ms sigma=665 blockTemporal=3 blockSize=41 overlap=10
  45 / 0,10 : 4.757739 10ms sigma=490 blockTemporal=3 blockSize=32 overlap=0
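The phase-stepping rule just described can be sketched as follows (hypothetical names, simplified; a real implementation would also have a stopping condition for the backtracking case):

```python
def run_dynamic(evaluate, steps=10, patience=10, backtrack=False,
                max_iters=100000):
    """Dynamic iteration sketch ('dyn'/'dynbk'): stay on a phase step
    while new pareto front results keep appearing; after `patience`
    iterations without improvement, move to the next step. With
    backtrack=True ('dynbk'), a new front result also moves one step
    back. evaluate(phase) returns True when the iteration produced a new
    pareto front result. Returns the number of iterations performed."""
    step, stale, iters = 0, 0, 0
    while step <= steps and iters < max_iters:
        improved = evaluate(step / steps)
        iters += 1
        if improved:
            stale = 0
            if backtrack and step > 0:
                step -= 1
        else:
            stale += 1
            if stale >= patience:
                step += 1
                stale = 0
    return iters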
-mutamount specifies the mutation amount. The amount is proportional to the allowed value range given in the script for each optimized parameter. For example, a parameter with a value range 0..100 has 101 valid values. This number is multiplied by the mutation amount to get the largest change a mutation is allowed to make, so with mutation amount 0.2 the mutations would vary from -20.2 to +20.2. You can give two values for the mutation amount: the first is used in the beginning (phase 0.0) and the last in the end (phase 1.0), with linear interpolation in between. If you give only one value, it is used in all phases.

-mutcount specifies the mutation count. Whenever mutation is applied, the first step is deciding how many parameters to mutate, and this argument defines just that. As with -mutamount, you can give different values for the beginning and end phases. What's more, the count can be given as a percentage of the number of optimized parameters in the script: if you have 20 parameters to optimize and specify -mutcount 50%, the algorithm will mutate 10 parameters. You can mix both notations; for example, the default -mutcount is "60% 1", which means mutating 60% of the parameters in the beginning and one in the end.
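The interpolation and percentage rules for -mutamount and -mutcount can be illustrated with a sketch (hypothetical helpers; the real optimizer's rounding details may differ):

```python
def interpolate(values, phase):
    """Linearly interpolate a begin/end pair over phase 0.0..1.0.
    A single value is used for all phases."""
    if len(values) == 1:
        return values[0]
    begin, end = values
    return begin + (end - begin) * phase

def mutation_count(spec, n_params, phase):
    """Resolve a -mutcount spec such as ('60%', 1): percentages are
    relative to the number of optimized parameters in the script."""
    def resolve(v):
        if isinstance(v, str) and v.endswith("%"):
            return float(v[:-1]) / 100.0 * n_params
        return float(v)
    return max(1, round(interpolate([resolve(v) for v in spec], phase)))

def mutation_magnitude(lo, hi, amounts, phase):
    """Largest allowed change: (number of valid values) * mutation amount.
    A range 0..100 has 101 valid values, so amount 0.2 allows +/-20.2."""
    return (hi - lo + 1) * interpolate(amounts, phase)
```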

-crossprob specifies the probability of the crossover operation. If the probability is 1.0 the operation is applied to every new individual, if it's 0.0 it is never applied. In my tests I have found that this crossover argument is not that critical for a successful optimization.

-crossdist specifies the "distribution index" of the "simulated binary crossover" which is the crossover method used in NSGA-II and SPEA2. To be honest I don't fully understand what it does. I haven't investigated what value would be optimal for this argument.

-sensitivity specifies whether the sensitivity estimation algorithm is used. This algorithm tries to determine how "sensitive" each parameter is, that is, how much changing the parameter's value affects the result. The sensitivity is then used to scale the applied mutation amounts. The results are usually better when sensitivity estimation is on; if you want to switch it off, set the value to "false".

Now you know how to change the optimization process. The default values are good most of the time but feel free to try different things. The most important arguments are probably -iters (and -dynphases and -dyniters if dynamic iteration is used), -runs, and -pop, followed by -mutamount and -mutcount. If you find good settings for a specific script please let me know.

In the next episode we will focus on the visualization of the results.

Last edited by zorr; 22nd November 2018 at 23:50. Reason: Added description of "exhaustive" algorithm.
Old 3rd October 2018, 21:50   #53  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Version 0.9.4-beta released. Some excessive logging removed in timed iteration mode.
Old 9th October 2018, 10:13   #54  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 88
Ok, so I gave it a try; here are my impressions.
First off, AvsTimer failed to load on Win10/Avisynth+ MT: when loading via LoadPlugin it errored with "platform returned code 126: module not found", the way it reacts when I try to load a non-existent DLL. I tried replacing avisynth.dll with the one from the non-plus version, but that only changed the wording of the error.
Fortunately my main OS is Win7 with Avisynth 2.6 MT installed, where it worked; however, AvsTimer always returned a time of 9999999 ms. I suppose that made the optimization task much less efficient, because where I expected it to take tens of minutes for 10 720p frames in 3 runs, it took 15 minutes for the first run, 2.5 hours for the second and 1.5 hours for the third. I ran it with the "mutation" algorithm and 100 iters. My system is a Core i5 2550K OC'd to 4.3 GHz with 16 GB RAM.

Here's a script I used, modified from what you offered in the other thread:

Quote:
TEST_FRAMES = 10
MIDDLE_FRAME = 600

# original framerate
FPS_NUM = 30
FPS_DEN = 1

# source clip
Asrc=FFmpegSource2("f:\Hibikin - Watashtachi wa Zutto... Deshou (AVS test video 60fps 720p 10bit CRF0).mkv" ).assumefps(60)
Asrc=Asrc.trim(0,60*30-9)+Asrc.trim(60*40+60*60+9,0) # this is usually the part I worked with when manually adjusting the parameters before
asrc.selecteven
AssumeFPS(FPS_NUM, FPS_DEN)

#return last

# needed for some parameter combinations
ConvertToYV24()

orig = last


super_pel = 2 # optimize super_pel = _n_ | 2,4 | super_pel
super_sharp = 2 # optimize super_sharp = _n_ | 0..2 | super_sharp
super_rfilter = 4 # optimize super_rfilter = _n_ | 0..4 | super_rfilter
super_render = MSuper(pel=super_pel, sharp=super_sharp, rfilter=super_rfilter, orig )

blockSize = 32 # optimize blockSize = _n_ | 4,6,8,12,16,24,32,48,64 ; min:divide 0 > 8 2 ? ; filter:overlap 2 * x <= | blockSize
searchAlgo = 3 # optimize searchAlgo = _n_ | 0..7 D | searchAlgo
searchRange = 4 # optimize searchRange = _n_ | 1..30 | searchRange
searchRangeFinest = 4 # optimize searchRangeFinest = _n_ | 1..60 | searchRangeFinest
lambda = 16000 # optimize lambda = _n_ | 0..20000 | lambda
lsad=400 # optimize lsad=_n_ | 8..20000 | lsad
pnew=0 # optimize pnew=_n_ | 0..256 | pnew
plevel=0 # optimize plevel=_n_ | 0..2 | plevel
overlap=16 # optimize overlap=_n_ | 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32 ; max:blockSize 2 / ; filter:x divide 0 > 4 2 ? % 0 == | overlap
divide=2 # optimize divide=_n_ | 0..2 ; max:blockSize 8 >= 2 0 ? overlap 4 % 0 == 2 0 ? min | divide
globalMotion = true # optimize globalMotion = _n_ | false,true | globalMotion
badSAD = 2000 # optimize badSAD = _n_ | 4..10000 | badSAD
badRange = 24 # optimize badRange = _n_ | 4..50 | badRange
meander = true # optimize meander = _n_ | false,true | meander
temporal = false # optimize temporal = _n_ | false,true | temporal
trymany = false # optimize trymany = _n_ | false,true | trymany
dct = 0 # optimize dct = _n_ | 0,2,3,4,5,6,7,8,9,10 D | dct

delta = 1
useChroma = true
bv = MAnalyse(super_render, isb = true, blksize=blockSize, search=searchAlgo, searchparam=searchRange, pelsearch=searchRangeFinest,
\ chroma=useChroma, delta=delta, lambda=lambda, lsad=lsad, pnew=pnew, plevel=plevel, global=globalMotion, overlap=overlap ,
\ divide=divide, badSAD=badSAD, badrange=badRange, meander=meander, temporal=temporal, trymany=trymany, dct=dct)
fv = MAnalyse(super_render, isb = false, blksize=blockSize, search=searchAlgo, searchparam=searchRange, pelsearch=searchRangeFinest,
\ chroma=useChroma, delta=delta, lambda=lambda, lsad=lsad, pnew=pnew, plevel=plevel, global=globalMotion, overlap=overlap ,
\ divide=divide, badSAD=badSAD, badrange=badRange, meander=meander, temporal=temporal, trymany=trymany, dct=dct)



threshold = 10000
maskScale = 100 # optimize maskScale = _n_ | 1..300 | maskScale
mask_fps = 2 # optimize mask_fps = _n_ | 0..2 | mask_fps
inter = orig.MFlowFPS(super_render, bv, fv, num=FPS_NUM*2, den=FPS_DEN, mask=mask_fps, ml=maskScale, thSCD1=threshold )

# return this to look at the clip with doubled framerate
#return inter

fps_only = inter.SelectOdd()

# second pass
super_render2 = MSuper(pel=super_pel, sharp=super_sharp, rfilter=super_rfilter, fps_only )
bv2 = MAnalyse(super_render2, isb = true, blksize=blockSize, search=searchAlgo, searchparam=searchRange, pelsearch=searchRangeFinest,
\ chroma=useChroma, delta=delta, lambda=lambda, lsad=lsad, pnew=pnew, plevel=plevel, global=globalMotion, overlap=overlap,
\ divide=divide, badSAD=badSAD, badrange=badRange, meander=meander, temporal=temporal, trymany=trymany)
fv2 = MAnalyse(super_render2, isb = false, blksize=blockSize, search=searchAlgo, searchparam=searchRange, pelsearch=searchRangeFinest,
\ chroma=useChroma, delta=delta, lambda=lambda, lsad=lsad, pnew=pnew, plevel=plevel, global=globalMotion, overlap=overlap,
\ divide=divide, badSAD=badSAD, badrange=badRange, meander=meander, temporal=temporal, trymany=trymany)
inter2 = fps_only.MFlowFPS(super_render2, bv2, fv2, num=FPS_NUM*2, den=FPS_DEN, mask=mask_fps, ml=maskScale, thSCD1=threshold )
fps_only2 = inter2.SelectOdd()


delimiter = "; "

inter_yv12 = fps_only2.ConvertToYV12()
orig_yv12 = orig.ConvertToYV12()

# for comparison original must be forwarded one frame
orig_yv12 = trim(orig_yv12,1,0)

inter_yv12 = inter_yv12.Trim(MIDDLE_FRAME - TEST_FRAMES/2 + (TEST_FRAMES%2==0?1:0), MIDDLE_FRAME + TEST_FRAMES/2)
orig_yv12 = orig_yv12.Trim(MIDDLE_FRAME - TEST_FRAMES/2 + (TEST_FRAMES%2==0?1:0), MIDDLE_FRAME + TEST_FRAMES/2)
last = inter_yv12

global total = 0.0
global ssim_total = 0.0
global avstimer = 0.0
frame_count = FrameCount()
FrameEvaluate(last, """
global ssim = SSIM_FRAME(orig_yv12, inter_yv12)
global ssim_total = ssim_total + (ssim == 1.0 ? 0.0 : ssim)
""", args="orig_yv12, inter_yv12, delta, frame_count")

# NOTE: AvsTimer call should be before the WriteFile call
AvsTimer(frames=1, type=0, total=false, name="Optimizer")

# per frame logging (ssim, time)
resultFile = "f:\avsoptim\results\perFrameResults.txt" # output out1="ssim: MAX(float)" out2="time: MIN(time) ms" file="f:\avsoptim\results\perFrameResults.txt"
WriteFile(resultFile, "current_frame", "delimiter", "ssim", "delimiter", "avstimer")
WriteFileIf(resultFile, "current_frame == frame_count-1", """ "stop " """, "ssim_total", append=true)

return last
A few notes:
1) I dropped the RemoveGrain call since my sources are CGI and clean enough already.
2) The FrameEvaluate you use doesn't seem to be the native one, as it complained that it doesn't have the argument "args". It worked when I installed GScript, you might want to add that to the list of dependencies.
3) Shouldn't the optimizer params description for DCT include the D flag, since, much like the searchAlgo param, it is "non-linear" and we can't make assumptions about the value and effect? I put the flag there.

The results were confusing.
Quote:
Run 1 best: 9.534311 9999999 super_pel=4 super_sharp=1 super_rfilter=2 blockSize=8 searchAlgo=4 searchRange=4 searchRangeFinest=8 lambda=0 lsad=461 pnew=255 plevel=1 overlap=0 divide=0 globalMotion=false badSAD=1424 badRange=50 meander=true temporal=true trymany=false dct=8 maskScale=123 mask_fps=2
Run 2 best: 9.535164 9999999 super_pel=4 super_sharp=0 super_rfilter=0 blockSize=12 searchAlgo=4 searchRange=28 searchRangeFinest=60 lambda=1855 lsad=7420 pnew=136 plevel=0 overlap=6 divide=0 globalMotion=true badSAD=6286 badRange=14 meander=false temporal=true trymany=true dct=6 maskScale=177 mask_fps=2
Run 3 best: 9.553077 9999999 super_pel=4 super_sharp=0 super_rfilter=3 blockSize=12 searchAlgo=4 searchRange=1 searchRangeFinest=34 lambda=18163 lsad=8 pnew=9 plevel=2 overlap=6 divide=0 globalMotion=true badSAD=9869 badRange=10 meander=true temporal=false trymany=true dct=5 maskScale=23 mask_fps=2
I can see some tendencies when analyzing the logs manually, sorting by SSIM, so there's that at least. Visually, however, I can't say it looks better than what the hand-picked parameters provide, but that's to be expected from a first attempt with a low iteration count.
I think the large search space (22 parameters) might have affected it too. I intend to run this again overnight with whatever iteration count it manages in that time, but first I need to get AvsTimer fixed.
Maybe I should leave the truemotion tuning for later and first see what a more common set of parameters this tool can generate, to compare with the manually tuned ones. Then, once I have the good parameters fixed, I can have it experiment with truemotion.

Another question I wanted to ask: is there a way to estimate the amount of time/iterations/population required for a certain number of tunable script parameters? I don't really understand the math involved, but I have a feeling that either the population size or the iteration count should scale up with the parameter count somehow. Seeing how the first run took 15 minutes and the second 2.5 hours, not every run will even attempt to cover all parameters involved, since obviously the first one omitted the slowest stuff like DCT 4. But then you said the mutation algorithm can work with a population of 1.

I assume, if we were using brute-force instead of metaheuristics, then for every new parameter introduced (or a value added to a list of possible values for existing parameters) the total combination amount would double. Is there a way to roughly estimate the effect of parameter addition here?
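For the brute-force case the estimate is straightforward: the total number of combinations is the product of each parameter's value count, so adding a parameter multiplies the total by its number of values (doubling only when the new parameter is a boolean). A sketch:

```python
from math import prod

def search_space_size(value_counts):
    """Number of combinations an exhaustive search would try: the product
    of each parameter's value count. Adding a parameter multiplies the
    total by its number of values."""
    return prod(value_counts)
```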

Last edited by Seedmanc; 9th October 2018 at 13:43. Reason: grammar
Old 9th October 2018, 12:39   #55  |  Link
vcmohan
Registered User
 
Join Date: Jul 2003
Location: India
Posts: 733
I tried to understand this thread but could not. However broadly I find that various parameters for a VHS conversion are attempted so as to optimize the FPS and may be other parameters results of which are not quantizable. I also have seen mention of use of an example frame or frames to arrive at the desired quality.

In oil exploration, a large number of parameters that can vary independently over some ranges are used to estimate the oil reserves present. Often Monte Carlo simulation is used.
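For illustration, a bare-bones Monte Carlo estimate in the spirit of the reserves example (the model and the ranges here are entirely made up):

```python
import random

def reserve(area, thickness, porosity):
    # Hypothetical toy model: reserve volume = area * thickness * porosity
    return area * thickness * porosity

random.seed(42)
samples = [
    reserve(random.uniform(10, 20),   # each input varies independently
            random.uniform(1, 3),     # over its own range
            random.uniform(0.1, 0.3))
    for _ in range(100_000)
]
estimate = sum(samples) / len(samples)
print(round(estimate, 2))   # close to 15 * 2 * 0.2 = 6.0
```

Instead of enumerating every parameter combination, random sampling converges on the expected outcome, which is why the method scales to many independent parameters.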

In recent times AI is being used extensively to solve various problems. ANNs can use thousands of parameters to arrive at a solution that mimics an example as closely as possible.

Maybe this could be tried, if applicable. If my suggestion is completely off track, please ignore the post.
__________________
mohan
my plugins are now hosted here
vcmohan is offline   Reply With Quote
Old 9th October 2018, 14:42   #56  |  Link
Groucho2004
 
Groucho2004's Avatar
 
Join Date: Mar 2006
Location: A wretched hive of scum and villainy
Posts: 4,357
Quote:
Originally Posted by Seedmanc View Post
First off, the Avstimer failed to load on Win10/Avisynth+MT, when loading via "loadplugin" it errored with "platform returned code 126: module not found"
The avstimer 32 bit plugin that zorr provides is linked against MSVCR71.DLL which is a runtime DLL that is necessary for dynamically linked VC 7.1 binaries. I can only assume that zorr took the original project file and did not modify it for newer versions of VC.
__________________
Groucho's Avisynth Stuff
Groucho2004 is offline   Reply With Quote
Old 9th October 2018, 16:38   #57  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 88
Quote:
Originally Posted by Groucho2004 View Post
The avstimer 32 bit plugin that zorr provides is linked against MSVCR71.DLL which is a runtime DLL that is necessary for dynamically linked VC 7.1 binaries. I can only assume that zorr took the original project file and did not modify it for newer versions of VC.
Thanks, that fixed it. The problem with the incorrect time being reported still remains though.
Seedmanc is offline   Reply With Quote
Old 9th October 2018, 22:25   #58  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 213
Quote:
Originally Posted by Groucho2004 View Post
The avstimer 32 bit plugin that zorr provides is linked against MSVCR71.DLL which is a runtime DLL that is necessary for dynamically linked VC 7.1 binaries. I can only assume that zorr took the original project file and did not modify it for newer versions of VC.
That's correct. I don't know much about building Visual Studio projects and the myriad DLL versions. I only have 32-bit Avisynth installed, so my testing is limited to that. Should AVSTimer be modified to use some other DLL version?
zorr is offline   Reply With Quote
Old 9th October 2018, 22:40   #59  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 6,949
For anybody who needs them, here are the VS C++ 7.0 and 7.1 runtime DLLs:

http://www.mediafire.com/file/1220u8...imes.rar/file#

EDIT: Put them in System32, or in SysWOW64 on 64-bit Windows (they are 32-bit DLLs).

EDIT: MSVCR70.DLL (VS 2002) and MSVCR71.DLL (VS 2003).
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 9th October 2018 at 22:48.
StainlessS is offline   Reply With Quote
Old 9th October 2018, 23:43   #60  |  Link
Groucho2004
 
Groucho2004's Avatar
 
Join Date: Mar 2006
Location: A wretched hive of scum and villainy
Posts: 4,357
There's no reason whatsoever to link a current binary against these ancient 7.x DLLs; they cause nothing but grief.

zorr, post your current project and I'll have a look.

I posted makefiles with the modified code, containing the correct compiler and linker settings, which you can use to build the DLLs from the command line.
__________________
Groucho's Avisynth Stuff
Groucho2004 is offline   Reply With Quote