Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
23rd September 2018, 01:33 | #41 | Link | |
Join Date: Mar 2006
Location: Barcelona
Posts: 5,034
Quote:
If the registry entry is just an orphan and avisynth.dll is not present, the application loading Avisynth will (well, should) throw an appropriate error.
__________________
Groucho's Avisynth Stuff Last edited by Groucho2004; 23rd September 2018 at 09:25. |
23rd September 2018, 02:40 | #42 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
OK, I was just going by the fact that many other apps leave their keys in situ on uninstall.
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? |
23rd September 2018, 09:10 | #43 | Link | ||
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,731
Quote:
Quote:
__________________
And if the band you're in starts playing different tunes I'll see you on the dark side of the Moon... |
23rd September 2018, 23:29 | #44 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
I wanted to get rid of the polling, so instead I tried waiting for the avsr process to finish before starting to read the output file. That works, but it is about 10% slower (when running a validation with a fast script) than the polling method.
Another thing I'm working on is improved error handling. The latest avsr outputs the script error messages to the standard error stream, which I can read from the optimizer. However, that same 10% penalty appears when I read avsr's output streams fully before I start checking the output file. That makes sense, since reading the streams until nothing more is coming is pretty much the same thing as waiting for the process to finish. So what I'm going to try next is reading the streams while I'm polling the output file. That way I can hopefully enable the improved error handling and keep the faster execution speed.

Apologies for the slow progress. I recently bought a house and now I have to plan the move and renovations...

Last edited by zorr; 23rd September 2018 at 23:31. Reason: wrong word used
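The "read the streams while polling the output file" idea above can be sketched as follows. This is a minimal Python illustration of the general technique (the optimizer itself is Java, and the function and file names here are hypothetical): the child's stderr is drained on a background thread so the pipe never fills up, while the main thread polls for the result file and can stop as soon as it appears, without waiting for process exit.

```python
import os
import subprocess
import threading
import time

def run_and_poll(cmd, outfile, poll_interval=0.05, timeout=60):
    """Start a process, drain its stderr on a background thread,
    and poll for the output file instead of blocking on exit."""
    errors = []
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)

    def drain():
        # Reading continuously prevents the child from blocking
        # on a full stderr pipe buffer.
        for line in proc.stderr:
            errors.append(line.rstrip())

    t = threading.Thread(target=drain, daemon=True)
    t.start()

    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(outfile) and os.path.getsize(outfile) > 0:
            break  # result is ready; no need to wait for process exit
        if proc.poll() is not None:
            break  # process ended (e.g. script error) before writing
        time.sleep(poll_interval)

    proc.wait()
    t.join()
    return errors

# Usage (hypothetical): errors = run_and_poll(["avsr", "test.avs"], "result.txt")
```

The key design point is that draining and polling run concurrently, so collecting the error stream no longer delays the moment the output file is first seen.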
24th September 2018, 13:11 | #45 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,555
Quote:
2. vspipe has --arg key=value which can set variables inside the script
3. per frame or for the whole thing? not sure what you need here
4. depends on the answer in 3, I plan to add the possibility to dump all frame properties as json in a future vspipe release
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
25th September 2018, 21:31 | #46 | Link | |||
Registered User
Join Date: Mar 2018
Posts: 447
Quote:
Quote:
Quote:
It doesn't need to happen within vspipe; the script could write the file. At least that's how it works with Avisynth.
25th September 2018, 23:14 | #47 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
I considered this, but reading the Windows registry from Java probably isn't worth the hassle (it requires external libraries). I think I will settle for a simple OS check: on a 32-bit Windows, Avisynth is 32-bit too. On a 64-bit Windows I will ask for the Avisynth architecture on the first run (or when the information is no longer present in the .ini file). And the default value can be overridden with the -arch parameter (thanks davidhorman for the suggestion).
29th September 2018, 22:43 | #48 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
|
New version 0.9.2-beta is released. I have changed the download links to point to the new version.
This version no longer uses VirtualDub to run the scripts; instead it uses Groucho2004's excellent avsr utility, which is included. The error handling is improved and any script errors will be displayed in the console window.

I will have to change the tutorial to reflect these changes. Should I just edit the original messages or post a completely new version of the tutorial? If I change the original it will be easier for people who start reading from the beginning, but it will be harder to understand the discussion that follows.
29th September 2018, 23:43 | #49 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
Just update the original posts, and post an advisory that they have been updated.
It is your thread to do with as you please, within reason.
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? |
30th September 2018, 22:34 | #50 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
|
I have updated the Hands-on tutorial to match the features of the latest AvisynthOptimizer version.
1st October 2018, 21:18 | #51 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
|
Version 0.9.3-beta released. The only change is that avsr was upgraded to the latest version, 0.1.7.
2nd October 2018, 23:59 | #52 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
|
Optimizer arguments
It's time to take a closer look at how to adjust the optimization process. Let's run the optimizer using the same script and settings used in the last tutorial:
Code:
optimizer <path_to_your_script> -iters 100
Code:
Arguments
 iters = 100

Running optimization for script d:/optimizer/test/flower/denoise.avs

Using these settings:

ARGUMENT      DESCRIPTION              VALUE
-runs         runs                     5
-alg          algorithm                spea2
-pop          population               8
-iters        iterations               100
-mutamount    mutation amount          0.3 0.01
-mutcount     mutation count           60% 1
-crossprob    crossover probability    0.1
-crossdist    crossover distribution   20
-sensitivity  sensitivity estimation   true
-dynphases    dynamic phases           N/A
-dyniters     iterations per phase     N/A

The "Arguments" section lists the arguments and their values as they were understood by the optimizer. The next section is a handy cheat sheet of the available arguments and their current values. The first column, ARGUMENT, gives the name you use to specify the setting. The DESCRIPTION column contains a short description of what the argument does. Finally, VALUE is the current value of the argument. Most of these are the defaults; we only specified the -iters argument. If you run the optimizer in another mode (like "evaluate") the listed arguments are specific to that mode.

I spent quite a while figuring out good default values, so they should work reasonably well, but I have only tested them on a few different optimization tasks so they might not be good for every case. Testing these settings takes a lot of effort because, to determine whether one value is better than another, one should run the optimization task many times with each value in order to gain enough statistical significance. I mostly used 20 runs per parameter value.

Let's take a look at the arguments one by one.

-runs specifies the number of optimization runs. A "run" is one complete optimization cycle, whose length is in turn specified with the -iters argument. I talked about the need for multiple runs earlier but I will repeat the points here: since the optimization process depends on random numbers, the outcome is not always the same and there can be large differences in the final result. If you only run the optimization once you cannot really be sure whether the results are good or bad. Another useful aspect of multiple runs is that the variance of the best result tells us how easy or hard the optimization task is: large variance means a difficult task, and if the task is difficult we can try increasing the iterations. I don't have a good answer for how many runs are enough. If you can only run N iterations, should you do, for example, three runs with N/3 iterations or eight runs with N/8 iterations? More iterations are better, but more runs are also better, up to a point.

-alg specifies the metaheuristic algorithm used in the optimization. Currently there are four options: "nsga-ii", "spea2", "mutation" and "exhaustive". NSGA-II and SPEA2 are very good and well-known algorithms; I got slightly better results with SPEA2, so it's the default. If you're interested in how these algorithms work you should check out the free ebook Essentials of Metaheuristics. The third option, "mutation", is a very simple algorithm I wrote which only uses mutation. It can find a reasonably good result faster than the other algorithms, but it will lose with large iteration counts. Finally we have the "exhaustive" option, which simply tries all the possible (and valid) parameter combinations. It can be useful if you only have a few parameters and can limit the number of values per parameter so that the number of combinations doesn't get too high. I have tried some other metaheuristic algorithms like CMA-ES, BFGS (Broyden–Fletcher–Goldfarb–Shanno) and SMPSO (a particle swarm algorithm) but I didn't get results as good with them. SMPSO is still waiting for a more thorough examination; it is promising. I should also note that the algorithms I'm using are not the basic variations: I have changed the way the mutations work and got better results that way.

-pop specifies the population size, a term often used with genetic algorithms.
It's basically how many individual results are kept in memory during the optimization. The genetic algorithms (like NSGA-II and SPEA2) work by doing crossovers between two individuals and then mutating (randomizing) the results slightly. The crossover operation takes some values from one individual and some from the other. The new individuals are rated, and finally the best ones are selected as the new "generation".

The default population size of 8 seems very small, and maybe you're wondering why it should be small at all; after all, it's not a problem to keep thousands or even millions of results in memory. Yes, in theory you should get better results with a larger population size, but it has a drawback: it makes progress slower. If the population size is much larger than the size of the pareto front, many less-than-optimal results are kept around and used in the crossovers. Combining two bad results might create a very good individual, but that is more likely to happen when combining two good results. Still, if you are going to run with a large iteration count, then perhaps increasing the population size will also help; a larger population may also be needed for a difficult optimization task. If you want a reasonably good result fast, use the "mutation" algorithm with a population size of 1.

-iters specifies the number of iterations. One iteration means one execution of the script we're trying to optimize. You can give the iteration count as a number (for example 1000) but there are also indirect ways: you can give a time limit in days, hours and minutes. For example 5h30m would run 5 hours and 30 minutes, and 1d12h would run one day and 12 hours. You can use spaces if you put quotes around the value, for example "2h 45m". A time limit can be useful if you have a specific deadline for the results, or if you want to see what the optimizer can find during the night while you sleep.
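A parser for -iters values of the kind described above might look like the following Python sketch. The accepted formats are taken from the examples in this post ("1000", "5h30m", "1d12h", "2h 45m"); the exact parsing rules of the real optimizer are an assumption, as is the function name.

```python
import re

def parse_iters(value):
    """Parse an -iters value: either a plain iteration count
    or a duration such as '1d12h' or '2h 45m'.
    Returns ('count', n) or ('seconds', n)."""
    value = value.strip()
    if value.isdigit():
        return ("count", int(value))
    units = {"d": 86400, "h": 3600, "m": 60}
    total = 0
    matched = False
    # Accept optional spaces between components, e.g. "2h 45m".
    for num, unit in re.findall(r"(\d+)\s*([dhm])", value.lower()):
        total += int(num) * units[unit]
        matched = True
    if not matched:
        raise ValueError("unrecognized -iters value: %r" % value)
    return ("seconds", total)
```

For example, parse_iters("1d12h") yields a 36-hour budget in seconds, while parse_iters("1000") is treated as a plain iteration count.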
Just remember that the time limit applies to a single run, so if you start an optimization with 3 runs and 1h iterations it will take a total of 3 hours. During the optimization the maximum iteration count is still displayed on each result line, but it is only an estimate. Code:
20 / 345 : 4.772059 20ms sigma=349 blockTemporal=1 blockSize=50 overlap=15
21 / 347 : 4.815294 20ms sigma=477 blockTemporal=2 blockSize=64 overlap=15
22 / 349 : 4.875304 20ms sigma=591 blockTemporal=2 blockSize=30 overlap=6
23 / 350 : 4.880693 110ms sigma=800 blockTemporal=5 blockSize=61 overlap=7
24 / 343 : 4.909643 150ms sigma=800 blockTemporal=5 blockSize=61 overlap=21
Code:
6 iterations remaining is this generation
No improvement in 10 iterations - moving to phase 1/10
Code:
43 / 0,10 : 4.470519 20ms sigma=474 blockTemporal=-1 blockSize=22 overlap=0
44 / 0,10 : 4.90395 60ms sigma=665 blockTemporal=3 blockSize=41 overlap=10
45 / 0,10 : 4.757739 10ms sigma=490 blockTemporal=3 blockSize=32 overlap=0

-mutcount specifies the mutation count. Whenever mutation is applied, the first step is deciding how many parameters will be mutated, and this argument defines just that. As with -mutamount, you can give different values for the beginning and end phases. What's more, the count can be given as a percentage of the number of optimized parameters in the script: if you have 20 parameters to optimize and specify -mutcount 50%, the algorithm will mutate 10 parameters. You can mix both notations; for example the default -mutcount is "60% 1", which means mutating 60% of the parameters in the beginning and one parameter at the end.

-crossprob specifies the probability of the crossover operation. If the probability is 1.0 the operation is applied to every new individual; if it's 0.0 it is never applied. In my tests this argument has not been critical for a successful optimization.

-crossdist specifies the "distribution index" of the "simulated binary crossover", which is the crossover method used in NSGA-II and SPEA2. To be honest I don't fully understand what it does, and I haven't investigated what value would be optimal for this argument.

-sensitivity specifies whether the sensitivity estimation algorithm is used. This algorithm tries to determine how "sensitive" each parameter is, that is, how much changing the parameter's value affects the result. The sensitivity is then used to scale the applied mutation amounts. The results are usually better when sensitivity estimation is on; if you want to switch it off, set the value to "false".

Now you know how to change the optimization process. The default values are good most of the time, but feel free to try different things.
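For the curious, the distribution index mentioned under -crossdist can be made concrete with the textbook formulation of simulated binary crossover for one real-valued gene (this is the standard SBX from the NSGA-II literature, not necessarily the optimizer's exact implementation):

```python
import random

def sbx(p1, p2, eta=20.0):
    """Simulated binary crossover for a single real-valued gene.
    A larger eta (distribution index) concentrates the children
    near the parents; a smaller eta spreads them further apart."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    # The two children are symmetric around the parents' mean.
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

Note that c1 + c2 always equals p1 + p2, so crossover never shifts the pair's average; eta only controls how far the children land from the parents, which is why the default of 20 produces fairly conservative offspring.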
The most important arguments are probably -iters (and -dynphases and -dyniters if dynamic iteration is used), -runs, and -pop, followed by -mutamount and -mutcount. If you find good settings for a specific script please let me know. In the next episode we will focus on the visualization of the results. Last edited by zorr; 23rd November 2018 at 00:50. Reason: Added description of "exhaustive" algorithm. |
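The "mutation" algorithm described in the post above (population size 1, mutate and keep the better individual) can be sketched roughly like this. The fitness function, parameter names and ranges are hypothetical stand-ins, and the real optimizer additionally phases the mutation amount and count, so treat this only as an illustration of the idea:

```python
import random

def mutate(params, ranges, amount=0.3):
    """Randomize one randomly chosen parameter within +/- amount of its range."""
    child = dict(params)
    name = random.choice(list(child))
    lo, hi = ranges[name]
    span = (hi - lo) * amount
    child[name] = min(hi, max(lo, child[name] + random.uniform(-span, span)))
    return child

def mutation_search(fitness, ranges, iters=100):
    """Population-1 search: keep the best individual, mutate it each iteration."""
    best = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    best_score = fitness(best)
    for _ in range(iters):
        cand = mutate(best, ranges)
        score = fitness(cand)
        if score > best_score:  # maximizing, accept only improvements
            best, best_score = cand, score
    return best, best_score
```

Because there is no crossover and only one individual is kept, each iteration is cheap and early progress is fast, but with large iteration counts the population-based algorithms overtake it, matching the behavior described for -alg.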
3rd October 2018, 21:50 | #53 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
|
Version 0.9.4-beta released. Some excessive logging removed in timed iteration mode.
9th October 2018, 10:13 | #54 | Link | ||
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
|
Ok, so I gave it a try, here are the impressions.
First off, the Avstimer plugin failed to load on Win10/Avisynth+ MT: when loading via "LoadPlugin" it errored with "platform returned code 126: module not found", the way it reacts when I try to load a non-existent DLL. I tried replacing avisynth.dll with the one from the non-plus version, but it only changed the wording of the error. Fortunately my main OS is Win7 with Avisynth 2.6 MT installed, where it worked; however, Avstimer always returned a time of 9999999 ms. I suppose that made the optimization task much less efficient, because where I expected it to take tens of minutes for 10 720p frames in 3 runs, it took 15 minutes for the first run, 2.5 hours on the second and 1.5 hours on the third. I ran it with the "mutation" algorithm and 100 iters. My system is a Core i5 2550k OC'd to 4.3 GHz, 16 GB RAM.

Here's the script I used, modified from what you offered in the other thread:
Quote:
1) I dropped the RemoveGrain call since my sources are CGI and clean enough already.
2) The FrameEvaluate you use doesn't seem to be the native one, as it complained that it doesn't have the argument "args". It worked when I installed GScript; you might want to add that to the list of dependencies.
3) Shouldn't the optimizer params description for DCT include the D flag, since, much like the searchAlgo param, it is "non-linear" and we can't make assumptions about the value and effect? I put the flag there.

The results were confusing.
Quote:
I think the large search space (22 parameters) might have affected it too. I intend to run this again overnight with whatever iteration count it manages in that time, but first I need to have Avstimer fixed. Maybe I should leave the truemotion tuning for later and first see what more common set of parameters this tool can generate, to compare with the manually tuned ones. Then once I have the good params fixed, I can have it experiment with truemotion.

Another question I wanted to ask: is there a way to estimate the amount of time/iters/population required for a certain number of tunable script parameters? I don't really understand much of the math involved, but I have a feeling that either the population size or the iteration count should scale up with the parameter count somehow. Seeing how the first run took 15 minutes while the second one took 2.5 hours, it seems not every run will even attempt to cover all the parameters involved, since obviously the first one omitted the slowest stuff like DCT 4. But then you said the mutation algorithm can work with population 1. I assume that if we were using brute force instead of metaheuristics, then for every new parameter introduced (or a value added to a list of possible values for an existing parameter) the total combination amount would double. Is there a way to roughly estimate the effect of adding a parameter here?

Last edited by Seedmanc; 9th October 2018 at 13:43. Reason: grammar
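On the brute-force part of the question above: with exhaustive search the total count is the product of the number of allowed values per parameter, so adding a parameter with k values multiplies the total by k (it doubles only for a binary on/off parameter). A small Python sketch, with hypothetical value counts chosen just for illustration:

```python
import math

def search_space_size(value_counts):
    """Total number of combinations an exhaustive search must try:
    the product of how many allowed values each tunable parameter has."""
    return math.prod(value_counts)

# Hypothetical example: blockSize with 6 allowed values, overlap with 4,
# searchAlgo with 5 and DCT with 11 gives 6*4*5*11 = 1320 combinations.
```

This multiplicative growth is exactly why metaheuristics are used once more than a handful of parameters are involved; a metaheuristic's iteration budget does not have to grow at that rate, though in practice it does need to grow with the parameter count.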
9th October 2018, 12:39 | #55 | Link |
Registered User
Join Date: Jul 2003
Location: India
Posts: 890
|
I tried to understand this thread but could not. However, broadly I gather that various parameters for a VHS conversion are tried so as to optimize the FPS and maybe other parameters whose results are not quantifiable. I have also seen mention of using an example frame or frames to arrive at the desired quality.
In oil exploration, a large number of parameters which can vary independently over some ranges are used to estimate the oil reserves present; often Monte Carlo simulation is used. In recent times AI is being used extensively for solving various problems. ANNs can use thousands of parameters to arrive at a solution which mimics an example as closely as possible. It may be worth trying if applicable. If my suggestion is absolutely off track please ignore the post.
9th October 2018, 14:42 | #56 | Link |
Join Date: Mar 2006
Location: Barcelona
Posts: 5,034
|
The avstimer 32 bit plugin that zorr provides is linked against MSVCR71.DLL which is a runtime DLL that is necessary for dynamically linked VC 7.1 binaries. I can only assume that zorr took the original project file and did not modify it for newer versions of VC.
__________________
Groucho's Avisynth Stuff |
9th October 2018, 16:38 | #57 | Link | |
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
|
Quote:
9th October 2018, 22:25 | #58 | Link | |
Registered User
Join Date: Mar 2018
Posts: 447
|
Quote:
9th October 2018, 22:40 | #59 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
For anybody that needs them, here are the VS CPP v7.0 and v7.1 DLLs:
http://www.mediafire.com/file/1220u8...imes.rar/file#
EDIT: Put them in System32, or SysWOW64 on 64-bit Windows.
EDIT: MSVCR70.DLL (VS 2002) and MSVCR71.DLL (VS 2003).
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? Last edited by StainlessS; 9th October 2018 at 22:48. |
9th October 2018, 23:43 | #60 | Link |
Join Date: Mar 2006
Location: Barcelona
Posts: 5,034
|
There's no reason whatsoever to link a current binary against these ancient 7.x DLLs which cause nothing but grief.
zorr, post your current project, I'll have a look. I posted makefiles with the modified code which have the correct compiler and linker settings which you can use to build the DLLs from the command line.
__________________
Groucho's Avisynth Stuff |