Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
14th October 2018, 23:24 | #81 | Link
Registered User
Join Date: Mar 2018
Posts: 447
Another fun (or not) idea that is already possible would be to find the slowest possible parameter combinations... perhaps combined with the worst quality too!

Thanks for the suggestion, I have updated the first post.

Last edited by zorr; 14th October 2018 at 23:38. Reason: More about the maximum search radius
17th October 2018, 23:19 | #82 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
AvisynthOptimizer v0.9.6-beta released. This version improves the mutation algorithm: it now supports sensitivity estimation, has colored console output more in line with the other algorithms, and runs the correct number of script evaluations when using a fixed iteration count.
Seedmanc, do you mind if I create an MVTools2 bug report about the issue you encountered? Or would you like to do it yourself?

Last edited by zorr; 17th October 2018 at 23:20. Reason: Adjusted link text
18th October 2018, 09:18 | #83 | Link
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
zorr, yes, I would rather have you do it, please go ahead.
Does the new version still require the full path to the .avs script? On another note, in the FRC thread you mentioned this:
18th October 2018, 12:52 | #84 | Link
Registered User
Join Date: Jan 2014
Posts: 2,309
The reason why "temporal" is not multithreading friendly is that it requires linear frame access: there is only a single internal buffer that holds the vectors from the previous frame. Previous vectors are used _only_ if the frame order is linear from MAnalyze's point of view: the current frame number = previously analyzed frame number + 1.

So the word "multithreading" here refers to the Avisynth-level multithreading scheme. MAnalyze automatically reports that it works in MT_MULTI_INSTANCE mode under Avisynth+. Perhaps MAnalyze could adaptively report MT_SERIALIZED when temporal=true is set.
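The linear-access requirement can be illustrated with a minimal Python sketch (the class and names are hypothetical stand-ins, not MVTools2 code): a single previous-frame buffer only helps when frames are requested in order, which serialized access guarantees and multithreaded access does not.

```python
# Hypothetical sketch of an analyzer that, like MAnalyze with temporal=true,
# keeps ONE buffer with the vectors of the previously analyzed frame and can
# reuse it only when frames arrive in strict linear order (current == prev + 1).

class TemporalAnalyzer:
    def __init__(self):
        self.prev_frame = None    # number of the last analyzed frame
        self.prev_vectors = None  # the single internal buffer

    def analyze(self, n):
        # Temporal prediction is possible only on linear access; a real
        # implementation would seed the motion search with prev_vectors here.
        used_temporal = (self.prev_frame is not None
                         and n == self.prev_frame + 1)
        self.prev_frame, self.prev_vectors = n, f"vectors{n}"
        return used_temporal

a = TemporalAnalyzer()
linear = [a.analyze(n) for n in range(4)]        # serialized frame order
b = TemporalAnalyzer()
threaded = [b.analyze(n) for n in (0, 2, 1, 3)]  # out-of-order (MT) access
print(linear)    # [False, True, True, True] - only frame 0 lacks a predecessor
print(threaded)  # [False, False, False, False] - temporal prediction is lost
```

This is why reporting MT_SERIALIZED for temporal=true would be the safe choice: it forces the linear order the buffer depends on.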
18th October 2018, 23:34 | #85 | Link
Registered User
Join Date: Mar 2018
Posts: 447
Ok, I will do some more investigation and then create the bug report.
But if we first create the in-between frame with MFlowInter / MFlowFps and then use MCompensate to reconstruct that frame from two nearby frames, something magical happens... Here we see the original frame (orig), a reconstructed frame from MFlowInter (inter) and finally the MCompensated frame (final).
18th October 2018, 23:43 | #86 | Link
Registered User
Join Date: Mar 2018
Posts: 447
19th October 2018, 23:29 | #88 | Link
Registered User
Join Date: Mar 2018
Posts: 447
22nd October 2018, 22:44 | #89 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
AvisynthOptimizer version 0.9.7-beta released.
The source file path (or any path) doesn't need to be an absolute file path anymore. This is implemented by calling SetWorkingDir() at the beginning of the script to set the working directory to the original script's directory. Seedmanc, was this the issue you asked about? I may have misunderstood because you said "full path to the avs script"...

Thanks to Pinterf the crash issue is fixed in the latest MVTools2 version. I have run some 43 000 tests and found no crashes.

Last edited by zorr; 22nd October 2018 at 22:47. Reason: Misspelled word
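For illustration, here is roughly the same idea in Python terms (a hedged sketch: path joining plays the role of AviSynth's SetWorkingDir, and the function name and paths are made up for this example):

```python
import os

def resolve_like_setworkingdir(script_path, relative_source):
    # Mimic what the generated script does: interpret relative paths
    # against the original script's directory instead of whatever the
    # optimizer's current working directory happens to be.
    script_dir = os.path.dirname(os.path.abspath(script_path))
    # AviSynth equivalent inserted at the top of the generated script:
    #   SetWorkingDir("<script_dir>")
    return os.path.normpath(os.path.join(script_dir, relative_source))

print(resolve_like_setworkingdir("/work/scripts/test.avs", "clip.avi"))
# → /work/scripts/clip.avi (on POSIX)
```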
29th October 2018, 20:44 | #90 | Link |
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
By re-enabling MT and trimming the required 10-frame clip into a separate video, I've managed to speed things up from around 150 iterations per hour to a whole thousand, and I can finally see it converging to a more or less singular set of parameters overnight.
However, there are still problems. The nature of SSIM seems to prefer sharp lines over textures and fills, which sometimes makes the results very biased. For example, it always prefers pel=4 over pel=2, despite the readme saying it's not necessarily better, especially considering the time penalty. Apparently pel=2 gives somewhat aliased edges, hardly noticeable by eye but too important for the metric. Another problem is that it also prefers sharp=0 to sharp=2, even when the former is clearly visibly blurrier than the original video. Perhaps it is again the extra attention to lines; especially with the double upsampling method used here, the halos around edges become extra prominent. It's not just SSIM, though: when comparing PSNR or VQM (the latter uses DCTs for comparison) using the MSU VQMT software, it was noticeable how the graphs run in parallel to each other, as if sharp=2 incurred a constant penalty in the metric value, independent of scene complexity.

Another reason might be that I'm testing on 2D animation (or rather, 3D CGI cel-shaded to look 2D), which means lots of very sharp edges with flat fills around them. In this situation a mere half-pixel shift of an edge makes a lot more difference (relatively) than dozens of pixels away where the background is the same color. I can't be bothered to test whether the problem is as strong on real footage, though.

What I tried to do, however, is to obscure the influence of sharpness-related options by downscaling the video just before passing it to SSIM. 1/2 was not enough, but 1/4 per side did the trick: sharpness or aliasing no longer affected the metric, and pel=2 and sharp=2 got about the same share in the results pool as the other values.
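The downscale-before-comparison effect can be sketched with a toy numpy experiment (a single-window "global" SSIM rather than the real windowed one, and random noise standing in for sharp cel edges, but the mechanism is the same): a one-pixel shift devastates full-resolution SSIM, while after 4x4 box downscaling most of each block's content is shared and the penalty shrinks.

```python
import numpy as np

def global_ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Single-window SSIM over the whole image; real SSIM averages many
    # local windows, but the global form is enough to show the effect.
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (x.var() + y.var() + c2))

def downscale(img, f=4):
    # Box filter: average f x f blocks (1/4 per side for f=4).
    h, w = img.shape
    return img[:h // f * f, :w // f * f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))
shifted = np.roll(ref, 1, axis=1)  # crude stand-in for a slightly shifted edge

s_full = global_ssim(ref, shifted)
s_down = global_ssim(downscale(ref), downscale(shifted))
print(s_full, s_down)  # the downscaled comparison is far more forgiving
```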
When calculating total SSIM over the entire video and comparing visually, the sharp option does not really seem to affect the efficiency of frame interpolation in any way, while pel=2 actually looked somewhat better and got a better SSIM than pel=4. Not very significant on its own, but important considering the speed difference with pel=4.

Among other troublesome parameters there are also divide and overlap. The double upsample method used here causes SSIM to always be higher for overlap=0 and no divide, while overlap pretty much universally gives better SSIM and visuals when comparing directly to the original frames, and divide sometimes looks better as well. I couldn't find a solution here; downscaling didn't help, nor can I explain what might be throwing SSIM off in that case. Really, how can the SSIM of this (overlap 0) be higher than of this (overlap half)?

Ok, I need to clarify the images here. I split the video in half by duration and stack the halves vertically, so I only have to step through 5 frames manually instead of 10 when comparing. On the left half is the video after double upsampling, with its SSIM compared against the original frames; on the right it's after single upsampling (how it should be), compared against the discarded frames (the original video is 60fps, so I can drop half and still get a reasonable source framerate). As you can see, the SSIM on the left is inversely proportional to the actual video quality, as opposed to the SSIM on the right. I suppose I'll have to fix overlap to half the block size in the script itself, but I'm disappointed it hates divide so much.

A few more notes: the divide parameter should be marked with the D flag, since divide=2 isn't really any "more divided" than 1, they're just different modes. I also added the padding parameter for MSuper and the new scaleCSAD parameter, added in 2.7, which seems to improve quality when set to a positive value (and the optimizer indeed chooses the maximum value for it).
However, neither DCT, searchalgo nor padding converge to any particular values even after 3000 iterations and unlocking dct=1 (I don't think I saw it choose 1 at all). I'm going to try modifying the script so that it compares against the original discarded frames, to get rid of the mistakes introduced by the double upsampling, and see if that works better.

Here's a Google Spreadsheets link where I tried to analyze (for lack of a better way) the results from several 3-run x 3000-iteration trials with and without downscaling, comparing the distribution of the divide, sharp and pel parameters. I couldn't quite figure out how to make use of the visualizer's groupby method, so I had to come up with my own.

zorr, I mean the need to provide the full path to the .avs when calling the optimizer, even if they're in the same directory.

I'd like to request a way to only generate scripts for the best results of every run instead of the entire pareto front. Usually when AvsOptim finishes I end up manually comparing the run results (with the image setup above) among themselves and against the handpicked best results from previous runs. As of now it requires a lot of manual parameter editing to match the run results reported by Evaluate mode.

Also, I wonder if it would be possible to manually provide one of the generated population members, so that a new run could start with one handpicked best parameter set among the others and perhaps try to improve on top of that. For example, the optimizer could use the values assigned to the variables in the script (before the # optimize part) as one of the population members. Does that even make sense?

Last edited by Seedmanc; 29th October 2018 at 20:49.
30th October 2018, 02:04 | #91 | Link
Registered User
Join Date: Mar 2018
Posts: 447
There's also an SSIM variation called Multiscale SSIM (MS-SSIM) that calculates the SSIM across several scales. I think that would be a good improvement to the quality measurement. It might be possible to implement MS-SSIM using just an Avisynth function.
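As a rough illustration of the multiscale idea (not the exact MS-SSIM algorithm: the real one separates luminance from contrast/structure and uses fixed per-scale weights, omitted here for brevity), here is a hedged numpy sketch that combines a simple single-window SSIM over dyadic scales:

```python
import numpy as np

def _ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Simplified single-window SSIM (real SSIM averages local windows).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (x.var() + y.var() + c2))

def _halve(img):
    # 2x2 box downsample, the low-pass step between scales.
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ms_ssim(x, y, scales=3):
    # Equal-weight geometric mean of SSIM across dyadic scales.
    score = 1.0
    for _ in range(scales):
        score *= _ssim(x, y) ** (1.0 / scales)
        x, y = _halve(x), _halve(y)
    return score

rng = np.random.default_rng(1)
a = rng.uniform(0, 255, (64, 64))
b = a + rng.normal(0.0, 30.0, a.shape)  # noisy copy of the same image
print(ms_ssim(a, a))  # identical images score 1.0
print(ms_ssim(a, b))  # noticeably below 1
```

Because coarse scales average fine detail away, a metric like this is less fixated on sub-pixel edge placement than single-scale SSIM.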
That was brave, unlocking dct=1. But it's definitely there: when I looked at your Google spreadsheet it's even the best result of full run 1.
May I ask why you're doing this frame doubling on a video with a 60fps rate?
But just a quickie here, you could for example run Code:
optimizer -mode evaluate -log "../scripts/script*.log" -groupby super_pel -vismode series Quote:
Last edited by zorr; 30th October 2018 at 21:50. Reason: Fixed the name of the latest optimizer version
30th October 2018, 22:04 | #92 | Link |
Registered User
Join Date: Mar 2018
Posts: 447
Already mentioned in the post above but let's make it official:
AvisynthOptimizer v0.9.8-beta released. This version adds new modes to the -scripts parameter:

There are also some improvements to the groupby functionality:
- multiple scripts with different values for a parameter can be analyzed
- maxgroups also works with parameter values given as a list (previously it had to be a value range)
31st October 2018, 21:38 | #93 | Link
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
https://pastebin.com/uWa1msZ1 - here's an average script I was using when making the Google sheets.

So it looks like we figured out which options differ the most between working with real footage and animation: that would be super_sharp and divide at least, since we get very different results for them. Possibly overlap and DCT too. Also it always sets maskScale to 1, even though with the default 100 it looks a little better.
31st October 2018, 23:31 | #94 | Link
Registered User
Join Date: Mar 2018
Posts: 447
I'm going to have to do a similar experiment, downscaling the result before SSIM comparison and comparing that to non-downscaled SSIM. Perhaps it's always better to downscale.
Also I want to ask whether the results on one tab are from a single run or from multiple runs. There are over 10000 results in full1 and full2, so I'm guessing they consist of multiple runs. Full2 doesn't have the scaleCSAD parameter, was that your first script variation? It would help the analysis if you posted the original log files.
[EDIT] Forgot to ask, which algorithm are you using when running the optimization? If it's still "mutation" I recommend you try the default "SPEA2", because with thousands of iterations it gets better results.

Last edited by zorr; 31st October 2018 at 23:43. Reason: ScaleCSAD mystery solved
1st November 2018, 08:22 | #95 | Link
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
I can figure out the script for single upsampling, thanks. About negative values: when I was introducing negative badRange, somehow the notation -50..50 didn't work, so I stopped trying it (well, it would actually be the wrong approach for that particular parameter anyway). Try and see if negative ranges work for you with scaleCSAD.

As I mentioned, each tab consists of log values from 3 runs of 3000 iterations together. The leftmost column has all the log values; I only removed the header and sorted by SSIM. Yes, Full2 was one of the earliest. I've switched to SPEA2 around the time the MVTools bugs were discussed, so I'm using that now.
1st November 2018, 21:40 | #96 | Link
Registered User
Join Date: Mar 2018
Posts: 447
If one does the MCompensate correction after MFlowFPS, the most correct vectors should produce better image quality even according to SSIM. But doing that requires optimizing both processes at the same time, roughly doubling the number of optimized parameters. Or perhaps it could be done in turns: first optimizing MFlowFPS only, then MCompensate, then MFlowFPS again. Sorry if this doesn't make much sense; I need to make a thread about that technique...
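The "in turns" idea is essentially block-coordinate search, which can be sketched generically in Python (the quality function below is a toy stand-in with interacting parameters, not an actual MFlowFPS/MCompensate pipeline):

```python
# Hold one parameter group fixed while exhaustively searching the other,
# then swap, for a few rounds (block-coordinate ascent on a quality score).

def optimize_in_turns(quality, flowfps_grid, compensate_grid, rounds=3):
    best_f, best_c = flowfps_grid[0], compensate_grid[0]
    for _ in range(rounds):
        # Optimize the "MFlowFPS" group with the "MCompensate" group fixed...
        best_f = max(flowfps_grid, key=lambda f: quality(f, best_c))
        # ...then the "MCompensate" group with the "MFlowFPS" group fixed.
        best_c = max(compensate_grid, key=lambda c: quality(best_f, c))
    return best_f, best_c

# Toy quality surface whose two parameters interact via the cross term.
q = lambda f, c: -(f - 3) ** 2 - (c - 5) ** 2 - 0.5 * (f - 3) * (c - 5)
print(optimize_in_turns(q, range(8), range(8)))  # converges to (3, 5)
```

Each turn searches only one group's grid, so the cost stays linear in the grid sizes instead of multiplying them, at the risk of missing combinations that are only good jointly.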
There must be something else there as well: full1 has 10688 results and full2 has 14624 results. Perhaps you ran them with the time limit?
2nd November 2018, 21:37 | #97 | Link
Registered User
Join Date: Sep 2010
Location: Russia
Posts: 85
3rd November 2018, 14:34 | #98 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
What am I doing wrong? I tried different names but I always get this error msg:
Code:
Found following optimizable parameters:
# optimize tr = _n_ | 1..4 | tr
found 1 parameters to optimize
Running SPEA2
java.lang.Exception: Could not update parameter value for [tr = _n_]
        at avisynthoptimizer.Parameter.getLine(Parameter.java:617)
....
Code:
TEST_FRAMES = 10 # how many frames are tested
MIDDLE_FRAME = 50 # middle frame number

ffms2("E:\cut.mkv").ConvertBits(8)
source = last
last=source.AddGrain(80, 0, 0, seed=2)

tr =1 # optimize tr = _n_ | 1..4 | tr
denoised = TemporalDegrain2(degrainTR=tr)
last = denoised

# calculate SSIM value for each test frame
global total = 0.0
global ssim_total = 0.0
FrameEvaluate(last, """
global ssim = SSIM_FRAME(source, denoised)
global ssim = (ssim == 1.0 ? 0.0 : ssim)
global ssim_total = ssim_total + ssim
""")

# measure runtime, plugin writes the value to global avstimer variable
# NOTE: AvsTimer should be called before WriteFile
global avstimer = 0.0
AvsTimer(frames=1, type=0, total=false, name="Optimizer")

# per frame logging (ssim, time)
delimiter = "; "
resultFile = "D:\AvisynthRepository\AvisynthOptimizer-0.9.8-beta\perFrame.txt"

# output
out1="ssim: MAX(float)"
out2="time: MIN(time) ms"
file="D:\AvisynthRepository\AvisynthOptimizer-0.9.8-beta\perFrame.txt"

WriteFile(resultFile, "current_frame", "delimiter", "ssim", "delimiter", "avstimer")

# write "stop" at the last frame to tell the optimizer that the script has finished
frame_count = FrameCount()
WriteFileIf(resultFile, "current_frame == frame_count-1", """ "stop " """, "ssim_total", append=true)

# NOTE: must return last or FrameEvaluate will not run
return last
#Prefetch(0)
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database
3rd November 2018, 20:06 | #100 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
Ahhhh, didn't know that, thx, works now.