Old 14th October 2018, 23:24   #81  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by Seedmanc View Post
So, after much suffering (because nothing helped against the crashes
...
it's the combination of pel 4, blocksize 8 with overlap 4, divide > 0 with large search radius and (surprise!) removal of ConvertToYV24.
...
Further testing revealed that for pel 4 it is enough to have search radius of 4 to cause error, with pel 2 it takes around 12 and I couldn't reproduce it for pel 1.
I was able to reproduce this even with my low resolution source. The source needs to be (or be converted to) YV12 though; even YUY2 works without crashing. With my slightly older MVTools it also crashes with super_pel=2, but not 1.

Quote:
Originally Posted by Seedmanc View Post
While I admit that having a search radius larger than block size seems strange
I don't think the block size affects the search radius. The block size determines how large the blocks the algorithm tries to track are, while the search radius determines how far a block can move between frames. I guess the practical upper limit for the search radius is the width of the frame (or the height, if that is larger). [EDIT] Actually it's the diagonal of the frame, Sqrt(width*width + height*height). And here we're assuming the radius is defined in pixels, which is not the case in every search algorithm.
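For a quick sanity check, the bound can be computed like this (plain Python, just the formula above; illustrative only):

```python
import math

def max_search_radius(width, height):
    """Upper bound for a useful search radius in pixels:
    no block can move farther than the frame diagonal."""
    return math.sqrt(width * width + height * height)

# For a 1920x1080 frame the bound is about 2203 pixels.
print(round(max_search_radius(1920, 1080)))
```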

Quote:
Originally Posted by Seedmanc View Post
What's more weird is that unlike the chroma subsampling violation it does not necessarily raise an error right away, sometimes it happens in the middle or at the end of the script, sometimes it's not the MT error but some random access violation.
I found an MVTools2 bug earlier which was only triggered every 10th or 20th run of the script, reported here.

Quote:
Originally Posted by Seedmanc View Post
I don't even know how to report it, the thread has been abandoned for months.
I guess pinterf is very busy, but he did respond to my bug reports swiftly; I think the deciding factor was that he was able to reproduce the problem. So in this case, with an almost 100% error rate, I think he can find the issue pretty soon.

Quote:
Originally Posted by Seedmanc View Post
I guess more filters/minmaxes are in order, but with the current notation it's hard to figure out how to write them. Set divide to 0 when blocksize is 8 and overlap 4 and pel > 1.
Certainly possible, but I would do that only as a last resort; let's give pinterf a chance to fix it first. With the details you figured out, your error report would be very good. Getting rid of the reverse polish notation is on my todo list.

Quote:
Originally Posted by Seedmanc View Post
In a way, Optimizer can be used as an automated plugin testing tool, since it tries out so many parameter combinations and reveals all kinds of bugs and readme inconsistencies.
I agree, I have found two bugs from MVTools using the optimizer, both were fixed. Perhaps I should also add a mode to the optimizer where you give it a failing script and it tries to figure out all the parameter combinations that trigger the error.

Another fun (or not) idea that is already possible would be to find the slowest possible parameter combinations... perhaps combined with the worst quality too!
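The failure-hunting mode described above could work roughly like this (a Python sketch; `runs_without_crash` is a hypothetical stand-in for one run of the script, not anything the optimizer actually exposes):

```python
from itertools import product

def find_failing_combos(param_space, runs_without_crash):
    """Try every parameter combination and collect the ones that fail.
    param_space maps a parameter name to its candidate values;
    runs_without_crash(combo) stands in for one script execution."""
    names = list(param_space)
    failing = []
    for values in product(*(param_space[n] for n in names)):
        combo = dict(zip(names, values))
        if not runs_without_crash(combo):
            failing.append(combo)
    return failing

# Toy model of the crash discussed above: pel=4 with radius >= 4 fails.
crashes = lambda p: p["pel"] == 4 and p["radius"] >= 4
space = {"pel": [1, 2, 4], "radius": [2, 4, 8]}
print(find_failing_combos(space, lambda p: not crashes(p)))
```

A real implementation would of course launch Avisynth for each combination instead of calling a predicate, and would need something smarter than exhaustive search for large parameter spaces.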

Quote:
Originally Posted by Seedmanc View Post
Also, zorr, you might want to link your large explanatory posts from this thread in the first post; now that the discussion took off it'll be more difficult to find them later.
Thanks for the suggestion, I have updated the first post.

Last edited by zorr; 14th October 2018 at 23:38. Reason: More about the maximum search radius
zorr is offline   Reply With Quote
Old 17th October 2018, 23:19   #82  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
AvisynthOptimizer v0.9.6-beta released. This version has improvements to the mutation algorithm. It now supports sensitivity estimation, has colored console output more like the other algorithms, and uses the correct number of script evaluations when using a fixed iteration count.

Seedmanc, do you mind if I create an MVTools2 bug report about the issue you encountered? Or would you like to do it yourself?

Last edited by zorr; 17th October 2018 at 23:20. Reason: Adjusted link text
Old 18th October 2018, 09:18   #83  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 85
zorr, yes, I would rather have you do it, please go ahead.

Does the new version still require full path to the avs script?

On another note, in the FRC thread you mentioned this:
Quote:
Oh and one more thing, I came up with a way to limit the ugly artifacts you often get with MFlowFPS when good motion vectors are not found. Basically you reconstruct the frame created with MFlowFPS using MCompensate and the original frames. You just need to find good parameters for that, which you can do with the optimizer. I will give an example of that later.
I'm not sure I understand this. If you can't get good vectors for MFlow then you don't have good vectors for any other tool anyway.
Old 18th October 2018, 12:52   #84  |  Link
pinterf
Registered User
 
Join Date: Jan 2014
Posts: 1,850
Quote:
Originally Posted by Seedmanc View Post
Finally, I wonder about the temporal parameter. The readme says it's incompatible with SetMTMode, however the new MVTools have the MT parameter built in and on by default; do we know if it should be disabled for temporal? Again, it doesn't raise errors, the output looks different, but then it also looks different when disabling MT for temporal=false as well. Really, the readme should be updated there.
My MVTools2 fork originated from 2.6.0.5, which had internal multithreading through avstp.dll. Internal multithreading served well in non-MT capable Avisynth versions or in scripts which could not be run in MT for some reason. So the mt parameter applies to internal multithreading.

The reason why "temporal" is not multithreading friendly is that it requires linear frame access: there is only a single internal buffer that holds the vectors from the previous frame. Previous vectors are used _only_ if the frame order is linear from MAnalyze's point of view: the current frame number = previously analyzed frame number + 1. So the word "multithreading" here refers to the Avisynth-level multithreading scheme.
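As a toy model of this single-buffer behaviour (Python, not the actual MAnalyze code), linear access reuses the previous vectors at every step while an out-of-order, MT-style access pattern rarely does:

```python
class TemporalBuffer:
    """Models MAnalyze's temporal mode: one buffer of previous vectors,
    valid only when frames arrive in strictly linear order."""
    def __init__(self):
        self.last = None

    def analyze(self, n):
        # Previous vectors are used ONLY if n == previously analyzed + 1.
        reused = self.last is not None and n == self.last + 1
        self.last = n
        return reused

serial = TemporalBuffer()
print([serial.analyze(n) for n in (0, 1, 2, 3)])   # linear: reuse from frame 1 on

threaded = TemporalBuffer()
print([threaded.analyze(n) for n in (0, 2, 1, 3)]) # this ordering: no reuse at all
```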

MAnalyze automatically reports MT_MULTI_INSTANCE mode under Avisynth+. Perhaps MAnalyze could adaptively report MT_SERIALIZED when temporal=true is set.
Old 18th October 2018, 23:34   #85  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by Seedmanc View Post
zorr, yes, I would rather have you do it, please go ahead.
Ok, I will do some more investigation and then create the bug report.

Quote:
Originally Posted by Seedmanc View Post
Does the new version still require full path to the avs script?
Yes, unfortunately. But now that you mentioned it I will take a look at how hard it would be to fix.

Quote:
Originally Posted by Seedmanc View Post
I'm not sure I understand this. if you can't get good vectors for MFlow then you don't have good vectors for any other tool anyway.
That's certainly true. But it just happens that MFlowInter and MFlowFps have an unfortunate-looking failure case, which MCompensate doesn't have. So even with (actually, especially with) bad vectors, MCompensate will look better. Of course you can't use MCompensate to generate an inbetween frame; it can only recreate a complete frame using other frames.

But if we first create the inbetween frame with MFlowInter / MFlowFps and then use MCompensate to reconstruct that frame using two nearby frames, something magical happens...



Here we see the original frame (orig), a reconstructed frame from MFlowInter (inter) and finally the MCompensated frame (final).
Old 18th October 2018, 23:43   #86  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by pinterf View Post
The reason why "temporal" is not multithreading friendly is that it requires linear frame access: there is only a single internal buffer that holds the vectors from previous frame. Previous vectors are used _only_ if frame order is linear from MAnalyze point of view: the current frame number = previously analyzed frame number + 1.
Does this mean that previous vectors are sometimes used depending on the scheduling of cores, or does this condition never happen in practice when multithreading is enabled?
Old 19th October 2018, 15:50   #87  |  Link
pinterf
Registered User
 
Join Date: Jan 2014
Posts: 1,850
It depends on scheduling.
Btw, new mvtools2 released, fixing an issue in MAnalyze which could cause artifacts also in MFlow*** at larger blocksizes/lambda.
Old 19th October 2018, 23:29   #88  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by pinterf View Post
It depends on scheduling.
Btw, new mvtools2 released, fixing an issue in MAnalyze which could cause artifacts also in MFlow*** at larger blocksizes/lambda.
Thanks! I used the latest version to test the crash issue Seedmanc found and it's still there. I created a bug report.
Old 22nd October 2018, 22:44   #89  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
AvisynthOptimizer version 0.9.7-beta released.

The source file path (or any path) doesn't need to be an absolute file path anymore. This is implemented by calling SetWorkingDir() at the beginning of the script to set the working directory to the original script's directory.
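The idea can be sketched like this (Python, purely illustrative; the optimizer itself does not work this way internally):

```python
import os

def working_dir_for(script_path):
    """Resolve a possibly-relative script path and return the directory
    that a generated script would SetWorkingDir() to."""
    abs_path = os.path.abspath(script_path)
    return os.path.dirname(abs_path)

# A relative path now resolves against the current directory.
print(working_dir_for("test.avs") == os.getcwd())
```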

Seedmanc, was this the issue you asked about? I may have misunderstood, because you said "full path to the avs script"...

Thanks to Pinterf the crash issue is fixed in the latest MVTools2 version. I have run some 43 000 tests and found no crashes.

Last edited by zorr; 22nd October 2018 at 22:47. Reason: Misspelled word
Old 29th October 2018, 20:44   #90  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 85
By re-enabling MT and trimming the required 10-frame clip into a separate video I've managed to speed up things from around 150 iters per hour to a whole thousand, and I can finally see it converging to a more or less singular set of parameters overnight.

However, there are still problems. It seems the nature of SSIM makes it prefer sharp lines over textures and fills, which sometimes makes the results very biased. For example, it always prefers pel=4 over pel=2, despite the readme saying it's not necessarily better, especially considering the time penalty. Apparently pel=2 gives somewhat aliased edges, hardly noticeable by eye, but too important for the metric. Another problem is that it also prefers sharp=0 to sharp=2, even when it's clearly visible that the former looks considerably blurrier than the original video. Perhaps it is again due to the extra attention to lines; especially with the double upsampling method used here, the halos around edges become extra prominent. Though it's not just about SSIM: when comparing PSNR or VQM (the latter uses DCTs for comparison) using the MSU VQMT software, it was noticeable how the graphs align in parallel to each other, as if sharp=2 incurred a constant penalty in the metric value, independent of the scene complexity. Another reason might be that I'm testing it on a 2D animation (or rather, 3D CGI which is cel-shaded to look 2D), which means lots of very sharp edges with flat fills around them. In this situation, a mere half-pixel shift of an edge makes a lot more difference (relatively) than dozens of pixels away where the background is the same color. I can't be bothered to test whether the problem is as strong on real footage, though.
What I tried to do, however, is to obscure the influence of the sharpness-related options by downscaling the video just before passing it to SSIM. 1/2 was not enough, but 1/4 on each side did the trick - sharpness or aliasing no longer affected the metrics, resulting in pel=2 and sharp=2 getting about the same share in the results pool as other values. When calculating total SSIM over the entire video and comparing visually, it looks like the sharp option does not really affect the efficiency of frame interpolation in any way, while pel=2 actually looked somewhat better and got a better SSIM than pel=4. Not very significant on its own, but important considering the speed difference with pel=4.

Among other troublesome parameters, there are also divide and overlap. The double upsample method used here causes the SSIM to always be higher for overlap=0 and no divide. Meanwhile overlap pretty much universally gives better SSIM and visuals when comparing directly to original frames, and divide sometimes looks better as well. I couldn't find a solution here; downscaling didn't help, nor can I explain what might be throwing SSIM off in that case. Really, how can SSIM of this (overlap 0) be higher than of this (overlap half)? Ok, I need to clarify the images here: I split the video in half by duration and stack the halves vertically, so I only have to go through 5 frames manually instead of 10 when comparing. Then, on the left half is the video after double upsampling and the SSIM for it compared to the original frames, while on the right it's after single upsampling (how it should be) and compared to the discarded frames (the original video is 60fps so I can drop half and still get a reasonable source framerate). As you can see, SSIM on the left is inversely proportional to the actual video quality, as opposed to SSIM on the right. I suppose I'll have to fix overlap to half the blocksize in the script itself, but I'm disappointed it hates divide so much.

A few more notes: the Divide parameter should be marked with the D flag, since divide=2 isn't really any "more divided" than 1; they're just different modes. I also added a padding parameter for MSuper and the new parameter scaleCSAD, added in 2.7, which seems to improve quality when set to a positive value (and the optimizer indeed chooses the maximum value for it). However neither DCT, searchalgo nor padding converges to any particular value even after 3000 iterations and unlocking dct=1 (I don't think I saw it choose 1 at all). I'm going to try modifying the script so that it compares to the original discarded frames to get rid of the mistakes introduced by the double upsampling and see if it's gonna be better.
Here's a Google Spreadsheets link where I tried to analyze (for lack of a better way) results from several 3run*3000iter trials with downscaling and without, comparing the distribution of the divide, sharp and pel parameters. I couldn't quite figure out how to make use of the visualizer's groupby method, so I had to come up with my own.


zorr, I mean the need to provide full path to .avs when calling the optimizer even if they're in the same directory.
I'd like to request a way to only generate scripts for the best results of every run instead of the entire pareto front. Usually when AvsOptim finishes I end up manually comparing the run results (with the image setup above) among themselves and with the handpicked best results from previous runs. As of now it requires a lot of manual parameter editing to match the run results reported by Evaluate mode.
Also, I wonder if it would be possible to manually provide one of the generated population members so that a new run could start with one handpicked best parameter set among others and perhaps try to improve on top of that. For example, the optimizer would use the values assigned to var in the script (before the # optimize part) as one of the population members. Does that even make sense?

Last edited by Seedmanc; 29th October 2018 at 20:49.
Old 30th October 2018, 02:04   #91  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by Seedmanc View Post
However, there are still problems. It seems the nature of SSIM makes preference of sharp lines to textures and fills, which makes the results very biased sometimes.
That could be the case. Our eyes are also very sensitive to sharp lines (much more than to textures) so perhaps that is just the way SSIM is supposed to work.

Quote:
Originally Posted by Seedmanc View Post
For example, it always prefers pel=4 over pel=2, despite it being said in the readme that it's not necessarily better, especially considering the time penalty. Apparently pel=2 gives somewhat aliased edges, hardly noticeable by eye, but too important for the metric.
I can confirm that, pel=4 is pretty much always the chosen one for the best result. But the pareto front does contain pel=2 because it gives a significant speed increase.

Quote:
Originally Posted by Seedmanc View Post
Another problem is that it also prefers sharp=0 to sharp=2, even when it's clearly visible that the former looks considerably blurrier than the original video. Perhaps it is again due to the extra attention to lines, especially with the double upsampling method used here, the halos around edges become extra prominent.
In my tests sharp=2 is usually the better one, but then again I haven't done a lot of testing with animations. I did test a clip of "Frozen" where sharp=2 was again the winner.

Quote:
Originally Posted by Seedmanc View Post
Though it's not just about SSIM, when comparing PSNR or VQM (the latter uses DCTs for comparison), using the MSU VQMT software, it was noticeable how the graphs align in parallel to each other, as if sharp=2 incurred a constant penalty in the metrics value, independent of the scene complexity. Another reason might be that I'm testing it on a 2D animation (or rather, 3D CGI which is cellshaded to look 2D), which means lots of very sharp edges with flat fills around them. In this situation, a mere half-pixel shift of an edge makes a lot more difference (relatively) than dozens of pixels away if the background is the same color.
That sounds peculiar; perhaps something is not quite right with the script. Have you tried converting the video into a lossless AVI format and using that as the source? In my experience the other source filters are not as reliable.

Quote:
Originally Posted by Seedmanc View Post
What I tried to do, however is to obscure the influence of sharpness-related options by downscaling the video just before passing it to SSIM. 1/2 was not enough, but 1/4 by each side did the trick - sharpness or aliasing no longer affected the metrics, resulting in pel=2 and sharp=2 getting about the same share in the results pool as other values.
I am not sure if that is the best way to handle this kind of situation though. Basically the optimizer is now blind to these parameters and cannot help you find their optimal values. You could perhaps achieve the same effect by disabling the optimization for these parameters and in the end manually checking whether different values for pel and sharp have an effect on the final result.

There's also an SSIM variation called Multiscale SSIM (MS-SSIM) that calculates the SSIM at several different scales. I think that would be a good improvement to the quality measurement. It might be possible to implement MS-SSIM using just an Avisynth function.
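To illustrate the idea, here is a much-simplified Python sketch: SSIM computed from global image statistics (real implementations use 11x11 Gaussian windows), a 2x2 box downscale between scales, and the per-scale weights from the original MS-SSIM paper. This is not VQMT's implementation, just the shape of the algorithm:

```python
def global_ssim(x, y, L=255.0):
    """SSIM from global image statistics (a simplification of the
    usual windowed version). x and y are flat lists of pixel values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

def downscale2x(img, w, h):
    """Average 2x2 blocks; img is a flat row-major list."""
    out = []
    for j in range(0, h - 1, 2):
        for i in range(0, w - 1, 2):
            s = (img[j * w + i] + img[j * w + i + 1]
                 + img[(j + 1) * w + i] + img[(j + 1) * w + i + 1])
            out.append(s / 4.0)
    return out, w // 2, h // 2

def ms_ssim(x, y, w, h, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Weighted product of per-scale SSIM values (standard MS-SSIM weights)."""
    score = 1.0
    for wgt in weights:
        score *= global_ssim(x, y) ** wgt
        x, w2, h2 = downscale2x(x, w, h)
        y, _, _ = downscale2x(y, w, h)
        w, h = w2, h2
    return score
```

Identical clips score 1.0, and because the coarser scales dominate the weights, a half-pixel edge shift costs far less than it does in single-scale SSIM.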

Quote:
Originally Posted by Seedmanc View Post
When calculating total SSIM over entire video and comparing visually, it looks like the sharp option does not really affect the efficiency of frame interpolation in any way, meanwhile pel=2 actually looked somewhat better and got a better SSIM than pel=4. Not very significant on its own, but considering the speed difference with pel=4, important.
Can you clarify this? So was the SSIM for pel=4 better in the 10 frame part (earlier you said it prefers pel=4) but pel=2 was better when calculating it for the whole clip? If you meant that, then yes, this could happen because the short 10 frame segment may not be a good representation of the whole clip. The optimizer tends to "overoptimize" in a way: it only cares about the short segment it is given to work with, and because it also optimizes for speed it tends to fine-tune the arguments (for example the search ranges) so that they just barely work. Perhaps it would be better to take the 10 frames from different parts of the clip, even though that will increase the processing time because the algorithm uses at least 4 frames around the constructed frame. Also it might be a good idea to "loosen" the found parameters, for example by using slightly larger search ranges for the whole clip.

Quote:
Originally Posted by Seedmanc View Post
Among other troublesome parameters, there are also divide and overlap. The double upsample method used here causes the SSIM to always be higher for overlap=0 and no divide. Meanwhile overlap pretty much universally gives better SSIM and visuals when comparing directly to original frames, and divide sometimes looks better as well.
My experience is different, optimal overlap is usually not zero (at least in both directions) and divide=2 was the optimal result in my latest run.

Quote:
Originally Posted by Seedmanc View Post
I couldn't find a solution here, downscaling didn't help, nor I can explain what might be throwing SSIM off in that case. Really, how can SSIM of this (overlap 0) be higher than of this (overlap half)?
Is the SSIM value for both frames? In that case the bottom left picture might explain it; to my eye it looks a lot more garbled and deserves a lower SSIM.

Quote:
Originally Posted by Seedmanc View Post
As you can see, SSIM on the left is inversely proportional to the actual video quality as opposed to SSIM on the right. I suppose I'll have to fix overlap to half the blocksize in the script itself, but I'm disappointed it hates divide so much.
As fancy as the optimizer is, it's just a tool with flaws. It may not find the optimal values when given too much freedom, so giving it stricter limits will probably help. Or give it more iterations; those always help. Actually I have some pretty encouraging results from my latest optimizer run with 50 000 iterations: *most* of the parameters have now converged.

Quote:
Originally Posted by Seedmanc View Post
A few more notes, the Divide parameter should be marked with the D flag, since divide=2 isn't really any "more divided" than 1, just different modes.
Good point, I will try that too.

Quote:
Originally Posted by Seedmanc View Post
I also added padding parameter for MSuper and the new parameter scaleCSAD, added in 2.7, which seems to improve quality when set to positive value (and the optimizer indeed chooses the maximum value for it).
Oh, I just noticed I have been reading old MVTools documentation; I didn't even know about this parameter!

Quote:
Originally Posted by Seedmanc View Post
However neither DCT, nor searchalgo or padding converge to any particular values even after 3000 iterations
Converging is not necessary though. If the parameter doesn't change the result then it really doesn't matter which value is used. But 3000 iterations is probably just not enough for them to converge (I can tell you that 50 000 iterations will make searchalgo converge, in my case to value 6).

Quote:
Originally Posted by Seedmanc View Post
unlocking dct=1 (I don't think I saw it choosing 1 at all).
That was brave, unlocking dct=1. But it's definitely there when I look at your Google spreadsheet; it's even the best result of full run 1.

Quote:
Originally Posted by Seedmanc View Post
I'm going to try modifying the script so that it compares to the original discarded frames to get rid of the mistakes introduced by the double upsampling and see if it's gonna be better.
A good idea. You can just remove the second pass and change which clip is used inside FrameEvaluate. Just double-check that you're comparing the correct frames.

May I ask you why you're doing this frame doubling to a video with 60fps rate?

Quote:
Originally Posted by Seedmanc View Post
Here's a Google Spreadsheets link where I tried to analyze (for the lack of a better way) results from several 3run*3000iter trials with downscaling and without, comparing the distribution of divide, sharp and pel parameters. I couldn't quite figure out how to make use of the visualizer's groupby method, so I had to come up with my own.
Thanks for that. It's high time I continued the documentation in this thread, especially the evaluation part. But that will have to wait a little bit longer. I will hopefully have some time tomorrow to run some analysis with your data. Could you also share the script (or scripts) you used to create the logs; that will help especially with the groupby functionality.

But just a quickie here, you could for example run

Code:
optimizer -mode evaluate -log "../scripts/script*.log" -groupby super_pel -vismode series
You should now see two lines which represent the pareto fronts with values pel=2 and pel=4.

Quote:
Originally Posted by Seedmanc View Post
zorr, I mean the need to provide full path to .avs when calling the optimizer even if they're in the same directory.
OK, I will have to investigate that. I have used a relative path without any issues.

Quote:
Originally Posted by Seedmanc View Post
I'd like to request a way to only generate scripts for the best results of every run instead of the entire pareto front. Usually when AvsOptim finishes I end up manually comparing the run results (with the image setup of above) among themselves and with the handpicked best results from previous runs. As of now it requires a lot of manual parameter editing to match the run results reported by Evaluate mode.
Very well, AvisynthOptimizer v0.9.8-beta is ready for download. The scripts-parameter now takes values "none", "pareto", "best", "bestofrun" in addition to "true" and "false" (these last two are equal to "pareto" and "none"). In your case you want the "bestofrun" option. The "best" will only create a script of the best pareto front result.

Quote:
Originally Posted by Seedmanc View Post
Also, I wonder if it would be possible to manually provide one of the generated population members so that a new run could start with one handpicked best parameter set among others and perhaps try to improve on top of that. For example, the optimizer would use the values assigned to var in the script (before the # optimize part) as one of the population members. Does that even make sense?
It does make sense algorithm-wise, I will have to see how to manage that with the libraries I'm using. Stay tuned...

Last edited by zorr; 30th October 2018 at 21:50. Reason: Fixed the name of the latest optimizer version
Old 30th October 2018, 22:04   #92  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Already mentioned in the post above but let's make it official:
AvisynthOptimizer v0.9.8-beta released.

This version adds new modes to the -scripts parameter:
  • "none": do not write any script files (same as "false")
  • "pareto": write scripts from the pareto front (same as "true")
  • "best": write a script from the best pareto front result
  • "bestofrun": write scripts from the best result of each run (thanks Seedmanc for the suggestion)

There are also some improvements to the groupby functionality:
  • multiple scripts with different values for a parameter can be analyzed
  • maxgroups also works with parameter values given as a list (previously it had to be a value range)
Old 31st October 2018, 21:38   #93  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 85
Quote:
I am not sure if that is the best way to handle this kind of situation though. Basically the optimizer is now blind to these parameters and cannot help you find the optimal values for those.
It's not really blind, I'm just removing the distractions. Super_sharp and super_pel affected sharpness and edge aliasing much more than the actual motion interpolation quality (as in vectors), thereby biasing the SSIM. At least for me it is more important to measure how well the vectors are determined when optimizing for MFlowFps, rather than how sharp the picture is. Therefore I'm removing the constant from the equation to better see the effect those parameters have (or don't).
Quote:
Can you clarify this?
No, I mean that I compared the SSIM effect of super_sharp and super_pel after downscaling 4x, over the entire video, using the MSU VQMT tool that can plot a graph for various metrics and calculate the average over the entire video. The downscaling removed the bias for line sharpness and allowed me to evaluate the real influence of those parameters on the vector quality. As it turned out, super_sharp didn't really have any significant effect, regardless of video length, but pel=2 seems to be somewhat better. Here's how the graph looked without downscaling for super_sharp: image. That's the parallel alignment I was talking about; it's not due to some frame shift, if that's what you meant.

Quote:
Is the SSIM value for the both frames? In that case the bottom left picture might explain it, in my eye it looks a lot more garbled and deserves lower SSIM.
It does deserve a lower SSIM but, as the image shows, with the double upsample method it gets a higher SSIM instead - 0.9564 instead of 0.9555. Single upsampling makes no such error; for it, SSIM correlates with the actual visual quality more often. Again, the left half of the image is what your double upsampling produces, and the SSIM on the left is calculated for it against the original frames. On the right half is the result of single upsampling after discarding half of the frames, with the SSIM compared to the discarded frames. The middle number is the ratio between the two; it doesn't really mean anything.

Quote:
May I ask you why you're doing this frame doubling to a video with 60fps rate?
60 fps is the framerate of my test clip. I use it a lot since with that FPS I can drop half the frames and still end up with 30fps to upsample from, while being able to compare to the discarded frames without having to do double upsampling. But actually I can go higher than 60 too, since my monitor can show 120+. For now though I want to learn doing 30 to 60 at least.
https://pastebin.com/uWa1msZ1 - here's an average script I was using when making Google sheets.

So it looks like we figured out which options differ the most between working with real footage and animation. That would be super_sharp and divide at least, since we get very different results for them. Possibly overlap and DCT too. Also it always sets maskScale to 1, even though with the default 100 it looks a little better.
Old 31st October 2018, 23:31   #94  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 246
Quote:
Originally Posted by Seedmanc View Post
It's not really blind, I'm just removing the distractions. Super_sharp and super_pel affected sharpness and edge aliasing much more than the actual motion interpolation quality (as in vectors), therefore biasing the SSIM. At least for me it is more important to measure how well the vectors are determined when optimizing for MFlowFps, rather than how sharp the picture is. Therefore I'm removing the constant from the equation to better see the effect those parameters have (or not).
Ah, I see it now. A bit surprising though that pel=2 would create better vectors. It just occurred to me that maybe it has a larger effective search range, assuming that the search ranges are not scaled by pel. That could certainly explain the better vectors.
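If that assumption holds (the radius being counted in subpel steps rather than pixels; I haven't verified this against the MVTools source), the effective range in full pixels would shrink as pel grows:

```python
def effective_range_px(radius, pel):
    """Search range in full pixels, assuming the radius is counted in
    subpel steps rather than pixels (an unverified assumption)."""
    return radius / pel

# The same radius reaches twice as far with pel=2 as with pel=4.
print(effective_range_px(8, 2), effective_range_px(8, 4))
```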

I'm going to have to do a similar experiment downscaling the result before SSIM comparison and comparing that to non-downscaled SSIM. Perhaps it's always better to downscale.
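As a sketch, that downscale-before-SSIM variant could look something like this (the clip names `result` and `reference` are placeholders, and `SSIM_FRAME` is the same comparison function used in the scripts later in this thread):

```avisynth
# Hypothetical sketch: shrink both clips 4x before measuring SSIM,
# so high-frequency sharpening differences don't dominate the score.
small_result    = result.BilinearResize(result.Width()/4, result.Height()/4)
small_reference = reference.BilinearResize(reference.Width()/4, reference.Height()/4)
global ssim = SSIM_FRAME(small_reference, small_result)
```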

Quote:
Originally Posted by Seedmanc View Post
I compared the SSIM effect of super_sharp and super_pel after downscaling 4x for the entire video, using the MSU VQMT tool, which can plot the graph for various metrics and calculate the average over the entire video.
I have MSU VQMT downloaded but haven't tried it yet. I guess at least some of that functionality could be replaced with an Avisynth script and/or the optimizer with some tweaks. Are you using it primarily to compare the total SSIM of two clips, or also to see how they change visually?

Quote:
Originally Posted by Seedmanc View Post
It does deserve a lower SSIM but as the image shows, for the double upsample method it gets a higher SSIM instead - 0.9564 instead of 0.9555.
Oh, but I meant that the 0.9564 frame was better. At least I can understand why SSIM would rate it that way - it doesn't have a lot of high-frequency "noise" like the other picture.

Quote:
Originally Posted by Seedmanc View Post
Single upsampling makes no such error; for it, SSIM correlates with the actual visual quality more often.
In that case (and because you do have a 60fps source) you should definitely only do a single upsampling. Do you need help with the script implementing that?
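For what it's worth, a minimal sketch of that single-upsampling test could look like this (all MVTools parameter values here are placeholders, not tuned settings):

```avisynth
# Keep every other frame of the 60fps source (giving 30fps),
# interpolate back to 60fps, then compare against the original.
source  = last                           # original 60fps clip
halved  = source.SelectEven().AssumeFPS(30)
sup     = halved.MSuper()
bvec    = sup.MAnalyse(isb=true)
fvec    = sup.MAnalyse(isb=false)
doubled = halved.MFlowFps(sup, bvec, fvec, num=60, den=1)
# "doubled" can now be SSIM-compared frame by frame against "source";
# its interpolated frames line up with the frames SelectEven discarded.
```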

Quote:
Originally Posted by Seedmanc View Post
https://pastebin.com/uWa1msZ1 - here's an average script I was using when making Google sheets.
OK, thanks. I noticed you used the range 0..4 for scaleCSAD but in the latest docs the valid values are -2, -1, 0, 1 and 2. Still, your best results used 4, so it has to work... [EDIT] Just noticed you subtract 2 from the value, so it's just like in the docs. Never mind. And I guess there's a bug with negative values?
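If I'm reading it right, the annotation presumably looked something like this (a guess at the exact form, based on the `# optimize` comment syntax used elsewhere in this thread):

```avisynth
# the optimizer explores 0..4 while the script shifts the value
# into the documented -2..2 range
scaleCSAD = 2 - 2		# optimize scaleCSAD = _n_ - 2 | 0..4 | scaleCSAD
```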

Also I want to ask whether the results on one tab are from a single run or from multiple runs. There are over 10000 results in full1 and full2, so I'm guessing they consist of multiple runs.

Full2 doesn't have the scaleCSAD parameter - was that your first script variation?

It would help the analysis if you posted the original log files.

Quote:
Originally Posted by Seedmanc View Post
So it looks like we figured what options differ the most between working with real footages and animation. That would be super_sharp and divide at least, since we get very different results for them. Possibly overlap and DCT too. Also it always sets maskScale to 1, even though with the default 100 it looks a little better.
Also the blockSize, though I think the optimal one may differ for every video. I was wondering about your maskScale results too; in my tests maskScale ends up very close to the maximum (actually I had to raise the maximum for my latest run - it's one of those params without any official maximum value).

[EDIT] Forgot to ask, which algorithm are you using when running the optimization? If it's still "mutation" I recommend you try the default "SPEA2" because with thousands of iterations it gets better results.

Last edited by zorr; 31st October 2018 at 23:43. Reason: ScaleCSAD mystery solved
Old 1st November 2018, 08:22   #95  |  Link
Seedmanc
Registered User
 
Join Date: Sep 2010
Location: Russia
Posts: 85
Quote:
Are you using it primarily for comparing total SSIM of two clips and/or also to see how they change visually?
Yes, that's mostly how I use it. Before AvsOptim I used it a lot to manually analyze the effect of each parameter. It has many different metrics available.
Quote:
Oh, but I meant that the 0.9564 frame was better
I can't really say how it's better; isn't the quality of framerate upsampling determined by how correct the vectors are? And by correctness I mean how well they converge to a middle point in time between the two existing frames. In the 0.9564 case you can see that they don't really converge: instead of a single finger you see three, while in the 0.9555 case it's more or less one. This is my primary measure of visual quality - seeing how well fast-moving objects are treated.
I can figure out the script for single upsampling, thanks.
About negative values: when I tried a negative badRange, the -50..50 range notation somehow didn't work, so I stopped trying it (it would actually be the wrong approach for that particular parameter anyway). Try and see if negative ranges work for you with scaleCSAD.

As I mentioned, each tab contains the log values from 3 runs of 3000 iterations combined. The leftmost column has all the log values; I only removed the header and sorted by SSIM. Yes, Full2 was one of the earliest.
I switched to SPEA2 around the time the MVTools bugs were discussed, so I'm using that now.
Old 1st November 2018, 21:40   #96  |  Link
zorr
Quote:
Originally Posted by Seedmanc View Post
I can't really say how it's better, isn't the quality of framerate upsampling determined by how correct the vectors are? And by correctness I mean how well they converge to a middle point in time between the two existing frames.
That's one way to look at it. But I know some people prefer the (technically) less correct blending when the vectors are so bad that there are distracting artifacts. SSIM naturally doesn't know anything about the vectors; it's simply trying to figure out how similar two pictures are. But I totally understand your take on it and it's a valid point of view.

If one does the MCompensate correction after MFlowFPS, the most correct vectors should produce better image quality even according to SSIM. But doing that requires optimizing both processes at the same time, roughly doubling the number of optimized parameters. Or perhaps it could be done in turns: first optimizing MFlowFPS only, then MCompensate, then MFlowFPS again. Sorry if this doesn't make much sense, I need to make a thread about that technique...
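To make the idea concrete, here is a rough, untested sketch of the two-stage chain (every parameter value is a placeholder, not the actual setup from this thread):

```avisynth
# Stage 1: motion-interpolated frame doubling with MFlowFps.
sup1    = last.MSuper(pel=2)
bvec1   = sup1.MAnalyse(isb=true)
fvec1   = sup1.MAnalyse(isb=false)
doubled = last.MFlowFps(sup1, bvec1, fvec1, num=60, den=1)

# Stage 2: a motion-compensation pass over the doubled clip,
# which could then be optimized with its own parameter set.
sup2    = doubled.MSuper(pel=2)
vec2    = sup2.MAnalyse(isb=false)
doubled.MCompensate(sup2, vec2)
```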

Quote:
Originally Posted by Seedmanc View Post
About negative values, when I was introducing negative badRange somehow the notion of -50..50 didn't work
Ok I will investigate.

Quote:
Originally Posted by Seedmanc View Post
As I mentioned, each tab consists of log values from 3 runs of 3000 iterations together.
There must be something else there as well, full1 has 10688 results and full2 has 14624 results. Perhaps you ran them with the time limit?
Old 2nd November 2018, 21:37   #97  |  Link
Seedmanc
Quote:
I need to make a thread about that technique...
I'll be looking forward to it. I tried it briefly now and it looks promising; it removes the artifacts like a charm. Though it sometimes erases moving objects out of existence and can't recreate the missing vectors in high-motion areas, it still looks cool. I usually average the backward and forward compensation together to smooth it out a bit.
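That averaging step might be sketched like this (hypothetical, with placeholder analysis settings):

```avisynth
# Compensate once with backward and once with forward vectors,
# then Merge() averages the two clips 50/50.
sup  = last.MSuper()
bvec = sup.MAnalyse(isb=true)
fvec = sup.MAnalyse(isb=false)
back = last.MCompensate(sup, bvec)
forw = last.MCompensate(sup, fvec)
Merge(back, forw)
```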

Quote:
Perhaps you ran them with the time limit?
Yes, sorry, that's what I meant. I was doing 3 runs per 3 hours on average.
Old 3rd November 2018, 14:34   #98  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,467
What am I doing wrong? I tried different names but I always get this error message:

Code:
Found following optimizable parameters:
  # optimize tr = _n_ | 1..4 | tr
found 1 parameters to optimize

Running SPEA2
java.lang.Exception: Could not update parameter value for [tr = _n_]
        at avisynthoptimizer.Parameter.getLine(Parameter.java:617)
....
Code:
TEST_FRAMES = 10		# how many frames are tested
MIDDLE_FRAME = 50		# middle frame number

ffms2("E:\cut.mkv").ConvertBits(8)
source = last
last=source.AddGrain(80, 0, 0, seed=2)
tr =1					# optimize tr = _n_ | 1..4 | tr
denoised = TemporalDegrain2(degrainTR=tr)
last = denoised

# calculate SSIM value for each test frame
global total = 0.0
global ssim_total = 0.0
FrameEvaluate(last, """
	global ssim = SSIM_FRAME(source, denoised)
	global ssim = (ssim == 1.0 ? 0.0 : ssim)
	global ssim_total = ssim_total + ssim	
""")	

# measure runtime, plugin writes the value to global avstimer variable
# NOTE: AvsTimer should be called before WriteFile
global avstimer = 0.0
AvsTimer(frames=1, type=0, total=false, name="Optimizer")

# per frame logging (ssim, time)
delimiter = "; "
resultFile = "D:\AvisynthRepository\AvisynthOptimizer-0.9.8-beta\perFrame.txt"	# output out1="ssim: MAX(float)" out2="time: MIN(time) ms" file="D:\AvisynthRepository\AvisynthOptimizer-0.9.8-beta\perFrame.txt"
WriteFile(resultFile, "current_frame", "delimiter", "ssim", "delimiter", "avstimer")

# write "stop" at the last frame to tell the optimizer that the script has finished
frame_count = FrameCount()
WriteFileIf(resultFile, "current_frame == frame_count-1", """ "stop " """, "ssim_total", append=true)


# NOTE: must return last or FrameEvaluate will not run
return last

#Prefetch(0)
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database || https://github.com/avisynth-repository
Old 3rd November 2018, 19:21   #99  |  Link
Seedmanc
ChaosKing, spaces matter. In the template (after # optimize) it says tr = _n_, but in the parameter assignment (before # optimize) you're missing a space after the =.
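In other words, the assignment line needs to match the template's spacing:

```avisynth
tr = 1					# optimize tr = _n_ | 1..4 | tr
```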
Old 3rd November 2018, 20:06   #100  |  Link
ChaosKing
Ahhh, I didn't know that - thanks, it works now.