19th December 2018, 18:38   #13
abolibibelot
Registered User
Join Date: Jun 2015
Location: France
Posts: 46
So I tried my best with DePanStabilize and exported the stabilized / interpolated footage: when watched in motion, the result of the stabilization sucks, plain and simple. I used method=1, since method=0 produced unreliable results (and in many cases made FrameSurgeon's interpolation of a bad frame blurrier than the original frame, even though the adjacent frames were seemingly well aligned, which is very puzzling), while method=2 was extremely slow and produced a wonky result.
The problem is that with method=1 there seems to be no way of controlling the maximum amount of vertical / horizontal correction. The only parameter which seems to have a significant effect is “cutoff”: apparently, setting it high (2 or more) restricts the correction to rapid shaking (high frequency), while setting it low (0.5 or less) extends it to slower motion (low frequency). But the result is not consistent: with a value that seems sufficient to correct the (mostly vertical) shakiness caused by the camera defect (which is steady but not very fast, about 3 up-and-down bumps per second), it also over-corrects regular camera motion and hand-held shakiness, adding huge borders with a sort of “liquid” aspect (with the “mirror” option and blur=30). What's strange is that I don't see that effect when previewing in AVSPMod.
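For what it's worth, here is roughly the kind of call I've been testing (a simplified sketch: the source file name is a placeholder and the exact cutoff value varied between attempts):

Code:
# hypothetical source name, stands for the actual project footage
clip = AviSource("camera_footage.avi").ConvertToYV12()
# global motion estimation
data = DePanEstimate(clip)
# stabilization with the settings discussed above:
# method=1, cutoff tried between roughly 0.5 and 2,
# borders filled with the mirror option (15 = all sides) and blur=30
stab = DePanStabilize(clip, data=data, method=1, cutoff=1.0, mirror=15, blur=30)
return stab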
I also found out that it produces weird artifacts in some places: in particular, a small shiny object, part of a bigger object that does not move at all in the original footage, appears randomly split in half, like... the head of the Terminator T1000 when he gets shot in the face... (sorry, first image that comes to mind!), for a split second but repeatedly. Normally a stabilizing function should only shift the frame according to the estimated motion, not change the actual content of the picture.
I tried using MDepan instead of DePanEstimate: much slower and no better, same kind of artifacts, just on different frames.
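In case it matters, the MDepan variant was roughly this, continuing from the sketch above (the actual MAnalyse settings may have been slightly different):

Code:
super = MSuper(clip)
# forward motion vectors, one frame back
vectors = MAnalyse(super, isb=false, delta=1)
# motion data in DePan format, used in place of DePanEstimate's output
data = MDePan(clip, vectors)
stab = DePanStabilize(clip, data=data, method=1, cutoff=1.0, mirror=15, blur=30)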

Running the interpolation filters with no prior stabilization produces an ugly result on some frames.
My only alternative, it seems, would be to use Deshaker, which has an excellent reputation for efficiency and reliability. But Deshaker only accepts RGB as input, while the interpolation filters (FrameSurgeon, Morph) only accept YUV, so I would have to 1) convert to RGB and run Deshaker, 2) convert back to YUV and run the interpolation filters, 3) convert to RGB again, render, and import into the NLE software, 4) render and convert to YUV for the final encoding. That means 4 YUV<=>RGB conversions... Surely the quality will suffer.
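To make the concern concrete, steps 1 and 2 of that chain would look roughly like this on the AviSynth side (sketch only; Deshaker itself runs as a VirtualDub filter, the file names are placeholders, and the matrix choice is an assumption that would simply have to stay consistent across all four conversions):

Code:
# step 1: RGB for the Deshaker passes in VirtualDub
clip = AviSource("camera_footage.avi")          # hypothetical source name
rgb  = clip.ConvertToRGB32(matrix="Rec601")     # assumed matrix, must match everywhere
# (Deshaker pass 1 + pass 2 in VirtualDub, result saved losslessly)
# step 2: back to YUV for FrameSurgeon / Morph
deshaken = AviSource("deshaker_output.avi")     # hypothetical intermediate file
yuv = deshaken.ConvertToYV12(matrix="Rec601")
return yuv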
Is there really no other way of doing that?