Old 26th February 2015, 08:15   #756  |  Link
Moragg
Registered User
 
Join Date: Jun 2013
Posts: 22
Quote:
Originally Posted by cyberbeing
XySubFilter does render subtitles at final video framerate (i.e. after SVP) at least with madVR. This will only improve the smoothness of animated tags though, like \move \t \fade. A lot of 'moving' typesetting nowadays isn't actually moving. Instead it's done static frame-by-frame with motion tracking based on absolute frame timestamps of the original video fps. There is no easy way to detect and reverse this process in the subtitle filter.

The workflow you are describing is beyond the scope of XySubFilter itself. Theoretically SVP developers could do this if they really wanted to though. It'd likely either involve writing a subtitle consumer+provider which sits between XySubFilter and madVR and performs SVP interpolation, or integrating SVP into a video renderer which supports a subtitle consumer. Otherwise, the only solution nowadays is to not use SVP interpolation at all, and just perform 'smoothmotion' frame-rate-conversion blending as supported by madVR and other renderers.

With xy-VSFilter you can already do something like this with video resolution subtitles, if you really want to use SVP. Install LAV Video. Install "FFDShow Raw Video Filter" only, and set to a merit of 00800002 in MPC-HC external filters. Set DirectVobSub(auto-loading version) to 00800003 in MPC-HC external filters. And you should end up with a graph like the following with SVP interpolating video+subtitles:
The lack of animated tags is indeed the issue. I think I'll avoid interpolating normal subs, though: it's worse to have all subs blurry than to have the (very rare) moving typesets look annoying.

You did give me a (perhaps "easy" to implement) idea: couldn't there be two subtitle renderers? One (xy-VSFilter) pre-SVP to render only pixel art / the frame-by-frame typesetting, and one post-scaling (XySubFilter) to do all the rest?

It seems the hard part (coding subtitle renderers) could then be skipped entirely; all it would need is a filter (or filters) to decide which subs get rendered by which renderer, probably based on the duration of each subtitle line.
Since the two subtitle renderers would apply the same splitting rule, they wouldn't need to communicate at all, and one would hope that running two very similar filters side by side wouldn't cause any issues.
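
To make the idea concrete, here is a rough offline sketch of that duration-based split (my own illustration, not anything that exists in xy-VSFilter or XySubFilter): it reads an .ass script and writes two scripts, one with the short frame-by-frame typesetting lines for the pre-SVP renderer and one with everything else for the post-scaling renderer. The 0.5 s threshold and the output file names are placeholders; a real solution would have to make the same decision inside a DirectShow filter at playback time rather than by pre-splitting the script.

Code:
# Sketch only: split an .ass script into two scripts by event duration, so
# short frame-by-frame typesetting goes to a pre-SVP renderer and everything
# else to a post-scaling renderer. Threshold and file names are assumptions.

def parse_time(t):
    """Convert an ASS timestamp 'H:MM:SS.cc' to seconds."""
    h, m, s = t.split(':')
    return int(h) * 3600 + int(m) * 60 + float(s)

def split_script(src, short_path='typesetting.ass', long_path='dialogue.ass',
                 threshold=0.5):
    with open(src, encoding='utf-8-sig') as f:
        lines = f.readlines()

    short_out, long_out = [], []
    for line in lines:
        if line.startswith('Dialogue:'):
            # Dialogue: Layer,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text
            fields = line.split(',', 9)
            duration = parse_time(fields[2]) - parse_time(fields[1])
            (short_out if duration <= threshold else long_out).append(line)
        else:
            # Script info, styles and comments go to both scripts unchanged.
            short_out.append(line)
            long_out.append(line)

    with open(short_path, 'w', encoding='utf-8') as f:
        f.writelines(short_out)
    with open(long_path, 'w', encoding='utf-8') as f:
        f.writelines(long_out)

if __name__ == '__main__':
    split_script('input.ass')

Duration alone probably isn't a perfect heuristic (some signs are held for a long time as a single static line), but it should catch the frame-by-frame motion-tracked typesetting described in the quote above.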