18th April 2019, 03:57
sunshine
Workflow from VapourSynth to DaVinci Resolve

Hello -

I'm working on an archival/restoration project: roughly 60 hours of MiniDV and VHS home video that I've captured to my PC over FireWire.

The video is all interlaced BFF (bottom field first), 720x480, 30000/1001 fps, 8-bit YUV 4:1:1 DV (dvsd fourcc) in AVI.

These are home videos shot hand-held with poor lighting, incorrect white balance, and so on, so I'd like to do some color correction and stabilization to make them more enjoyable to watch on a modern TV or mobile device. All playback will be on a PC or mobile device.

I own DaVinci Resolve, which I plan to use for editing, color grading, stabilization, and so on, but Resolve's deinterlacing and scaling compare poorly to what I've seen from VapourSynth.

Here's what I'm planning to do to deinterlace and resize the video. I'm very new to this, so I'm looking for feedback.

Code:
#!/usr/bin/env python

# Import the needed modules
import vapoursynth as vs
import havsfunc as haf
import edi_rpow2 as edi

core = vs.core  # newer idiom; core = vs.get_core() also works on older releases

# Read the clip; "INVID" is injected as a global variable by vspipe's -a option
clip = core.ffms2.Source(source=INVID)

# Convert the chroma subsampling from 4:1:1 to 4:2:2. DV's 4:1:1 chroma is
# full height, so a horizontal-only resample avoids blending chroma across
# the fields of this still-interlaced clip (a straight 4:1:1 -> 4:2:0
# conversion would resample vertically). fmtc works internally at 16 bit,
# so dither back down to 8 bit afterwards.
clip = core.fmtc.resample(clip=clip, css="422")
clip = core.fmtc.bitdepth(clip=clip, bits=8)

# Deinterlace with QTGMC. DV is bottom field first, hence TFF=False.
# Are these good settings for my use-case?? These are your standard
# "home video" types of videos.
clip = haf.QTGMC(clip, Preset='Slower', TFF=False)

# Use nnedi3_rpow2 to scale 2x, from 720x480 to 1440x960
clip = edi.nnedi3_rpow2(clip=clip, rfactor=2)

# Resize to 1440x1080 (square pixels at 4:3 DAR) in yuv422p8, converting the
# color matrix from Rec.601 (SMPTE 170M, the correct tag for an NTSC SD
# source) to Rec.709 for HD output.
# Question: is the 960 -> 1080 step distorting my image (stretching its
# height)? Or is it compensating for the source's non-square 4:3 pixel
# aspect ratio, since the output is 1:1 pixels? I'm lost on this one.
clip = core.resize.Spline36(clip, 1440, 1080, format=vs.YUV422P8,
                            matrix_in_s='170m', matrix_s='709')

clip.set_output()
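Regarding that last resize question, here's the aspect-ratio arithmetic as I understand it, as a plain-Python sanity check (the 1440x960 intermediate size comes from the doubling step above; nothing here is VapourSynth-specific, and the numbers are my own working, not gospel):

```python
from fractions import Fraction

# Stored frame after QTGMC + 2x nnedi3_rpow2
w, h = 1440, 960
dar = Fraction(4, 3)  # intended display aspect ratio of these home videos

# Square-pixel height that keeps width 1440 at a 4:3 display aspect
square_h = int(w / dar)        # 1440 * 3/4 = 1080
print(square_h)                # -> 1080

# Pixel aspect ratio implied by the stored frame: DAR / storage aspect
par = dar / Fraction(w, h)     # (4/3) / (3/2) = 8/9
print(par)                     # -> 8/9

# The vertical stretch from 960 to 1080 is exactly the reciprocal of the PAR,
# i.e. it squares up the pixels rather than distorting the picture
assert h * (1 / par) == square_h
```

So if I've got this right, 1440x960 -> 1440x1080 is PAR compensation, not a distortion.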

From there, the output is piped to ffmpeg, which encodes it with x264 at CRF 10, preset veryslow, as follows.

Code:
vspipe --y4m ./DeinterlaceAndScale.py -a "INVID=$INVID" - \
    | ffmpeg -nostdin \
        -i pipe: -i "$INVID" \
        -map 0:0 -map 1:1 \
        -pix_fmt yuv422p \
        -c:v libx264 -preset veryslow -crf 10 -tune fastdecode -x264opts keyint=1 \
        -c:a copy \
        "$OUTVID"

This intermediate output is my input to DaVinci Resolve. I COULD use lossless x264 (CRF 0), but the file sizes and processing overhead are a little ridiculous.
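For context on why lossless feels ridiculous, here's my back-of-the-envelope math on the source footage alone, assuming the standard 120,000-byte NTSC DV frame (audio rides inside the DV frame); a lossless intermediate at 1440x1080 would be substantially larger still:

```python
# Rough storage math for 60 hours of NTSC DV25 source material
bytes_per_frame = 120_000      # standard NTSC DV frame size, audio included
fps = 30000 / 1001
hours = 60

bytes_per_hour = bytes_per_frame * fps * 3600
total_gb = bytes_per_hour * hours / 1e9
print(round(bytes_per_hour / 1e9, 1))  # -> 12.9  (GB per hour)
print(round(total_gb))                 # -> 777   (GB for all 60 hours)
```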

From DaVinci Resolve, I'll likely output DNxHR SQ for archival and x264 at CRF 20 for posting to YouTube, playing through Plex, and so on.

I'd love some expert feedback on this workflow. Am I doing anything obviously wrong?

Thanks