Quote:
Originally Posted by poisondeathray
Those are the common pixel formats supported by vmaf.VMAF.
Is this for the VS flavor or for VMAF in general... where did you get this list?
Quote:
Originally Posted by poisondeathray
But now it's clear you're using ffmpeg vmaf. Did you look at the ffmpeg log to see what other conversions were occurring ? There might be other stuff going on behind your back
Since I wasn't sure whether the bit depth was an issue (for the VMAF calculation), I converted the main and ref clips to yuv444p (8 bit) before passing them into ffmpeg libvmaf (by specifying -pix_fmt). ffmpeg's libvmaf reports the format it uses for the comparison in the console output; for my main/ref clips (16 bit/12 bit) it defaults to yuv444p10le, but once you pass the clips in as 8 bit it uses that format instead.
The VMAF score was always ~96.x regardless of whether the main/ref clips were compared at the original bit depth, 12 bit, 10 bit or 8 bit (real test, not control).
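For reference, a minimal sketch of the kind of comparison described above, forcing both inputs to a common pixel format inside the filtergraph instead of relying on libvmaf's automatic conversion (the filenames are placeholders, not the actual clips from this thread):

```shell
# Compare an encode against its reference with ffmpeg's libvmaf filter,
# explicitly converting both streams to yuv444p (8 bit) first so the
# comparison format is under our control rather than the default pick.
ffmpeg -i main.mkv -i ref.mkv \
  -lavfi "[0:v]format=yuv444p[m];[1:v]format=yuv444p[r];[m][r]libvmaf" \
  -f null -
```

The per-frame and pooled VMAF scores are printed to the console log at the end of the run; checking that log is also where you would spot any extra conversions ffmpeg inserts on its own.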
Quote:
Originally Posted by poisondeathray
Sometimes ffmpeg can "mix" up frames, less often with I-frame formats. But if your x265 encode used long GOP, there is a higher chance of a mixup than if it used I-frames only. EXR sequence will be I-frame only
GOP size on the encode is fixed at 48 frames; fps is 24.
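That corresponds to a 2-second keyframe interval. As a hedged sketch (the input/output names and CRF value are assumptions, not the actual settings used), a fixed 48-frame GOP in x265 would typically be pinned like this:

```shell
# Force a fixed 48-frame GOP: --keyint caps the GOP length,
# --min-keyint pins the minimum to the same value, and
# --no-scenecut stops scene-change detection from inserting
# extra I-frames that would vary the GOP size.
x265 --input source.y4m --fps 24 --crf 16 \
     --keyint 48 --min-keyint 48 --no-scenecut \
     --output encode.hevc
```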
Quote:
Originally Posted by poisondeathray
Look at the results WorBy has been posting . They all plateau off below crf 18 or so. crf16 has the same quality as crf 10 or crf 1 if you blindly believe VMAF. ie. Everything looks "the same" to VMAF at higher bitrate ranges. ie. It's not a useful metric for distinguishing higher quality - only for streaming lowish bitrate delivery ranges
Well, or in other words: those results could just as easily be interpreted to mean that from a certain CRF on, the encode is perceptually identical to the source, which is the whole point of VMAF... its training samples are based on humans reporting perceived quality differences...