16th February 2018, 08:03   #21
`Orum
Registered User
 
Join Date: Sep 2005
Posts: 178
Quote:
Originally Posted by qyot27
x264 is simply set up to always output 4:2:0 unless the user overrides it (even if the input wasn't 4:2:0). Normally, bit depth wasn't a part of it, because x264 only gained multi-bit support on December 24th, 2017. Despite now having 8-bit and 10-bit in a single build, multi-bit builds still default to 8-bit rather than outputting in the same depth as the input, likely for the same reason you have to override it so it doesn't automatically convert your input to 4:2:0 - hardware compatibility. H.264-capable hardware players are almost always restricted to 8-bit 4:2:0.
What does chroma subsampling have to do with bit depth?
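For reference, the overrides being described there look roughly like this (a sketch only, assuming a recent multi-bit build with y4m input; check --fullhelp on your own binary for the exact options it supports):

Code:
# keep 10-bit 4:2:2 output instead of letting x264 fall back to 8-bit 4:2:0
x264 --demuxer y4m --output-depth 10 --output-csp i422 -o out.264 in.y4m

Without --output-depth and --output-csp, a multi-bit build hands you 8-bit 4:2:0 regardless of what goes in, which is exactly the behavior qyot27 describes.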

Anyway, my understanding of the "multi-bit" builds was that they only added selectable output depth, and had no effect on what x264 will take in. For as long as I've used it, x264 built with lavf or y4m support accepts any input depth those demuxers can deliver, and you can even feed it full 16-bit video if you like. Whether or not that's a good idea depends on other factors, but I usually try to feed it the same depth I'm outputting at (usually 10-bit). Theoretically 16-bit input would be better if x264 had internal support for that depth, but I'm not sure without looking at the code base.
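In other words, input depth and output depth are separate knobs. Something like this (a sketch using the raw demuxer, so the geometry has to be spelled out; the filename is made up) feeds 16-bit planar YUV in while still producing a 10-bit encode:

Code:
# 16-bit 4:2:0 raw input, converted internally to a 10-bit encode
x264 --demuxer raw --input-res 1920x1080 --fps 24 --input-csp i420 --input-depth 16 --output-depth 10 -o out.264 in_16bit.yuv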

The error that appears when I try to encode 10-bit output directly from avs scripts is vexing, though: "avs [error]: not supported pixel type: YUV420P10". I can only assume this is because its avs demuxer doesn't handle anything above 8-bit, while the lavf/y4m demuxers handle it just fine.
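Until that demuxer catches up, one workaround (a sketch, assuming an ffmpeg build with AviSynth input support; filenames are placeholders) is to serve the script through a y4m pipe so x264's y4m demuxer does the reading instead:

Code:
# pipe the script's 10-bit output as y4m; -strict -1 is needed because >8-bit y4m is a non-official extension
ffmpeg -i script.avs -f yuv4mpegpipe -strict -1 - | x264 --demuxer y4m --output-depth 10 -o out.264 -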
__________________
My filters: DupStep | PointSize

Last edited by `Orum; 16th February 2018 at 08:08.