Thread: Avisynth+
16th April 2018, 03:23 — #4035
`Orum
I'm curious, what's the rationale behind fulls=false as the default for YUV when using ConvertBits()?

Edit: I ask mainly about up-conversion. For example, 8-bit white (255) is no longer true white after converting to 10+ bits without fulls=true; it only lands very close to it, at UCHAR_MAX << (BitsPerComponent() - 8), i.e. 1020 in 10 bit rather than the true 10-bit white of 1023.
__________________
My filters: DupStep | PointSize

Last edited by `Orum; 16th April 2018 at 03:36.