Quote:
Originally Posted by K.i.N.G
I only mentioned the fact that video editing and 3D rendering is done in linear space because that is an easy example.
Very little video editing is done in linear light. It's mainly seen in VFX and film color grading. Tools like After Effects and Premiere can have particular projects set to run in 32-bit linear light, but that's not the default. I do a lot of work in linear myself, but that's more for corrections and conversions than creative work.
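For anyone who hasn't run into this: the difference between working in gamma-coded sRGB and linear light shows up the moment you average pixels (a dissolve, a blur, a resize). A minimal sketch using the standard sRGB transfer function:

```python
def srgb_to_linear(c):
    """IEC 61966-2-1 sRGB electro-optical transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse of the sRGB transfer function."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50/50 mix of black (0.0) and white (1.0), e.g. one frame of a dissolve:
naive = (0.0 + 1.0) / 2                # averaged on gamma-coded values -> 0.5
correct = linear_to_srgb(
    (srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2
)                                      # averaged in linear light -> ~0.735
```

The linear-light mix comes out noticeably brighter than the naive one, which is exactly why compositing and filtering operations belong in linear even when creative grading doesn't happen there.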
Quote:
Ahh, and there we go! So there is room for improvement.
They could adjust the adaptive quantizer's algorithm depending on what color space is selected.
And I'm convinced this could potentially increase efficiency/quality by quite a margin.
I'm not sure about "quite a margin," but it is a historically undervalued aspect of psychovisual optimization.
One could actually think of x264's --aq-mode 3 as "--sdr-opt."
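For readers who haven't looked at how adaptive quantization works: here's a heavily simplified sketch of variance-based AQ with a bias toward dark blocks, the rough idea behind aq-mode 3. This is not x264's actual code; the threshold and bias strength below are invented for illustration.

```python
import math

def aq_offsets(blocks, strength=1.0, dark_bias=True):
    """blocks: list of lists of 8-bit luma samples (one list per block).

    Returns a per-block QP offset: flat and dark blocks get a negative
    offset (finer quantization), busy bright blocks a positive one.
    """
    energies = []
    for b in blocks:
        mean = sum(b) / len(b)
        var = sum((s - mean) ** 2 for s in b) / len(b)
        e = math.log2(var + 1)          # log-domain "texture energy"
        if dark_bias and mean < 60:     # hypothetical dark threshold
            e -= (60 - mean) / 60 * 2   # hypothetical bias strength
        energies.append(e)
    avg = sum(energies) / len(energies)
    return [strength * (e - avg) for e in energies]

# A flat dark block vs a busy bright block:
offsets = aq_offsets([[20] * 16, [200, 50] * 8])
```

The dark-bias term is the part that would change if the encoder knew the transfer function in use: how much extra rate the shadows deserve depends entirely on how the coded values map to light.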
Quote:
Sure, but this specifically 'only' requires adjusting the encoder not the entire history of how video works/evolved.
Yeah. Although an encoder that carried linear light into the quantization stage and then quantized based on the output color volume could be awesome. It's always bothered me that we convert to the final bit depth before doing the frequency transform, even though the iDCT intermediates carry more bits and those bits don't map 1:1 to pixels anyway.
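The underlying mismatch can be shown numerically: one codeword step in 8-bit gamma-coded sRGB spans a far smaller slice of linear light in the shadows than in the highlights, so a quantizer that is uniform in the coded domain is already wildly nonuniform in the output color volume. A quick sketch (using sRGB as a stand-in transfer function, not any particular encoder's pipeline):

```python
def srgb_to_linear(c):
    """IEC 61966-2-1 sRGB electro-optical transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def step_in_linear(code, bits=8):
    """Linear-light width of one codeword step at a given code value."""
    peak = (1 << bits) - 1
    return srgb_to_linear((code + 1) / peak) - srgb_to_linear(code / peak)

shadow_step = step_in_linear(16)     # near black
highlight_step = step_in_linear(235) # near white
# The highlight step covers an order of magnitude more linear light
# than the shadow step, even though both cost one codeword.
```

A quantizer that reasoned in linear light per output volume, as suggested above, would see this directly instead of relying on AQ heuristics to approximate it after the fact.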