Old 7th February 2019, 19:49   #70  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by kolak View Post
Probably every video, when measured for peak brightness, should actually go through a bit of low-pass filtering. This is what some tools do (e.g. Cortex).
1 pixel with 2K nits doesn't really mean much, does it?
If you have bright blue stars in 4K RGB, the peak nits can be reduced a fair amount in a conversion to Y'CbCr 1080p. The brightest single pixel could come out with less than half its initial nits in some edge cases. Compression itself could reduce that further.
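
To make that concrete, here's a minimal sketch (Python/NumPy, not any real tool's pipeline) of how plain spatial averaging, whether from the low-pass filtering kolak describes or from a simple 2x2 box downscale, knocks down an isolated bright pixel. Real 4K-to-1080p scalers and 4:2:0 chroma subsampling use different filters, so the exact numbers differ, but the averaging effect on single-pixel highlights is the same idea.

Code:
# Assumed toy frame: a dim background with one isolated 2000-nit "star",
# in linear-light nits. Not any product's actual measurement pipeline.
import numpy as np

frame = np.full((4, 4), 5.0)      # 5-nit background
frame[1, 2] = 2000.0              # one isolated 2000-nit pixel

# Downscale by averaging each 2x2 block (simple box filter).
h, w = frame.shape
small = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(frame.max())   # 2000.0 nits before scaling
print(small.max())   # 503.75 nits after 2x2 averaging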

The spec for static metadata (MaxFALL and MaxCLL) requires that the calculations be done in RGB, even though HDR content is always delivered in 4:2:0. It could be argued that the metadata should instead be computed from the highest-bitrate encode at the highest resolution, since those are the largest actual values you'd get, and the more conservative values would allow more aggressive use of a panel's actual abilities.
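
For reference, the CTA-861.3 definitions boil down to something like the sketch below: MaxCLL is the brightest per-pixel max(R,G,B) anywhere in the title, and MaxFALL is the largest frame average of that same per-pixel value. The `frames` variable and the assumption that PQ has already been decoded to linear-light nits are mine for illustration, not a real API.

Code:
# Sketch of the MaxCLL/MaxFALL calculation on RGB (not on the
# Y'CbCr 4:2:0 delivery format). Assumes `frames` is an iterable of
# HxWx3 linear-light RGB arrays already expressed in nits.
import numpy as np

def maxcll_maxfall(frames):
    max_cll = 0.0   # brightest single-pixel light level in the title
    max_fall = 0.0  # brightest frame-average light level
    for rgb in frames:
        pixel_light = rgb.max(axis=2)          # per-pixel max(R, G, B), in nits
        max_cll = max(max_cll, float(pixel_light.max()))
        max_fall = max(max_fall, float(pixel_light.mean()))
    return max_cll, max_fall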

However, tone mappers could theoretically use knowledge of the intended RGB values to try to reconstruct them during tone mapping. I don't know whether any actually do.
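
Purely as an illustration of what accurate metadata could buy you: a generic extended-Reinhard roll-off that treats MaxCLL as the source peak, instead of the nominal 10,000-nit PQ ceiling, compresses the mid-range noticeably less. This is a textbook curve used as a sketch, not any shipping tone mapper's algorithm.

Code:
# Toy tone curve: maps max_cll to display_peak and stays gentler at
# lower levels. Compare the output when the real MaxCLL (1000) is
# known vs. assuming the full 10,000-nit PQ range.
import numpy as np

def tone_map(nits, max_cll, display_peak):
    # Extended Reinhard: l_white is the input level that maps to display peak.
    l = nits / display_peak
    l_white = max_cll / display_peak
    mapped = l * (1.0 + l / (l_white * l_white)) / (1.0 + l)
    return mapped * display_peak

x = np.array([100.0, 500.0, 1000.0])
print(tone_map(x, max_cll=1000.0, display_peak=600.0))    # ~[91, 355, 600]
print(tone_map(x, max_cll=10000.0, display_peak=600.0))   # ~[86, 274, 377] -- harsher without real MaxCLL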

This stuff gets complicated quickly, which is why all good HDR tone mappers required the efforts of many PhDs. Clear specs on what the data is supposed to represent are essential, and often much less obvious than they seem at first glance.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book