27th August 2020, 23:01 | #22 | Link | ||
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,871
|
Quote:
Game footage is actually pretty good for finding chroma edge cases, since it gets rendered per-pixel and anti-aliasing tech is far from perfect. HUD elements like red text in a compass can have quite sharp chroma detail. |
||
28th August 2020, 01:51 | #24 | Link | |
Registered User
Join Date: Oct 2012
Posts: 8,115
|
Quote:
For game footage I could just take RimWorld and make a screenshot; it's extremely obvious because red text is used everywhere. The esports titles should show this very easily too. These are not edge cases; it's visible the whole time. For real-world camera footage you nearly always have to hunt for edge cases, but not everything is made with a camera. |
|
28th August 2020, 11:17 | #25 | Link | |
Angel of Night
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,560
|
Quote:
This might change now that screen sharing has suddenly become something everyone does, instead of something reserved for the occasional business meeting. |
|
28th August 2020, 11:34 | #26 | Link |
Angel of Night
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,560
|
That's beside the point; 4:4:4 is about keeping all available chroma information, instead of throwing away 75% of it. For naturally captured video, losing 75% of the chroma is essentially meaningless. Only digitally generated or heavily manipulated video benefits from full chroma, so how it's captured and demosaiced isn't going to matter.
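For anyone wondering where the 75% figure comes from, a quick back-of-the-envelope sketch in Python (the 3840x2160 frame size is just an example):

Code:
# Chroma sample counts for one frame, 4:4:4 vs 4:2:0 (illustrative arithmetic only).
W, H = 3840, 2160

luma       = W * H                        # Y samples, identical in both formats
chroma_444 = 2 * W * H                    # Cb + Cr at full resolution
chroma_420 = 2 * (W // 2) * (H // 2)      # Cb + Cr at half resolution in both axes

print(f"4:4:4 chroma samples: {chroma_444:,}")
print(f"4:2:0 chroma samples: {chroma_420:,}")
print(f"chroma discarded:     {1 - chroma_420 / chroma_444:.0%}")   # -> 75%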
|
28th August 2020, 22:05 | #27 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
I know the "fossil" attitudes in the video industry are still strong but it's time to do away with fixed-bit encodings, video vs data levels, interlacing (it is still done), and chroma subsampling. |
|
30th August 2020, 18:05 | #29 | Link | |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,425
|
Quote:
Even YouTube and Netflix et al. are encoding in YUV 4:2:0 at video levels. Probably because the video decoding side has traditionally also been really poorly supported on computers. Until hardware decode became standard (very recently), no one paid attention to how to do things correctly. GPU drivers still cannot get basic standards correct all the time. This means there would have been terrible issues for a lot of people for a while after any change, so no one was ever willing to change.

Edit: bandwidth costs money, even in-device bandwidth... probably the real reason 4:2:0 is still standard. AV1 should take the opportunity to go pure full-range 4:4:4, but they won't.
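For reference, a minimal sketch of the limited ("video") vs full-range mapping being referred to, assuming the usual 8-bit 16-235 luma levels (illustrative only, not a decoder implementation):

Code:
# 8-bit limited ("video") range vs full range, luma only.
def full_to_limited(y):                  # 0..255 -> 16..235
    return round(16 + y * 219 / 255)

def limited_to_full(y):                  # 16..235 -> 0..255, clipping stray codes
    return min(255, max(0, round((y - 16) * 255 / 219)))

print(full_to_limited(0), full_to_limited(255))     # 16 235
print(limited_to_full(16), limited_to_full(235))    # 0 255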
__________________
madVR options explained Last edited by Asmodian; 30th August 2020 at 18:08. |
|
30th August 2020, 18:52 | #30 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
4:2:0 is simply a rather crude form of perceptual compression. By using 4:4:4 you give the encoder full, and much more flexible, control over how to compress efficiently. |
|
31st August 2020, 04:27 | #31 | Link |
Registered User
Join Date: Oct 2012
Posts: 8,115
|
That's not what he means. Decoding 4:4:4 needs double the memory of 4:2:0; the decoded image is now two times the size it would otherwise be.
And full range has other issues: as long as YCbCr is used it can create out-of-range values. This could be "fixed" with YCoCg or ICtCp; RGB is not an option. The real reason no one fixed this is that 4:2:0 is good "enough" for the casual professionals who create charts like these: https://www.unravel.com.au/understanding-gamma |
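A rough sketch of the memory argument, assuming planar layouts and 10-bit samples stored in 2 bytes each (the resolution is just an example; real decoders also hold several reference frames):

Code:
# Decoded frame size, 4:2:0 vs 4:4:4.
W, H, bytes_per_sample = 3840, 2160, 2

frame_420 = (W * H + 2 * (W // 2) * (H // 2)) * bytes_per_sample   # 1.5 samples/pixel
frame_444 = (3 * W * H) * bytes_per_sample                         # 3 samples/pixel

print(f"4:2:0 frame: {frame_420 / 2**20:.1f} MiB")
print(f"4:4:4 frame: {frame_444 / 2**20:.1f} MiB")
print(f"ratio:       {frame_444 / frame_420:.1f}x")                # -> 2.0x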
31st August 2020, 04:44 | #32 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
It's a jungle out there, though: sRGB, Rec709, BT1886. True power-function gammas, gammas with a linear segment, camera gammas without a linear segment. Last edited by Cary Knoop; 31st August 2020 at 04:49. |
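To make the jungle concrete, a simplified sketch of three of those curves (normalized 0..1 signals, BT.1886 black-lift term omitted; these are illustrative assumptions, not reference implementations):

Code:
def srgb_eotf(v):        # sRGB: power curve with a linear toe segment
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def bt1886_eotf(v):      # BT.1886: pure power-2.4 "true gamma" display curve
    return v ** 2.4

def rec709_oetf(l):      # Rec.709 camera OETF, which also has a linear toe segment
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

v = 0.5
print(srgb_eotf(v), bt1886_eotf(v), rec709_oetf(v))   # three different answers for "gamma"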
|
31st August 2020, 18:45 | #33 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,871
|
Quote:
The only TRUE 4:4:4 content is computer rendered, where it is actually created at native RGB 4:4:4.

Also, the human visual system is far more sensitive to edges in the luma domain than to chroma overall. While we have highly accurate color vision, we process it with much less spatial and temporal detail than the basic black-and-white vision that is all most of our evolutionary ancestors had. Luma is what keeps you from getting eaten by a tiger stalking you across a treeline. Chroma is what keeps you from eating unripe or poisonous fruit. But seeing precise color detail in something that's moving just isn't something we're wired for.

We've got some decades of digital image processing under our belt, and we've consistently seen that 4:2:0 delivers more visual value per bit than 4:4:4 when bitrate is constrained. Chroma subsampling was part of JPEG, even, at the birth of this technology.

Fortunately the misbegotten YUV-9 (one chroma sample per 4x4 block of luma) used in 90's codecs like Sorenson Video and Indeo died. That was definitively TOO subsampled, especially with 320x240 video, which would have only 80x60 chroma samples. Colored text was a nightmare.

Another classic problem was going from NTSC DV25, which used 4:1:1 subsampling (one chroma sample per four pixels horizontally), to DVD (4:2:0), which netted out 4:1:0 color, so one chroma sample per 4x2 block of pixels. The key was to do all motion graphics after DV25, rendering as 4:2:0. Natural images were okay-ish with 4:1:1, as long as all graphics were done with more precision. |
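A small sketch of what those schemes mean for the chroma grid, using the 320x240 example from the post (horizontal/vertical decimation factors only; illustrative):

Code:
W, H = 320, 240
schemes = {
    "4:4:4":          (1, 1),   # (horizontal, vertical) chroma decimation
    "4:2:2":          (2, 1),
    "4:2:0":          (2, 2),
    "4:1:1 (DV25)":   (4, 1),
    "YUV-9 (4x4)":    (4, 4),
    "DV25 -> DVD":    (4, 2),   # 4:1:1 source re-encoded to 4:2:0
}
for name, (dx, dy) in schemes.items():
    print(f"{name:<14} {W // dx:>3} x {H // dy:>3} chroma samples")
# YUV-9 at 320x240 really does leave only 80x60 chroma samples.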
|
31st August 2020, 18:55 | #34 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
I wrote: the camera needs a very high-resolution sensor due to the Bayer (or other) sensor design Last edited by Cary Knoop; 31st August 2020 at 19:04. |
|
31st August 2020, 18:59 | #35 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
By leaving the question of how best to do perceptual compression to the codec, on a frame-by-frame basis, you will get the best solution, rather than having some arbitrary constraint impact every frame the same way before you even start compressing. Last edited by Cary Knoop; 31st August 2020 at 19:03. |
|
31st August 2020, 19:06 | #36 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,871
|
Quote:
Something people often miss is that a 4K sensor with only one color per "pixel" actually captures LOWER detail than 4K with 4:2:0, since luma and chroma samples are colocated. Going from a 4096-wide sensor to a 3840-wide file helps that a bit, even though they are both called "4K." There is some very interesting alchemy that goes into making a good source out of a camera's native formats! |
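One crude way to see the point is simply counting recorded values; a sketch that ignores demosaicing quality, optical low-pass filters, and everything else that matters in practice:

Code:
def bayer_samples(w, h):       # one color value per photosite (half G, quarter R, quarter B)
    return w * h

def yuv420_samples(w, h):      # full-resolution Y plus quarter-resolution Cb and Cr
    return w * h + 2 * (w // 2) * (h // 2)

print(f"{bayer_samples(4096, 2160):,}")    # ~8.8 million raw values before demosaicing
print(f"{yuv420_samples(3840, 2160):,}")   # ~12.4 million samples in a delivered 4:2:0 file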
|
1st September 2020, 19:13 | #38 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,871
|
Quote:
Oversampling is always a good thing. Having a Bayer source at the same resolution as a colocated output was okay with SDR output, as the camera's much higher dynamic range provided additional detail. But a 4K Bayer sensor for 4K 4:2:0 HDR content can be suboptimal versus capturing more, since the output can use the captured dynamic range. I doubt it's that material in practice, though, since visually resolving individual 4K pixels is pretty impossible in moving video outside of specific test content. But supersampling can reduce noise some (although having smaller sensor elements adds per-element noise, so it can be pretty complex to model and predict). |
|
1st September 2020, 19:17 | #39 | Link | |
Cary Knoop
Join Date: Feb 2017
Location: Newark CA, USA
Posts: 398
|
Quote:
Here is the link to the patent application: https://patents.google.com/patent/US...ttner&sort=new Personally, I think it is a step in the wrong direction. Sure, it may be that this configuration turns out to be superior dynamic-range-wise, but the demosaicing is more complex (and perhaps even theoretically "wrong"). What we really need is to eventually get rid of Bayer or other sensors that require demosaicing. Requiring demosaicing, just like interlaced video, chroma subsampling, and fixed-bit code values; all those things are not features, they are hacks. Last edited by Cary Knoop; 1st September 2020 at 19:34. |
|
2nd September 2020, 05:42 | #40 | Link |
Registered User
Join Date: Oct 2012
Posts: 8,115
|
i can "just" go and record a lossless 4:4:4 4K RGB game video.
or 1440p/1080p if my disk or CPU can't handle so much throughput and just do the old test of if subsampling actually improves image quality with the same bit rate? i know this has been tested with x264 back in the days but i can't remember x265. i mean we can talk a lot but test are the real thing. |
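If someone wants to script that comparison, a rough sketch driving ffmpeg/libx265 from Python; the file name, bitrate, and metric are placeholders, and it assumes an ffmpeg build with 10-bit libx265:

Code:
import subprocess

SRC = "capture_rgb_lossless.mkv"     # hypothetical lossless 4:4:4 game capture
BITRATE = "8M"                       # same target bitrate for both runs

for pix_fmt, out in [("yuv444p10le", "test_444.mkv"), ("yuv420p10le", "test_420.mkv")]:
    subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265",
                    "-pix_fmt", pix_fmt, "-b:v", BITRATE, out], check=True)
    # Compare each encode against the source (PSNR here; substitute whatever metric you trust).
    subprocess.run(["ffmpeg", "-i", out, "-i", SRC,
                    "-lavfi", "psnr", "-f", "null", "-"], check=True)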
Tags |
2160p, bluray, staxrip, upscale, x265 |