knutinh, 5th January 2010, 18:23
Quote:
Originally Posted by 2Bdecided
Of course they don't, even though interlacing does (at least partly) achieve the gains it's supposed to. That's why it's used. It's not a conspiracy, and it's not a mistake - it actually works (i.e. gives better quality / lower bitrates). Even with H.264 (if the encoder handles interlacing well enough).
I am not suggesting that it is a conspiracy; I am using it as an argument that you are wrong :-) Can you offer some references showing that H.264 with interlacing has better PSNR/SSIM/subjective quality than H.264 without?
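To be concrete about what I would want to see measured, something like this minimal Python sketch (assuming numpy and scikit-image are available and that the decoded 8-bit luma planes are already at hand; the function names are just mine):

Code:
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, peak=255.0):
    # PSNR between two 8-bit luma planes of the same size
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def score_sequence(ref_frames, test_frames):
    # average PSNR/SSIM over a sequence of decoded luma frames
    psnrs = [psnr(r, t) for r, t in zip(ref_frames, test_frames)]
    ssims = [structural_similarity(r, t, data_range=255)
             for r, t in zip(ref_frames, test_frames)]
    return np.mean(psnrs), np.mean(ssims)

Both encodes would of course have to be scored against the same progressive original, i.e. the interlaced one after deinterlacing back to 1080p50.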

For your statements to be generally right, I think one would expect that compressing any original 1080p50 sequence as:
1) 1080p50, H.264, X Mbps
2) 1080i50, H.264, X Mbps
3) 720p50, H.264, X Mbps

would (on average) give the best result with 2) for any bitrate X. I highly doubt that is true, but I have read Philips white papers suggesting that they could make 2) come out on top if they used:
A) Philips' advanced deinterlacing
B) MPEG-2 without deblocking filtering
C) constrained bitrates

I think that B) was suggested as an important part of the explanation.
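For what it is worth, here is a rough sketch of how I would set up the three encodes, using ffmpeg/libx264 just as an example; the filter and flag choices (tinterlace, +ildct+ilme), bitrate and file names are my assumptions for one reasonable configuration, not a tested recipe:

Code:
import subprocess

SRC = "master_1080p50.y4m"   # hypothetical progressive master
X = "8M"                     # the bitrate "X Mbps" in the comparison

# 1) 1080p50, plain progressive encode
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", X,
                "enc_1080p50.mkv"], check=True)

# 2) 1080i (50 fields/s): interlace the progressive source, then encode
#    with interlaced coding tools enabled
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-vf", "tinterlace=interleave_top",
                "-c:v", "libx264", "-b:v", X, "-flags", "+ildct+ilme",
                "enc_1080i50.mkv"], check=True)

# 3) 720p50: downscale, then progressive encode
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-vf", "scale=1280:720",
                "-c:v", "libx264", "-b:v", X,
                "enc_720p50.mkv"], check=True)

Each output would then be brought back to 1080p50 (deinterlaced or upscaled with a deinterlacer/scaler of one's choice) and scored against the original, e.g. with the PSNR/SSIM snippet above. 2) only "wins" if it beats both 1) and 3) after that round trip.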

The standardization bodies are competitive about compression gain. If integrating interlacing/deinterlacing into the codec resulted in improved PQ for a given bitrate and a given implementation cost, surely someone would have suggested it and had it put into the standard?
Quote:
It does make logical sense that packaging the (adaptive) interlacing and (adaptive) deinterlacing into the encoder should make it work better than externally - but it's more complexity: more tuning in the encoder; more work in the decoder. Has anyone ever done it?
Things such as the deblocking filter and B-frames (frame-rate upconversion) have been integrated into codecs, even though they initially seem to have come from outside the codec. The reason seems to be that they had a good PQ-to-bitrate/complexity ratio and could do better inside the codec than outside.

I think everything indicates that if the source is progressive (not always true), then handling interlacing within the codec would give major benefits in image quality, and possibly in total complexity, compared to doing it externally. Advanced deinterlacers do all kinds of "artificial intelligence" that they should not have to do given precise signalling of how the content was actually produced. Motion vectors could be jointly optimized for tracking motion and for describing candidates for filling in missing lines, saving a lot of cycles and having the luxury of being optimized against the ground truth in the encoder.
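To illustrate the "filling in lines from motion vectors" point: a toy numpy sketch of motion-compensated deinterlacing for a single block, where the (dy, dx) vector is assumed to come straight from the bitstream instead of being re-estimated by the display-side deinterlacer (block size, names, the bob fallback and the sloppy border handling are just for illustration):

Code:
import numpy as np

def deinterlace_block(field, prev_frame, top, left, mv, bs=16, parity=0):
    # field      : full-height frame in which only lines of `parity` are valid
    # prev_frame : previously reconstructed progressive frame
    # mv         : (dy, dx) block motion vector, reused from the bitstream
    h, w = prev_frame.shape
    out = field[top:top + bs, left:left + bs].astype(np.float64).copy()
    dy, dx = mv
    py, px = top + dy, left + dx                 # motion-compensated position
    mc_ok = 0 <= py and py + bs <= h and 0 <= px and px + bs <= w

    for r in range(bs):
        if (top + r) % 2 == parity:
            continue                             # line transmitted in this field
        if mc_ok:
            # missing line taken from the motion-compensated previous frame
            out[r] = prev_frame[py + r, px:px + bs]
        else:
            # fallback: average the neighbouring valid lines ("bob")
            above = field[max(top + r - 1, 0), left:left + bs]
            below = field[min(top + r + 1, h - 1), left:left + bs]
            out[r] = (above.astype(np.float64) + below.astype(np.float64)) / 2.0
    return out

The point is only that mv comes for free if the codec knows about fields and signals them, whereas an external deinterlacer has to re-estimate the motion from the decoded pictures without knowing how the content was produced.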


It might be that I/we are setting the wrong background for the discussion. 1080p50 is not generally the source, and if one made 1080p50 cameras, they would have worse noise performance. If that is the case, then interlacing could be a reasonable technology in the camera for overcoming sensor limitations, and it may be that deinterlacing to 1080p50 already in the camera would not improve quality per bitrate enough to justify the considerable increase in complexity. I don't know.

-k
