1st January 2010, 17:50   #21
knutinh
Quote:
Originally Posted by Manao View Post
And even so, what legacy ? Is there an analog 1080i ? No. So there is no legacy to preserve here.
Broadcast, storage and interfaces usually do not support 1080p50/60. In other words, 720p60, 1080p30 and 1080i60 are the available options.

Especially for 24fps movie content, a case could be made for a 60i container. But it introduces an endless list of possible screw-ups for engineers and content producers...
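
For a rough sense of the numbers, here is a back-of-the-envelope comparison of raw luma sample rates for those formats (ignoring chroma, blanking and compression entirely):

Code:
# Back-of-the-envelope raw luma sample rates for the formats above
# (no chroma, blanking or compression accounted for).
formats = {
    "720p60":  1280 *  720 * 60,
    "1080i60": 1920 * 1080 * 30,   # 60 fields/s, each field is 540 lines
    "1080p30": 1920 * 1080 * 30,
    "1080p60": 1920 * 1080 * 60,
}
for name, rate in formats.items():
    print(f"{name}: {rate / 1e6:.1f} Mpixel/s")
# 720p60:  55.3 Mpixel/s
# 1080i60: 62.2 Mpixel/s
# 1080p30: 62.2 Mpixel/s
# 1080p60: 124.4 Mpixel/s

1080p60 roughly doubles the raw rate, which is what the legacy broadcast chains and interfaces were never provisioned for.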
Quote:
Especially since the tradeoff is actually worse than people think. On paper, interlaced may sound good: you get the full vertical resolution when there is no motion, and the full temporal resolution when it moves. So 1080p60 and 1080i60 are supposed to be comparable, with 1080i60 saving perhaps 25% bitrate after compression, and reducing the decoding needs.

That's on paper only. As it happens:
- You don't get the full vertical resolution. Oh, sure, there are 1080 rows of pixels, so when the video is still, you're supposed to be looking at a 1080p video. And you do. Except that the video has been low-passed vertically, so you are actually looking at content that only has 600 or so rows of pixels of actual information.
I have heard information to the contrary: SD interlacing includes a vertical lowpass filter, while HD interlacing should not. I do not claim to know this for a fact.
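
To make the claimed difference concrete, here is a small numpy sketch of the two interlacing styles; the [1 2 1]/4 kernel and the function names are my own illustration, not a description of any actual broadcast encoder:

Code:
import numpy as np

def split_fields(frame):
    """Return (top, bottom) fields from a progressive frame (H x W)."""
    return frame[0::2, :], frame[1::2, :]

def vertical_lowpass(frame):
    """Simple [1 2 1]/4 vertical blur, standing in for the pre-filter
    said to be used for SD interlacing to suppress interline twitter."""
    padded = np.pad(frame.astype(np.float32), ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0

frame = np.random.rand(1080, 1920).astype(np.float32)

# "SD-style": low-pass first, then split fields -> vertical detail is lost
# even in perfectly static areas.
top_sd, bot_sd = split_fields(vertical_lowpass(frame))

# "HD-style" (as claimed above): split directly -> full detail in static
# areas, at the risk of twitter on fine horizontal detail.
top_hd, bot_hd = split_fields(frame)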

I would have guessed that a high-end interlacer could be content-adaptive, filtering only moving parts of the scene?
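
Something along these lines, purely hypothetical; the motion threshold and the kernel are arbitrary assumptions on my part, not a description of any real product:

Code:
import numpy as np

def vertical_lowpass(frame):
    # Same [1 2 1]/4 helper as in the sketch above, repeated so this
    # snippet runs on its own.
    padded = np.pad(frame.astype(np.float32), ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0

def adaptive_prefilter(prev_frame, cur_frame, threshold=0.05):
    """Vertically low-pass only the pixels that moved since the previous
    frame; leave static detail untouched."""
    moving = np.abs(cur_frame.astype(np.float32)
                    - prev_frame.astype(np.float32)) > threshold
    return np.where(moving, vertical_lowpass(cur_frame), cur_frame)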

Anyway, using interlacing as an extra layer of lossy compression makes little sense. If interlacing were a good way of removing bits while keeping quality, then the MPEG/ITU codecs would do the interlacing internally on progressive signals, and there would be complete end-to-end control of what had been done and how it should be converted back. The same can be said about colorspace conversion and decimation, though.
Quote:
- You're sending an interlaced signal to the TV, so somebody has to deinterlace it. Guess what, deinterlacing isn't cheap. When accumulated with the cost of MBAFF, I think we reach the computational cost of 1080p60. But I cheat a bit here, because for legacy purposes, you would have needed a deinterlacer for SD content (but not for HD)
Sony and Philips have invested heavily in deinterlacing. One might suspect that they have an interest in keeping legacy formats that other companies do not handle equally well.
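
For reference, even a toy motion-adaptive deinterlacer already involves the kind of per-pixel work below; the real Sony/Philips implementations (motion compensation, edge-directed interpolation) are far more elaborate. The names and thresholds here are my own, not theirs:

Code:
import numpy as np

def bob(field):
    """Line-double one field (H/2 x W) to full height by repetition."""
    return np.repeat(field, 2, axis=0)

def weave(top, bottom):
    """Interleave two fields back into one full-height frame."""
    frame = np.empty((2 * top.shape[0], top.shape[1]), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

def deinterlace(top, bottom, threshold=0.05):
    woven  = weave(top, bottom)   # full detail, but combs on motion
    bobbed = bob(top)             # half vertical resolution, no combing
    # Where the fields disagree strongly, assume motion and use the bobbed
    # result; otherwise keep the woven full detail.
    moving = np.abs(top.astype(np.float32)
                    - bottom.astype(np.float32)) > threshold
    return np.where(np.repeat(moving, 2, axis=0), bobbed, woven)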

I remember reading a Philips paper in which they compared 25p, 50i and 50p encoded as bitrate-constrained MPEG-2. The conclusion was that MPEG-2 + 50i + high-quality deinterlacing had the best rate-distortion characteristics. Perhaps because MPEG-2 lacks a deblocking filter?

Quote:
Now, I may be biased on the subject, and I might miss some arguments in favor of interlacing. But I don't see which ones.
There is a case for interlacing in sensors. If you are bandwidth- or heat-constrained, then 60i may be better than 30p, especially if you can tailor the OLPF (optical low-pass filter) and the deinterlacing for the task. I would deinterlace as early as possible in the chain, though.
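
The readout arithmetic behind that guess, raw luma samples only and assuming the same pixel clock:

Code:
full_frame = 1920 * 1080
field      = 1920 * 540

print("30p:", full_frame * 30 / 1e6, "Mpixel/s")   # ~62.2
print("60i:", field * 60 / 1e6, "Mpixel/s")        # ~62.2
# Same readout bandwidth (and roughly the same heat), but 60i delivers
# 60 temporal samples per second instead of 30.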

There is some physics in sensors that I do not understand very well: integration time and readout bandwidth, for instance.
