Old 5th June 2008, 06:41   #1  |  Link
Vchat20
Registered User
 
Vchat20's Avatar
 
Join Date: May 2007
Location: Warren, Ohio
Posts: 12
Debate: Interlaced vs. Progressive

This is a mind-bender that has been nagging at me for a while: the battle between interlaced and progressive content, and the supposed 'need' to deinterlace interlaced content from the get-go in many encoding situations.

Here's my take on it: if the original content is already progressive, there really isn't much to decide. And there's really no point in 'interlacing' it again, since theoretically you'd just end up with the same frames anyhow.

If the original content was interlaced, on the other hand, leaving it interlaced retains the maximum amount of information. Deinterlacing inherently loses information, even with the best deinterlacing filters. In addition, most playback devices and software with decent built-in filters can deinterlace well enough on the fly during playback; many software DVD decoders are a good example of this.

With all that said, why do you suppose it has long been such a habit to deinterlace every piece of content regardless of destination? What are your opinions on the matter?

FYI: this is not me saying 'OMFG you must stop deinterlacing', but more the start of a debate as to why you think this is still commonplace.
Vchat20 is offline   Reply With Quote
Old 5th June 2008, 07:51   #2  |  Link
GodofaGap
Registered User
 
Join Date: Feb 2006
Posts: 823
I think deinterlacing became popular for three reasons:

- good deinterlacers were too CPU-hungry to be used in real time back then, and some of the newer ones are still too heavy for real-time use even now
- some codecs didn't have an interlaced mode (DivX 3.11 for example)
- a lot of the time videos were resized to arbitrary sub-SD resolutions, which doesn't work with interlaced content.

By the way, one method of deinterlacing doesn't throw away any fields at all: bobbing.
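
Something like this AviSynth sketch illustrates the point (the file name is a placeholder; the built-in Bob() is the crudest bobber, better ones just interpolate the missing lines more cleverly):

Code:
# Bobbing keeps every field: a 25i clip (50 fields/s) becomes a 50p clip.
# "interlaced_clip.avi" is just a placeholder source.
AviSource("interlaced_clip.avi")
AssumeTFF()     # declare the field order (TFF here as an example)
Bob()           # built-in bob: each field is resized to a full frame, so the
                # frame rate doubles and no field is discarded
# Compare with SelectEven(SeparateFields(last)), which throws half the fields away.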

I agree with you that if your target is DVD on TV then deinterlacing doesn't make much sense. Also remember that a lot of 'deinterlacing' going on now in the AviSynth forum is actually trying to undo bad standard conversions.

Last edited by GodofaGap; 5th June 2008 at 07:54.
GodofaGap is offline   Reply With Quote
Old 5th June 2008, 08:06   #3  |  Link
Mug Funky
interlace this!
 
Mug Funky's Avatar
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
a lot of editors and directors want a "filmic" look (i don't like that word, but there aren't any others to use), so they deinterlace.

my personal opinion is if you want it to look like film, shoot it to look like film, or just cough up the dough and actually shoot film. a "film look" is not at all a post decision - it must be considered from before shooting starts. the video look comes from more than just the interlace.

as for deinterlacing everything in the encoding world, it's probably because the majority of codecs don't support interlacing, and a large number of the delivery methods out there don't support it either (you can't bob in youtube or flash).

i think video can look quite good and don't see the point in making something juddery and blurry/jumpy in the name of some impossible (in most video situations) film look.
__________________
sucking the life out of your videos since 2004
Mug Funky is offline   Reply With Quote
Old 5th June 2008, 08:46   #4  |  Link
Revgen
Registered User
 
Join Date: Sep 2004
Location: Near LA, California, USA
Posts: 1,545
When it comes to sports footage, deinterlacing (bob-deinterlacing, anyway) is always better IMO. Even if information is lost, or none is gained, the smooth motion that results from bobbing is pleasing to the eye. But since sports footage always has a lot of motion, real-time bobbers are often not up to par, and the sharp diagonal lines that make up the court or field are easily aliased by conventional deinterlacers. A slow, motion-compensated bobber like MCBob combined with NNEDI is often the best way to make sports footage look good when it's deinterlaced.
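
For example, something along these lines in AviSynth (assuming MCBob.avsi and its plugin dependencies are installed; the source filter, file name and defaults shown are just placeholders):

Code:
# Rough sketch of a slow, motion-compensated bob for interlaced sports footage.
# Requires DGDecode for MPEG2Source, plus the MCBob.avsi script and its
# dependencies (MVTools, NNEDI, etc.); "game.d2v" is a placeholder.
MPEG2Source("game.d2v")
AssumeTFF()      # make sure the field order is declared correctly first
MCBob()          # motion-compensated bob: 29.97i in, 59.94p out, with far
                 # fewer aliased diagonals than a real-time bobber
# Quick-and-dirty preview alternative: Bob()
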
__________________
Pirate: Now how would you like to die? Would you like to have your head chopped off or be burned at the stake?

Curly: Burned at the stake!

Moe: Why?

Curly: A hot steak is always better than a cold chop.
Revgen is offline   Reply With Quote
Old 5th June 2008, 13:19   #5  |  Link
J_Darnley
Registered User
 
J_Darnley's Avatar
 
Join Date: May 2006
Posts: 957
You must deinterlace when watching on a progressive display or you will see combing. When you do it is up to you: in the decoder, in pre-processing, in the second decoder, or in post-processing. I deinterlace because I watch everything on my PC with a progressive CRT monitor. If I made a DVD or something to be shown on my TV, then I might leave it interlaced.
__________________
x264 log explained || x264 deblocking how-to
preset -> tune -> user set options -> fast first pass -> profile -> level
Doom10 - Of course it's better, it's one more.
J_Darnley is offline   Reply With Quote
Old 5th June 2008, 19:48   #6  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
Right, but you're still at the mercy of your DVD player / TV's processor. Who knows, it might do a dumb bob. Obviously if you're encoding a DVD it's best to leave it interlaced, but if you're encoding a file for playback on a PC connected to a progressive display, I find it necessary to encode 60p. This is one reason I really like 720p!
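
For what it's worth, the kind of pre-processing I mean looks roughly like this in AviSynth (placeholder source; the built-in Bob() stands in for whatever smarter bobber you prefer):

Code:
# 1080i60 broadcast -> 720p60 file for playback on a progressive PC display.
# "broadcast.d2v" is a placeholder; requires DGDecode for MPEG2Source.
MPEG2Source("broadcast.d2v")
AssumeTFF()
Bob()                       # bob to 1080p60 (swap in a better bobber if speed allows)
Spline36Resize(1280, 720)   # downscale the now-progressive frames to 720p
# Feed the result to the encoder as plain 59.94p - no interlaced flags needed.
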
__________________
These are all my personal statements, not those of my employer :)
Blue_MiSfit is offline   Reply With Quote
Old 21st December 2009, 15:06   #7  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 42
Interesting (old) thread. Basically we have a set of limitations: camera sensor, codec, display device, and pre/post-processing.

You will never get a higher field rate or spatial resolution than the sensor offers. You will never get to watch a higher frame/field rate or resolution than your display device offers. And in a bandwidth-limited world, your quality may be limited by the lossy compression; codecs may be more efficient on progressive than on interlaced material.

I tend to see interlacing as an analog, perceptually motivated 2:1 compression method. I don't really see its purpose in this digital era, except for legacy reasons. If you want to trade motion/resolution/bandwidth, then use a lossy digital codec that does it intelligently.
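
To make the 2:1 point concrete, this is roughly what that 'analog compression' looks like when you emulate it in AviSynth (ColorBars is only a stand-in for a real 1080p50 source):

Code:
# Emulating interlacing's 2:1 reduction: 1080p50 -> 1080i25 (50 fields/s).
ColorBars(width=1920, height=1080, pixel_type="YV12")
AssumeFPS(50)          # pretend it is a 50 fps progressive master
AssumeTFF()
SeparateFields()       # 100 fields/s, each 1920x540
SelectEvery(4, 0, 3)   # keep the top field of one frame and the bottom of the next
Weave()                # 25 interlaced frames/s: half the samples per second remain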

-k
knutinh is offline   Reply With Quote
Old 21st December 2009, 17:09   #8  |  Link
Dr.Khron
Registered User
 
Dr.Khron's Avatar
 
Join Date: Oct 2006
Location: Gotham City, USA
Posts: 389
I've recently been watching a lot of football at my buddy's house; it looks GOOD on a modern plasma TV.

Am I crazy, or does the live (US NFL) football coverage constantly switch back and forth between interlaced and progressive?

It often seems like you are seeing progressive graphics overlaid on an interlaced video stream, and some video streams seem smoother than others.
Dr.Khron is offline   Reply With Quote
Old 22nd December 2009, 01:36   #9  |  Link
Dark Shikari
x264 developer
 
Dark Shikari's Avatar
 
Join Date: Sep 2005
Posts: 8,666
Quote:
Originally Posted by Dr.Khron View Post
I've recently been watching a lot of football at my buddy's house; it looks GOOD on a modern plasma TV.
Some channels are 720p60, like ESPN, for example.
Dark Shikari is offline   Reply With Quote
Old 22nd December 2009, 01:35   #10  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
Yep, it's a real nightmare with live video - especially when progressive graphics are overlaid on top of an interlaced video stream, with occasional cuts to a (probably) progressive slow-mo camera, and everything else, plus probably some upscaled SD - then toss in commercials!!

ROFL!

Once they start broadcasting 1080p60 this will all go away hehe

~MiSfit
__________________
These are all my personal statements, not those of my employer :)
Blue_MiSfit is offline   Reply With Quote
Old 22nd December 2009, 01:54   #11  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
Exactly, and how nice is that? Especially considering their craptacular encoders

Is anyone actually broadcasting H.264 in the U.S.? Cable / Satellite/ OTA?

~MiSfit
__________________
These are all my personal statements, not those of my employer :)
Blue_MiSfit is offline   Reply With Quote
Old 22nd December 2009, 02:15   #12  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Quote:
Originally Posted by Blue_MiSfit View Post
Is anyone actually broadcasting H.264 in the U.S.? Cable / Satellite/ OTA?
Ever heard of DirecTV?
Guest is offline   Reply With Quote
Old 22nd December 2009, 02:16   #13  |  Link
kieranrk
Registered User
 
Join Date: Jun 2009
Location: London, United Kingdom
Posts: 707
Quote:
Originally Posted by Blue_MiSfit View Post
Exactly, and how nice is that? Especially considering their craptacular encoders

Is anyone actually broadcasting H.264 in the U.S.? Cable / Satellite/ OTA?

~MiSfit
NBC and ABC uplink feeds and probably a few others. Dish as well.
kieranrk is offline   Reply With Quote
Old 22nd December 2009, 11:45   #14  |  Link
Ghitulescu
Registered User
 
Ghitulescu's Avatar
 
Join Date: Mar 2009
Location: Germany
Posts: 5,769
Since most sources are interlaced and most viewing devices are progressive, it's hard to answer.
NB: some progressive camcorders store their movies interlaced, for compatibility reasons. Probably the best example is Canon.
Ghitulescu is offline   Reply With Quote
Old 24th December 2009, 11:52   #15  |  Link
roozhou
Registered User
 
Join Date: Apr 2008
Posts: 1,181
Quote:
Originally Posted by Ghitulescu View Post
Since most sources are interlaced and most viewing devices are progressive, it's hard to answer.
NB: some progressive camcorders store their movies interlaced, for compatibility reasons. Probably the best example is Canon.
And Sony.
It seems Sony's LCD TVs have quite a good deinterlacer. I guess Sony will stick to producing interlaced camcorders because their footage looks better on Sony's LCD TVs than on other brands.
roozhou is offline   Reply With Quote
Old 24th December 2009, 19:01   #16  |  Link
tony uk
Registered User
 
Join Date: Sep 2002
Location: uk
Posts: 2
Why pictures are better in the UK

The reason the picture is better here is because of the interlaced content. Poor people in the USA only have NTSC: Never The Same Colour.
Merry Christmas from the UK
tony uk is offline   Reply With Quote
Old 24th December 2009, 09:20   #17  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
Quote:
I tend to see interlacing as an analog, perceptually motivated 2:1 compression method. I don't really see its purpose in this digital era, except for legacy reasons.
And even so, what legacy? Is there an analog 1080i? No. So there is no legacy to preserve here.

Quote:
If you want to trade motion/resolution/bandwidth, then use a lossy digital codec that does it intelligently.
Especially since the tradeoff is actually worse than people think. On paper, interlacing may sound good: you get the full vertical resolution when there is no motion, and the full temporal resolution when there is. So 1080p60 and 1080i60 are supposed to be comparable, with 1080i60 saving perhaps 25% bitrate after compression and reducing the decoding requirements.

That's on paper only. As it happens:
  • You don't get the full vertical resolution. Oh, sure, there are 1080 rows of pixels, so when the video is still you're supposed to be looking at a 1080p picture. And you are. Except that the video has been low-passed vertically, so you are actually looking at content that carries only 600 or so rows of actual information.
  • You only decode 30 frames per second instead of 60. Yeah, sure, but you get the privilege of decoding MBAFF. On paper it's 'just' a coding tool that doesn't increase decoding complexity. But it does. Strongly. So indeed, it won't reach the computational needs of 1080p60, but there isn't a 2:1 margin either.
  • You're sending an interlaced signal to the TV, so somebody has to deinterlace it. Guess what: deinterlacing isn't cheap. Added to the cost of MBAFF, I think we reach the computational cost of 1080p60. But I cheat a bit here, because for legacy reasons you would have needed a deinterlacer for SD content anyway (though not for HD).

Now, I may be biased on the subject, and I might be missing some arguments in favor of interlacing. But I don't see which ones.
__________________
Manao is offline   Reply With Quote
Old 1st January 2010, 17:50   #18  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 42
Quote:
Originally Posted by Manao View Post
And even so, what legacy? Is there an analog 1080i? No. So there is no legacy to preserve here.
Broadcast, storage and interfaces usually do not support 1080p50/60. In other words, 720p60, 1080p30 and 1080i60 are the available options.

Especially for 24fps movie content, the case could be made for a 60i container. But it introduces an endless list of possible screw-ups for engineers and content producers...
Quote:
Especially since the tradeoff is actually worse than people think. On paper, interlacing may sound good: you get the full vertical resolution when there is no motion, and the full temporal resolution when there is. So 1080p60 and 1080i60 are supposed to be comparable, with 1080i60 saving perhaps 25% bitrate after compression and reducing the decoding requirements.

That's on paper only. As it happens:
  • You don't get the full vertical resolution. Oh, sure, there are 1080 rows of pixels, so when the video is still you're supposed to be looking at a 1080p picture. And you are. Except that the video has been low-passed vertically, so you are actually looking at content that carries only 600 or so rows of actual information.
I have heard information to the contrary: SD interlacing includes a vertical lowpass filter, while HD interlacing should not. I do not claim to know this for a fact.

I would have guessed that a high-end interlacer could be content-adaptive, filtering only moving parts of the scene?

Anyway, using interlacing as an extra layer of lossy compression makes little sense. If interlacing were a good way of removing bits while keeping quality, then the MPEG/ITU codecs would do interlacing internally on progressive signals. Then there would be complete end-to-end control over what had been done and how it should be converted. The same can be said about colorspace conversion and decimation, though.
Quote:
  • You're sending an interlaced signal to the TV, so somebody has to deinterlace it. Guess what: deinterlacing isn't cheap. Added to the cost of MBAFF, I think we reach the computational cost of 1080p60. But I cheat a bit here, because for legacy reasons you would have needed a deinterlacer for SD content anyway (though not for HD).
Sony and Philips have invested heavily in deinterlacing. One might suspect that they have an interest in keeping alive legacy formats that other companies do not handle equally well.

I remember reading a Philips paper in which they compared 25p, 50i and 50p when encoding as bitrate-constrained MPEG-2. The conclusion was that MPEG-2 + 50i + HQ deinterlacing had the best rate-distortion characteristics. Perhaps because MPEG-2 lacks a deblocking filter?

Quote:
Now, I may be biased on the subject, and I might be missing some arguments in favor of interlacing. But I don't see which ones.
There is a case for interlacing in sensors. If you are bandwidth- or heat-constrained, then 60i may be better than 30p, especially if you can tailor the OLPF and the deinterlacing for the task. I would deinterlace as early as possible, though.

There is some physical stuff in sensors that I do not understand very well: integration time and bandwidth, for instance.

Last edited by knutinh; 1st January 2010 at 17:57.
knutinh is offline   Reply With Quote
Old 5th January 2010, 16:16   #19  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: UK
Posts: 1,673
Quote:
Originally Posted by knutinh View Post
Anyway, using interlacing as an extra layer of lossy compression makes little sense. If interlacing were a good way of removing bits while keeping quality, then the MPEG/ITU codecs would do interlacing internally on progressive signals.
Of course they don't, even though interlacing does (at least partly) achieve the gains it's supposed to. That's why it's used. It's not a conspiracy, and it's not a mistake - it actually works (i.e. gives better quality / lower bitrates). Even with H.264 (if the encoder handles interlacing well enough).

The other reason is that 1080 is a bigger number than 720 (and does look sharper on most TVs - even the previously common 768-line ones) - but the technology isn't out there to do 1080p50 yet, so you're stuck with interlacing.


It does make logical sense that packaging the (adaptive) interlacing and (adaptive) deinterlacing into the encoder should make it work better than externally - but it's more complexity: more tuning in the encoder; more work in the decoder. Has anyone ever done it?

Cheers,
David.
2Bdecided is offline   Reply With Quote
Old 5th January 2010, 18:23   #20  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 42
Quote:
Originally Posted by 2Bdecided View Post
Of course they don't, even though interlacing does (at least partly) achieve the gains it's supposed to. That's why it's used. It's not a conspiracy, and it's not a mistake - it actually works (i.e. gives better quality / lower bitrates). Even with H.264 (if the encoder handles interlacing well enough).
I am not suggesting that it is a conspiracy; I am using it as an argument that you are wrong :-) Can you offer some references showing that H.264 with interlacing has better PSNR/SSIM/subjective quality than H.264 without?

For your statements to be generally right, I think one would expect that compressing any original 1080p50 sequence as:
1) 1080p50, H.264, X Mbps
2) 1080i50, H.264, X Mbps
3) 720p50, H.264, X Mbps

would (on average) look best as 2) for any bitrate X. I highly doubt that to be true, but I have read Philips white papers suggesting that they could make 2) come out on top if they used:
A) Philips' advanced deinterlacing
B) MPEG-2 without deblocking filtering
C) constrained bitrates

I think that B) was suggested as an important part of the explanation. (A rough sketch of how the three candidates might be prepared for such a test follows below.)
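
Roughly what I mean by preparing that test, in AviSynth (ColorBars is only a stand-in for real 1080p50 test material; each branch would then be encoded at the same bitrate X):

Code:
# Preparing the three candidates from one (stand-in) 1080p50 master.
src = ColorBars(width=1920, height=1080, pixel_type="YV12").AssumeFPS(50)

p1080 = src                                                # 1) 1080p50 as-is
i1080 = src.AssumeTFF().SeparateFields().SelectEvery(4, 0, 3).Weave()  # 2) 1080i25 (50 fields/s)
p720  = src.Spline36Resize(1280, 720)                      # 3) 720p50

# Return whichever variant is being piped into the encoder for this run:
return p1080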

The standardization bodies are competitive about compression gains. If integrating interlacing/deinterlacing into the codec resulted in improved picture quality for a given bitrate and a given implementation cost, surely someone would have suggested it and had it put into the standard?
Quote:
It does make logical sense that packaging the (adaptive) interlacing and (adaptive) deinterlacing into the encoder should make it work better than externally - but it's more complexity: more tuning in the encoder; more work in the decoder. Has anyone ever done it?
Things such as the deblocking filter and B-frames (framerate upconversion) have been integrated into codecs, even though they initially seem to have come from outside the codec. The reason seems to be that they offered good picture-quality gains for the bitrate/complexity cost, and they could do better inside the codec than outside.

I think all sense indicates that if the source is progressive (not always true), then doing the interlacing within the codec would give major benefits for image quality, and possibly for total complexity, compared with doing it externally. Advanced deinterlacers do all kinds of "artificial intelligence" guesswork that they should not have to do given precise signalling of how the content was actually produced. Motion vectors could be jointly optimized for tracking motion and describing candidates for filling in missing lines, saving a lot of cycles and having the luxury of optimizing against the ground truth in the encoder.


It might be that I/we are setting the wrong background for the discussion. 1080p50 is not generally the source, and if one made 1080p50 cameras, they would have worse noise performance. If that is the case, then interlacing could be a reasonable technology in the camera for overcoming sensor limitations, and it may also be that deinterlacing in the camera to 1080p50 would not improve quality per bit enough to justify the considerable increase in complexity. I don't know.

-k

Last edited by knutinh; 5th January 2010 at 18:53.
knutinh is offline   Reply With Quote