1st January 2010, 18:42   #23
knutinh
Quote:
Originally Posted by Manao View Post
That's interesting. I must admit that I noticed there was a lowpassing when I took a 720p50 content and tried to turn it into a 576i50 one. It was unwatchable without (strong) lowpassing, thus I concluded that lowpassing was necessary for interlacing, and extrapolated the same was happening in HD. I was confirmed in that opinion when I read that document, that explains how a 2160p video was transformed into a 1080i one (a 540p field is created by averaging four lines), but perhaps that document is incorrect.
Thanks for the link! Repeating it here for the discussion:
Quote:
Originally Posted by ebu
a) Interlacing to 1080i
For interlacing, every second 2164p-frame was shifted vertically two lines downwards. After deleting the first two and last two lines in the frames that were shifted (and the last four lines for the frames that were not shifted) to get 2160 lines again, each frame was filtered to 540 lines by line averaging (using Shake’s Box Filter). Horizontal filtering from 3840 to 1920 columns was performed using Shake’s Sinc Filter to benefit in perceived sharpness from the “oversampled” master. The two 540-line fields were then weaved into one single 1080-line interlaced frame. This process resembles the process in any video camera performing interlace in the basic default ‘Field Integration Mode’ – i.e. like a 2160 line video camera sensor reading out the average of the sensor’s line 1+2+3+4 to Field 1; line 3+4+5+6 to Field 2; line 5+6+7+8 to Field 1 etc.
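If I read that correctly, the vertical part boils down to averaging groups of four master lines, with the second field's groups offset two lines further down, and then weaving. Just to make the line arithmetic concrete, here is a rough numpy sketch of how I picture it. The 2164x3840 luma-only frames, one source frame per field period, the top-field-first order and the crude 2-tap horizontal average (standing in for Shake's sinc filter) are all my own assumptions, not taken from the document:
Code:
import numpy as np

def field_540(frame, shifted):
    """Average groups of four master lines into one 540-line field.
    Unshifted frames feed field 1 (lines 0+1+2+3, 4+5+6+7, ...),
    shifted frames feed field 2 (lines 2+3+4+5, 6+7+8+9, ...).
    Indexing is 0-based here; the quote counts lines from 1."""
    off = 2 if shifted else 0
    lines = frame[off:off + 4 * 540]               # 2160 lines of the 2164-line master
    return lines.reshape(540, 4, -1).mean(axis=1)  # box filter over each group of 4 lines

def halve_width(field):
    """Crude 2-tap stand-in for the 3840 -> 1920 horizontal sinc resampling."""
    return field.reshape(field.shape[0], 1920, 2).mean(axis=2)

def weave_1080i(frames_2164):
    """Pair consecutive source frames (one per field period) into 1080i frames.
    Top-field-first order is my assumption; the quote does not say."""
    out = []
    for f_top, f_bot in zip(frames_2164[0::2], frames_2164[1::2]):
        top = halve_width(field_540(f_top, shifted=False))  # 540 x 1920
        bot = halve_width(field_540(f_bot, shifted=True))   # 540 x 1920
        frame = np.empty((1080, 1920), dtype=top.dtype)
        frame[0::2] = top
        frame[1::2] = bot
        out.append(frame)
    return out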
http://sci.tech-archive.net/Archive/.../msg00026.html
Quote:
Originally Posted by jens
Normally, if you have interlaced scanning with a video (NTSC) signal, and do field integration, you put lines 1+2, 3+4, 5+6... together for the first half-videoframe, and 2+3, 4+5, 6+7... for the second half-videoframe.

This leads to a somewhat lower vertical resolution because of the interpolation, but more than 240 lines.

In frame integration you normally get lines 1,3,5... for the first half, and 2,4,6... for the second. So you get a better vertical resolution (no interpolation), but you lose half of the charges, which results in higher noise.

Jens
http://www.damtp.cam.ac.uk/lab/digimage/cameras.htm
Quote:
To complicate matters further, different cameras construct the two video fields in different manners. In some cameras the even field corresponds to the even lines of pixels in the CCD chip, and the odd field to the odd lines of pixels in the CCD chip.
...
Slightly better are cameras which produce an average of the even lines and the preceding odd lines for the even field, and the odd lines and the preceding even lines for the odd field.
To me it seems that both ways of producing interlaced content are feasible, and might(?) both be found in different video cameras?
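For what it's worth, here is how I picture the two readout modes from the quotes, as a small numpy sketch starting from one full-resolution sensor readout. The 0-based indexing, the float conversion and the handling of the bottom line are my own simplifications, not something taken from the quotes:
Code:
import numpy as np

def fields_frame_integration(img):
    """Frame integration: field 1 = sensor lines 1,3,5,... and field 2 = lines
    2,4,6,... (counting like the quotes, i.e. 1-based).  Full vertical detail
    per field, but each field only collects half of the charge."""
    return img[0::2], img[1::2]

def fields_field_integration(img):
    """Field integration: field 1 = averages of lines 1+2, 3+4, 5+6, ... and
    field 2 = averages of lines 2+3, 4+5, 6+7, ...  Softer vertically, but
    each field line integrates two sensor lines' worth of charge."""
    img = img.astype(np.float64)              # avoid 8-bit overflow in the sums
    field1 = (img[0::2] + img[1::2]) / 2.0    # 1+2, 3+4, 5+6, ...
    field2 = (img[1:-1:2] + img[2::2]) / 2.0  # 2+3, 4+5, 6+7, ... (one line short
                                              # at the bottom; real cameras differ)
    return field1, field2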

There will typically be an optical lowpass filter in front of the sensor that smears out details to some degree (one has to show some respect to Nyquist), and optics seldom have perfect spatial frequency response either.
Quote:
Originally Posted by Manao
There have been a lot of studies regarding 1080ixx vs 1080pxx. AFAIK, all of them but one (EBU's) concluded 1080i was better. I don't really know what to make of that, and I would have liked to see 720pxx thrown in the lot too.
I think that finding a suitable "success metric" is going to be difficult and political.

Do you use PSNR, SSIM, or real people?

Do you average across all codecs and deinterlacer implementations (optimizing mean viewer experience), or only for some idealized reference implementation?

I think that 1080p60 with good lossy compression will be best on a quality vs bandwidth benchmark. But is that all? How do you factor in quality vs price?
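For reference, a per-frame PSNR number is about as simple as a metric gets; the sketch below just assumes 8-bit frames as numpy arrays. Deciding what to average it over (codecs, deinterlacers, content) is exactly where it gets political:
Code:
import numpy as np

def psnr(reference, decoded, peak=255.0):
    """PSNR in dB between two equally sized 8-bit frames."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)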

BBC concluded that 720p was enough for the UK public as long as screen sizes did not go much beyond 50".

Last edited by knutinh; 1st January 2010 at 18:56.