|
21st April 2012, 15:51 | #2
Registered User
Join Date: Sep 2007
Posts: 5,377

Yes, visible quality loss.

Whether or not you see it depends on many factors, including the viewer and the type of content. Deterioration is more visible on things like graphics (such as titles), anime, or content that has clear, crisp color borders to begin with - there will be blurring.
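The blurring on crisp color borders comes from the chroma downsample averaging across the edge. A minimal NumPy sketch of the general mechanism (an illustration only, not AviSynth's actual filter kernels):

```python
import numpy as np

# One row of Cr values across a sharp red-to-grey boundary.
cr = np.array([128, 128, 128, 220, 220, 220, 220, 220], dtype=float)

# 4:2:0-style horizontal subsampling: average neighbouring pairs.
sub = cr.reshape(-1, 2).mean(axis=1)   # the edge pair averages to 174

# Upsample back to full width (nearest-neighbour for brevity).
up = np.repeat(sub, 2)

# The single sharp step has become a two-step ramp: the edge is
# softened, and the round trip is not lossless.
```

Content whose edge happens to fall between chroma sample pairs survives intact, which is why flat live-action footage often looks fine while thin colored text does not.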
21st April 2012, 16:53 | #4
Registered User
Join Date: Sep 2007
Posts: 5,377

Quote:
Have you ever tried redoing a DVD menu or titles in an RGB program like Photoshop, AE, video editors, etc.? It deteriorates noticeably; edges become blurry, even before using lossy compression. It's due to color model (colorspace) conversion - the color information is upscaled then downscaled. With other types of content, like fast-moving live action, you typically won't notice unless you zoom in and go frame by frame.
21st April 2012, 16:59 | #5
Registered User
Join Date: Dec 2002
Posts: 5,565

There may be rounding errors in the color conversions, but the chroma up- and downscaling can be lossless, depending on the algorithm. If I use PointResize(), it's lossless. So it does not have to result in any blurriness.
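The claim that point resampling can round-trip losslessly is easy to check in a toy model. A NumPy sketch (2x nearest-neighbour upscale by sample repetition, downscale by keeping the top-left sample of each 2x2 block - a stand-in for PointResize, not AviSynth itself):

```python
import numpy as np

# A small chroma plane with hard transitions.
chroma = np.array([[ 16, 240,  16],
                   [240,  16, 240],
                   [ 90, 200,  50]], dtype=np.uint8)

# Nearest-neighbour 2x upscale: every sample is simply repeated.
up = chroma.repeat(2, axis=0).repeat(2, axis=1)

# 2x downscale that keeps the top-left sample of each 2x2 block.
down = up[::2, ::2]

# The round trip is exact - no averaging, no rounding, no blurring.
assert np.array_equal(down, chroma)
```

This only works because the downsampler picks exactly the positions the upsampler wrote the originals to; any phase mismatch between the two breaks it, which is what the rest of the thread turns out to be about.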
21st April 2012, 17:10 | #6
Registered User
Join Date: Sep 2007
Posts: 5,377

Quote:
or using some other chroma resampling method?

If you have time, here is a test video from a DVD menu for you. (The goal was to redo the text animating on, but the reason is not important.) It's almost a worst-case scenario, because of the thin red text.

This uses the "normal" ConvertToRGB().ConvertToYV12() routine (and then ConvertToRGB for the screenshot). The result is similar in other programs, which use slightly different chroma sampling methods.

Comparisons:
Original YV12 (and RGB for screenshot): http://i40.tinypic.com/1zofcjp.png
YV12 => RGB => YV12 (and RGB for screenshot): http://i40.tinypic.com/348ivbo.png
Original .m2v: http://www.mediafire.com/?52xp6d7p5881fvo

Last edited by poisondeathray; 21st April 2012 at 17:15.
21st April 2012, 17:32 | #10
Registered User
Join Date: Sep 2007
Posts: 5,377

Quote:
Code:
ConvertToRGB32(chromaresample="point")
ConvertToYV12(chromaresample="point")

(And back to RGB for consistency of the screenshots - I used the same default ConvertToRGB() for the screenshot, which uses bicubic in Avisynth 2.6.)

http://i41.tinypic.com/15dtlcz.png

Still some blurring - and it introduces aliasing.
23rd April 2012, 06:16 | #11
Registered User
Join Date: Sep 2006
Posts: 1,657

Quote:
I'm trying to use Neat Video with AviSynth plugins. I tried to load it as a plugin in an .avs, but it requires a ConvertToRGB32 before it.
23rd April 2012, 10:16 | #12
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431

Quote:
Code:
ConvertToYV24(chromaresample="point")
MergeChroma(PointResize(width, height, 0, 1))
ConvertToRGB32()
... # filtering in RGB32
ConvertToYV12(chromaresample="point")
23rd April 2012, 15:52 | #13
Registered User
Join Date: Sep 2006
Posts: 1,657

Quote:
And one more thing: if I run Neat Video in VirtualDub and output the result as Lagarith lossless in YV12 before putting it into the .avs, will I get the same result as the AviSynth conversion above?
21st April 2012, 17:52 | #14
Registered User
Join Date: Dec 2002
Posts: 5,565

I don't know where that error comes from. If you upscale with PointResize by a factor of 2 and downscale by a factor of 2 again, you get the same result. Either chromaresample="point" does not do what I think it does, or it has something to do with the MPEG-2 chroma placement. I will try to make a test to reproduce the issue myself, but I stand by my original statement: this is algorithm-dependent, and it is possible to do it without additional blurriness, though not without color skewing (putting some edge cases aside).
21st April 2012, 17:57 | #15
Registered User
Join Date: Sep 2007
Posts: 5,377

Quote:
http://forum.doom9.org/showthread.php?p=1569907

On a practical level, this loss occurs even in other programs that use "nearest neighbor" - so how do you do it without blurriness or other detrimental effects, like aliased edges, in any program? You can use the YV12 MPEG-2 clip provided above in the mediafire link as a test example.

Last edited by poisondeathray; 21st April 2012 at 18:00.
22nd April 2012, 02:37 | #17
Registered User
Join Date: Mar 2012
Posts: 10

yv12->yv24->yv12 can be lossless with point resampling of chroma, provided the algorithm is correct. Avisynth's yv12->yv24 with chromaresample="point" is rather incorrect.

Consider a set of 4 luma samples in 4:2:0. Place them on a coordinate plane with the top-left one at (0, 0) and the bottom-right one at (1, 1). MPEG-1 chroma is then sited at (0.5, 0.5), and MPEG-2 chroma at (0, 0.5).

Avisynth's yv24->yv12 chromaresample="point" uses chromaloc (0, 0). Due to the limitations of point resize, only luma-cosited chroma is possible; this is OK.

Avisynth's yv12->yv24 chromaresample="point" uses chromaloc (1.5, 1.5) for MPEG-1, and chromaloc (0.5, 1.5) for MPEG-2. This is wrong and makes no sense at all. MPEG-1 can be exactly represented at (0.5, 0.5), and MPEG-2 can be better represented with either (0.5, 0.5) or (-0.5, 0.5). Using a correct MPEG-1 yv12->yv24 would allow yv12->yv24->yv12 to be lossless.

Last edited by natt; 22nd April 2012 at 02:43.
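The chromaloc argument can be modelled numerically: with correct siting, point up/downsampling forms an exact inverse pair, while a one-sample phase error in the upsample makes the decimator read the wrong positions. A NumPy toy model (wrap-around borders, so a simplification of the real resizer):

```python
import numpy as np

c = np.arange(16, dtype=np.uint8).reshape(4, 4)  # a 4:2:0 chroma plane

# Correct point upsample: each chroma sample covers its own 2x2 block,
# so decimating at stride 2 recovers the plane exactly.
up_good = c.repeat(2, axis=0).repeat(2, axis=1)
assert np.array_equal(up_good[::2, ::2], c)

# A one-sample phase error in the upsample (in the spirit of the
# mis-sited chromaloc described above) makes the decimator sample
# the wrong positions, so the round trip is no longer lossless.
up_bad = np.roll(up_good, 1, axis=1)
assert not np.array_equal(up_bad[::2, ::2], c)
```

Note the original samples still exist in up_bad, just displaced, which is why the shift is correctable rather than destructive.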
22nd April 2012, 19:49 | #18
Registered User
Join Date: Dec 2007
Location: Germany
Posts: 632

Thanks natt, I had wondered why RGB<->YV24 conversions were not lossless color-wise using chromaresample=point - now I know why.

I can't test it right now, but if memory serves me right, there was also something fishy going on when doing an (interlaced) YV12<->YUY2/YV16 conversion using point resize: converting back and forth is not lossless, as one would expect it to be.
22nd April 2012, 21:53 | #19
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431

Quote:
It is consistent with the way PointResize works, which is to use, for each output pixel, the nearest source pixel looking above and to the left (hence not strictly 'nearest' neighbour, as it ignores nearer pixels below or to the right).

However, the effect is that YV12->YV24->YV12 using chromaresample="point" introduces a chroma shift. But, as the original pixels are still preserved in YV24 (just in the 'wrong' place), this can be corrected by shifting them back before reconversion to YV12. Thus,

Code:
ConvertToYV24(chromaresample="point")
MergeChroma(PointResize(width, height, 0, 1)) # mpeg2
# or MergeChroma(PointResize(width, height, 1, 1)) for mpeg1
ConvertToYV12(chromaresample="point")

See also posts #284-290 of this thread.
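The shift-back correction can be checked in the same kind of toy model: the mis-phased point upsample preserves the original chroma samples, only displaced, so sliding the plane back before decimation recovers them exactly. A NumPy sketch (np.roll wraps at the border, unlike PointResize's edge replication, so this models interior pixels only):

```python
import numpy as np

c = np.arange(16, dtype=np.uint8).reshape(4, 4)  # original 4:2:0 chroma

# Model a mis-phased point upsample: the samples survive, displaced
# by one pixel down and right.
up = np.roll(c.repeat(2, axis=0).repeat(2, axis=1), (1, 1), axis=(0, 1))
assert not np.array_equal(up[::2, ::2], c)   # naive decimation is wrong

# The fix: shift the plane back one pixel up/left (the role played by
# MergeChroma(PointResize(...)) in the script above), then decimate.
fixed = np.roll(up, (-1, -1), axis=(0, 1))
assert np.array_equal(fixed[::2, ::2], c)    # original chroma recovered
```

The exact src_left/src_top offsets differ between the MPEG-1 and MPEG-2 sitings, matching the two MergeChroma variants in the script.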
5th March 2015, 09:08 | #20
Registered User
Join Date: Mar 2010
Posts: 79

Quote:
Also, does anyone by any chance know what method Sony Vegas uses when converting imported media to RGB in its own editing space?

Last edited by Kein; 5th March 2015 at 09:31.
Tags: colorspace conversion, quality, rgb32, yv12