Old 21st February 2015, 23:57   #49  |  Link
Asmodian
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by nevcairiel View Post
Deinterlacing has to be before chroma scaling, scaling interlaced chroma would be extra complexity and not make much sense, especially considering that GPUs prefer 4:2:0 in form of NV12 anyway.
Thanks, that makes me feel more confident in the placement of deinterlacing.

Quote:
Originally Posted by Arm3nian View Post
Ppi is pixel density. The definition of resolution is the amount of information/detail in an image. If one image has 4x the information as another, it is 4x the resolution. Why break up the relation between the horizontal and vertical and call it 2x the resolution, when doubling in both directions implies you get 4x the original. Makes no sense.
Because we are talking about spatial resolution. If an image has 4x the information of another, it is 2x the resolution. This is based on the definition of resolution used by everyone except digital camera makers, who want to be able to say they quadrupled the resolution when they really only doubled it.

Using your nomenclature, 2x the resolution implies either an irrational scaling factor of the square root of 2 or doubling only one dimension. It sounds great for people trying to sell new models of digital cameras, but no one else talks about resolution this way.

I am describing a standard so everyone uses the same words to mean the same thing. madshi obviously agrees, given image doubling, and since this thread is about madVR options, let us stick to the terminology madVR uses. Everyone else who works with digital image processing thinks madshi got it right, so this should not cause confusion. When you scale an image in Photoshop to "200%" you get 4 times the number of pixels.
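The arithmetic behind this convention can be sketched in a few lines: doubling the spatial resolution doubles both axes, which quadruples the pixel count (the helper name here is made up for illustration):

```python
def scale_image_dims(width, height, spatial_factor):
    """Apply a spatial scaling factor to both axes and report how many
    times the pixel count grows relative to the original."""
    new_w = width * spatial_factor
    new_h = height * spatial_factor
    pixel_ratio = (new_w * new_h) / (width * height)
    return (new_w, new_h), pixel_ratio

# Photoshop's "200%" (2x spatial resolution) quadruples the pixels:
dims, ratio = scale_image_dims(1920, 1080, 2)
print(dims, ratio)  # (3840, 2160) 4.0
```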

Quote:
Originally Posted by Arm3nian View Post
Also in games for example, 4xSSAA is 2x in the horizontal and 2x in the vertical. But it is still called 4x, because the result is 4x greater.
No it isn't; that is 2x SSAA. 4x SSAA uses 16 times the pixels. 2x SSAA does not filter only the vertical or only the horizontal dimension, and you cannot sample at a factor of the square root of two pixels.

I have not been able to find a statement from AMD or Nvidia but this is representative of what I was able to find:
Quote:
SSAA, or Super-Sample Anti-Aliasing is a brute force method of anti-aliasing. It results in the best image quality but comes at a tremendous resource cost. SSAA works by rendering the scene at a higher resolution: 2x SSAA renders the scene at twice the resolution along each axis (4x the pixel count), 4x SSAA renders the scene at four times the resolution along each axis (16x the pixel count), and 8x SSAA renders the scene at eight times the resolution along each axis (64x the pixel count). The final image is produced by downsampling the massive source image using an averaging filter. This acts as a low pass filter which removes the high frequency components that caused the jaggedness.
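The quoted description boils down to two ideas that are easy to show in code: the per-axis naming convention (Nx SSAA means N times the resolution along each axis, so N*N times the pixels), and the averaging filter used to downsample. This is a minimal sketch, not how any real driver implements it:

```python
def ssaa_pixel_cost(factor):
    """Pixel-count multiplier for Nx SSAA under the per-axis convention
    described in the quote (Nx per axis -> N*N times the pixels)."""
    return factor * factor

def downsample_2x(pixels):
    """Average non-overlapping 2x2 blocks of a grayscale image, the box
    (averaging) filter the quote describes. `pixels` is a list of rows;
    both dimensions must be even."""
    out = []
    for y in range(0, len(pixels), 2):
        row = []
        for x in range(0, len(pixels[0]), 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total / 4)
        out.append(row)
    return out

for n in (2, 4, 8):
    print(f"{n}x SSAA renders {ssaa_pixel_cost(n)}x the pixels")

print(downsample_2x([[0, 4], [8, 4]]))  # [[4.0]]
```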
Quote:
Originally Posted by Warner306 View Post
Your chart appears logical to me. But I also question why the output would be converted to RGB, immediately converted back to YCbCr only to be converted back to RGB once again when image doubling. Are you sure this is correct?
I am fairly sure it is correct.

Quote:
Originally Posted by huhn View Post
you should ask madshi. but he clearly said it is converted back to YCbCr using BT 709.
I was looking for that quote but I couldn't find it. This is also relevant and changes image doubling in my chart.

Quote:
Originally Posted by madshi View Post
Quote:
Originally Posted by cyberbeing View Post
And to make sure I'm no longer confused, what is the result of the following?

640x360 4:2:0 -> 1920x1080
Chroma Upscaling = NNEDI3
NNEDI3 double Luma = Enabled
NNEDI3 double Chroma = Enabled
NNEDI3 quadruple Luma = Enabled
NNEDI3 quadruple Chroma = Disabled

  • Conversion from 4:2:0 YCbCr (320x180) to 4:4:4 RGB (640x360) with 'chroma upscaling' setting NNEDI3
  • Conversion from 640x360 4:4:4 RGB -> 640x360 4:4:4 YCbCr.
  • Y & CbCr channels doubled to 1280x720 4:4:4 YCbCr with NNEDI3
  • Y channel only doubled to 2560x1440

Is it then:
  • CbCr channels upscaled 1280x720->1920x1080 with 'image upscaling' setting
  • Y channel downscaled 2560x1440->1920x1080 with 'image downscaling' setting
  • Conversion from 4:4:4 YCbCr to RGB

Or is it:
  • CbCr channels upscaled 1280x720->2560x1440 with 'image upscaling' setting
  • Conversion from 4:4:4 YCbCr to RGB
  • 2560x1440 RGB -> 1920x1080 RGB with 'image downscaling' setting
It is the first ("CbCr channels upscaled 1280x720->1920x1080 with 'image upscaling' setting").
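Putting madshi's answer together, the resolution at each stage of the 640x360 4:2:0 -> 1920x1080 example can be traced as follows. This is only a restatement of the quoted steps (stage labels are paraphrased), not madVR internals:

```python
# Resolution trace for the quoted example: source 640x360 4:2:0,
# target 1920x1080, NNEDI3 doubling luma+chroma, quadrupling luma only.
stages = [
    ("chroma upscaling (NNEDI3): 4:2:0 chroma 320x180 -> 4:4:4 RGB", (640, 360)),
    ("convert RGB -> 4:4:4 YCbCr (BT.709)",                          (640, 360)),
    ("NNEDI3 double: Y and CbCr channels",                           (1280, 720)),
    ("NNEDI3 quadruple: Y channel only",                             (2560, 1440)),
    ("CbCr: 'image upscaling' 1280x720 -> target",                   (1920, 1080)),
    ("Y: 'image downscaling' 2560x1440 -> target",                   (1920, 1080)),
    ("convert 4:4:4 YCbCr -> RGB for output",                        (1920, 1080)),
]

for label, (w, h) in stages:
    print(f"{label}: {w}x{h}")
```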
Quote:
Originally Posted by vivan View Post
About debanding.
RGB: if you check shader code, you'll see that it converts from rgb to ycbcr and then from ycbcr to rgb.

Subsampling: I believe that DX9 doesn't support writing into 16-bit subsampled textures, so... Emulating it with 2 textures is not only painful, but would also make it slower (+25% pixels to write; it doesn't matter that they have fewer channels).
Actually I don't believe that it even supports subsampled 10/16-bit textures (this is probably the reason why we won't see 10-bit native DXVA in madVR).
So doing it after chroma upsampling is a hell of a lot easier and doesn't impact performance.
Ok, thanks again, that also makes sense. So debanding also happens after the conversion to RGB, since the shader starts by converting from RGB.
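The RGB <-> YCbCr round trip vivan describes the deband shader doing can be sketched with the BT.709 luma coefficients. This is a plain full-range float version for illustration only, not madVR's actual shader code (which also handles range and quantization):

```python
# BT.709 luma coefficients; KG is derived so the three sum to 1.
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr using BT.709 coefficients."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse of rgb_to_ycbcr; recovers RGB up to float rounding."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# The round trip the shader performs is (near-)lossless:
print(ycbcr_to_rgb(*rgb_to_ycbcr(0.5, 0.25, 0.75)))
```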

Last edited by Asmodian; 22nd February 2015 at 00:43.