21st February 2015, 23:34   #48
vivan
/人 ◕ ‿‿ ◕ 人\
Join Date: May 2011
Location: Russia
Posts: 643
I remember reading about it too. Keeping RGB between shaders is probably easier (from a development point of view), even though it's not as efficient.

Quote:
Originally Posted by Arm3nian
Also in games for example, 4xSSAA is 2x in the horizontal and 2x in the vertical. But it is still called 4x, because the result is 4x greater.
And some people call 4-tap filters (where 4 is the radius) 8-tap (diameter) or even 64-tap (total pixels sampled), because bigger numbers are better.

NNEDI is a 1-dimensional filter that doubles the resolution.
In the case of anamorphic video it's possible that it will run in only one direction (e.g. 16:9 1440x1080 with doubling set to always). In other cases it runs twice (once in each direction).
So you should think not in terms of resolution, but in terms of directions. It's just that keeping separate settings for the two directions doesn't make much sense, so they're combined.
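To illustrate the "two passes, one per direction" idea, here's a minimal sketch in NumPy. The real NNEDI predicts the new in-between samples with a neural net; linear interpolation stands in for that here, but the separable structure (run the 1-D doubler once per axis) is the point:

```python
import numpy as np

def double_1d(img, axis):
    """Double resolution along one axis: keep the original samples and
    fill in new in-between samples (NNEDI would predict these with a
    neural net; linear interpolation is only a stand-in here)."""
    a = np.moveaxis(img, axis, 0)
    out = np.empty((a.shape[0] * 2,) + a.shape[1:], dtype=a.dtype)
    out[0::2] = a                      # original lines survive untouched
    nxt = np.roll(a, -1, axis=0)
    nxt[-1] = a[-1]                    # clamp at the border
    out[1::2] = (a + nxt) / 2          # interpolated in-between lines
    return np.moveaxis(out, 0, axis)

frame = np.random.rand(1080, 1440)     # anamorphic 1440x1080 source (H, W)
once  = double_1d(frame, axis=1)       # one direction only -> 2880x1080
twice = double_1d(once, axis=0)        # second pass -> 2880x2160
```

For the anamorphic case above you'd stop after `once`; square-pixel video gets both passes.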


About debanding.
RGB: if you check the shader code, you'll see that it converts from RGB to YCbCr and then back from YCbCr to RGB.
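The round trip is just a matrix multiply each way. A sketch of that, assuming a full-range BT.709 matrix purely for illustration (the actual shader may use a different matrix or range):

```python
import numpy as np

# Full-range BT.709 RGB -> YCbCr matrix (illustrative assumption)
M = np.array([[ 0.2126,  0.7152,  0.0722],   # Y
              [-0.1146, -0.3854,  0.5000],   # Cb
              [ 0.5000, -0.4542, -0.0458]])  # Cr
M_inv = np.linalg.inv(M)

def rgb_to_ycbcr(rgb):
    return rgb @ M.T

def ycbcr_to_rgb(ycc):
    return ycc @ M_inv.T

rgb = np.random.rand(8, 8, 3)
ycc = rgb_to_ycbcr(rgb)
# ...debanding would operate here, on the YCbCr representation...
back = ycbcr_to_rgb(ycc)   # round-trips up to float error
```

So even with RGB input the deband pass can work on luma/chroma internally, at the cost of the two conversions.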

Subsampling: I believe that DX9 doesn't support writing into 16-bit subsampled textures, so emulating it with 2 textures is not only painful, but will also make it slower (+25% pixels to write, and it doesn't matter that they have fewer channels).
Actually I don't believe it even supports subsampled 10/16-bit textures (this is probably the reason why we won't see 10-bit native DXVA in madVR).
So doing it after chroma upsampling is a hell of a lot easier and doesn't impact performance.
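The +25% figure is just pixel-count arithmetic; a back-of-the-envelope sketch, assuming 4:2:0 chroma emulated with a full-size luma texture plus a half-size two-channel chroma texture:

```python
# Pixel writes per frame for a W x H image (4:2:0 assumed):
W, H = 1920, 1080
single = W * H                    # one full-size texture (channel count is free)
luma   = W * H                    # emulation: full-size single-channel luma...
chroma = (W // 2) * (H // 2)      # ...plus half-size 2-channel chroma texture
ratio  = (luma + chroma) / single
print(ratio)                      # 1.25, i.e. +25% pixels to write
```

Pixel writes are what cost you here, not channels per pixel, which is why the two-texture emulation loses despite storing fewer total samples.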

Last edited by vivan; 21st February 2015 at 23:43.