Old 22nd February 2015, 11:30   #55  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Asmodian View Post
It means there are twice as many pixels, both horizontally and vertically; you can resolve details that are half the size.

When analyzing information the purpose of using the term "resolution" instead of "samples" is to signify the relative spacing of samples in a data set. In a 1D set twice the resolution is twice the samples, in a 2D set twice the resolution is 4 times the samples, in a 3D set it is 8 times the samples, 4D 16, 5D 32, etc. If you want to discuss the total number of samples instead use "samples", "pixels", or similar.
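The scaling described above can be sketched as a quick calculation (a minimal illustration, not from the original post; the 100-samples-per-axis starting point is my own assumption):

```python
# Doubling the resolution halves the sample spacing on every axis,
# so the total sample count grows by 2**d, where d is the number
# of dimensions of the data set.
def samples_after_doubling(samples_per_axis: int, dimensions: int) -> int:
    """Total samples when every axis gets twice the samples."""
    return (2 * samples_per_axis) ** dimensions

for d in range(1, 6):
    base = 100 ** d                           # original total samples
    doubled = samples_after_doubling(100, d)  # after doubling resolution
    print(f"{d}D: {doubled // base}x the samples")
```

Running this prints the 2x, 4x, 8x, 16x, 32x progression described above.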

If you have twice the resolution you expect to be able to resolve details at half the size. For a 2D space you need four times the number of pixels to resolve details at half the size. e.g. If the smallest filament you can resolve with an imaging system is 0.1 mm in diameter and you need to be able to resolve one with a diameter of 0.05 mm you need a sensor with double the resolution. It is not useful to raise the change in resolution required to the power of the number of possible axes when discussing a needed change in sampling. It simply removes meaning. Saying you need 8 times the resolution to resolve a 3D object at half the size is pointless, why not say you need 8 times the samples if that is how you want to express it?
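The filament example works out as simple arithmetic (the 0.1 mm and 0.05 mm figures are from the post; the rest is my sketch):

```python
# To resolve a detail half the size, you need double the resolution;
# for a 2D sensor that means four times as many pixels.
smallest_resolvable_mm = 0.1   # finest filament the current sensor resolves
target_mm = 0.05               # filament we need to resolve

resolution_factor = smallest_resolvable_mm / target_mm  # 2.0: double the resolution
pixel_factor = resolution_factor ** 2                   # 4.0: 2D sensor needs 4x the pixels

print(resolution_factor, pixel_factor)
```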

Resolution is a term used to describe data sets which are both a type of information themselves and contain information. The relative change in resolution equals the relative change in the number of samples only when talking about a data set with one dimension.

At this point I think I have expressed my opinion on the matter as well as I can.
I agree on all points. I think the problem we have is that 'resolution' on its own is a very ambiguous term. Are we talking about image resolution, pixel resolution, spatial resolution, or even resolution that has nothing to do with digital pixels, like my spectrum analyzer example?

Nvidia most likely labels the DSR factor as 4x to reflect the performance impact, since the GPU actually renders 4x as many pixels. For photography and video I can see why 2x makes sense. Using 4x when talking about scaling might give the wrong impression about expected quality; the size of the image is important to consider. It would benefit everyone if people were more descriptive, but as you already know, it is hard to get an entire industry to agree on something.
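The DSR point above can be made concrete with the two factor conventions side by side (the 1920x1080 native resolution is my example, not from the post):

```python
# Nvidia's "4x DSR" doubles the linear resolution on each axis,
# which quadruples the pixel count the GPU has to render.
native = (1920, 1080)
dsr = (native[0] * 2, native[1] * 2)  # 3840x2160: 2x per axis

linear_factor = dsr[0] / native[0]                      # 2.0 (the "2x" convention)
pixel_factor = (dsr[0] * dsr[1]) / (native[0] * native[1])  # 4.0 (the "4x" convention)

print(dsr, linear_factor, pixel_factor)
```

Both numbers describe the same change; they just answer different questions (sample spacing vs. total workload), which is exactly why the unqualified term is ambiguous.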