Welcome to Doom9's Forum, THE place to be for everyone interested in DVD conversion.

Old 21st February 2015, 16:10   #41  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 752
Quote:
Originally Posted by Arm3nian View Post
Maybe madshi can explain why he calls it image doubling, I'm sure he has his reasons. Probably because to properly double an image, you would have to change the horizontal and vertical together, it would make no sense to say I doubled a 1920x1080 image to 3840x1080 even though the amount of pixels is 2x.
To 'properly' double an image in that sense is impossible since √2 is irrational. Anyway, in almost all cases it is easier to talk about the scaling factor instead of the ratio of pixels. This also corresponds more closely to the technical meaning of resolution which is measured in pixels per inch.
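To make the arithmetic concrete, here is a small Python sketch (the helper name is made up for illustration):

```python
import math

def scaled_size(width, height, factor):
    """Scale both axes by the same factor, as madVR's doublers do."""
    return round(width * factor), round(height * factor)

# Doubling the scaling factor quadruples the pixel count:
w, h = scaled_size(1920, 1080, 2)
assert (w, h) == (3840, 2160)
assert w * h == 4 * 1920 * 1080

# To exactly double the *pixel count* instead, each axis would have to
# grow by sqrt(2), which is irrational, so no integer pixel grid fits:
print(1920 * math.sqrt(2))  # not a whole number of pixels
```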
Old 21st February 2015, 16:19   #42  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,965
Quote:
Originally Posted by Asmodian View Post
I did a little testing and chroma scaling still has to be done after (or before?) deinterlacing. I also know deinterlacing uses DXVA for video mode and it would make sense for it to accept 4:2:0 as that is the most common. Doing chroma scaling before deinterlacing doesn't make sense. MadVR will not deinterlace RGB sources.

But to be honest I do not feel all that confident about the placement of deinterlacing and debanding.
debanding works best before scaling, and the same goes for chroma scaling, so it only makes sense that debanding is done before chroma scaling.

you could add IVTC to deinterlacing. and you may add a comment that force film mode doesn't work with native DXVA
Old 21st February 2015, 16:38   #43  |  Link
sheppaul
Registered User
 
Join Date: Sep 2004
Posts: 146
Quote:
Originally Posted by Arm3nian View Post
Maybe madshi can explain why he calls it image doubling, I'm sure he has his reasons.
It comes from the intrinsic properties of the NNEDI3 filter. The filter is designed for that purpose.

Quote:
http://avisynth.nl/index.php/Nnedi3
nnedi3 is also very good for enlarging images by powers of 2,
Old 21st February 2015, 19:32   #44  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Shiandow View Post
To 'properly' double an image in that sense is impossible since √2 is irrational. Anyway, in almost all cases it is easier to talk about the scaling factor instead of the ratio of pixels. This also corresponds more closely to the technical meaning of resolution which is measured in pixels per inch.
Ppi is pixel density. The definition of resolution is the amount of information/detail in an image. If one image has 4x the information of another, it is 4x the resolution. Why break up the relation between the horizontal and vertical and call it 2x the resolution, when doubling in both directions implies you get 4x the original? Makes no sense.

Also in games for example, 4xSSAA is 2x in the horizontal and 2x in the vertical. But it is still called 4x, because the result is 4x greater.
Quote:
Originally Posted by sheppaul View Post
It's from intrinsic properties of Nnedi3 filter. The filter is designed for that purpose.
Well, 'power of 2' can mean anything. Maybe 2^2 (4x) is referred to as doubling and 2^4 (16x) as quadrupling, based on the amount of pixels they give.
Old 21st February 2015, 20:42   #45  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 497
No need to make this so complicated.

"Image doubling" doubles horizontal as well as vertical (width and height) pixels, while quadrubling is basically 2x "Image doubling" applied. And since this is directly connected to the way NNEDI3 works, you can 2x2x2x2.... as much as you would like with subsequent "doubles" (at least theoretically). But then again, even most of the current GPUs struggle with doubling (with at least NNEDI3 32 neurons). So, basically "to be continued"?

madshi always uses the "makes sense, why make it any more complicated than it already is" approach; there is no rocket science behind this, even though madshi usually comes up with pretty great ideas and surprises us.

Last edited by iSunrise; 21st February 2015 at 20:52.
Old 21st February 2015, 22:18   #46  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
Your chart appears logical to me. But I also question why the output would be converted to RGB, immediately converted back to YCbCr only to be converted back to RGB once again when image doubling. Are you sure this is correct?
Old 21st February 2015, 23:05   #47  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,965
you should ask madshi. but he clearly said it is converted back to YCbCr using BT 709.
Old 21st February 2015, 23:34   #48  |  Link
vivan
/人 ◕ ‿‿ ◕ 人\
 
Join Date: May 2011
Location: Russia
Posts: 649
I remember reading about it too. Probably keeping RGB between shaders is easier (from a development point of view), even though it's not efficient.

Quote:
Originally Posted by Arm3nian View Post
Also in games for example, 4xSSAA is 2x in the horizontal and 2x in the vertical. But it is still called 4x, because the result is 4x greater.
And some people call 4-tap filters (where 4 is the radius) 8-tap (diameter) or even 64-tap (total pixels sampled). Cause bigger numbers are better.

NNEDI is a 1-dimensional filter that doubles the resolution.
In the case of anamorphic video it's possible that it will run in only one direction (e.g. 16:9 1440x1080 with doubling set to always). In other cases it runs twice (once in each direction).
So you should think not about resolution, but about directions. It's just that keeping separate settings for the 2 directions doesn't make much sense, so they're combined.
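A rough sketch of that per-direction behaviour (the function and the pass logic are illustrative assumptions, not madVR's actual code):

```python
def nnedi_passes(src_w, src_h, target_w, target_h):
    """Run a 1-D doubling pass per axis only while that axis is
    below the target size."""
    w, h, passes = src_w, src_h, []
    while w < target_w:
        w, passes = w * 2, passes + ["horizontal"]
    while h < target_h:
        h, passes = h * 2, passes + ["vertical"]
    return (w, h), passes

# Anamorphic 1440x1080 shown at 1920x1080: only one direction needs doubling.
assert nnedi_passes(1440, 1080, 1920, 1080) == ((2880, 1080), ["horizontal"])
# A 960x540 source doubled to 1920x1080 runs once in each direction.
assert nnedi_passes(960, 540, 1920, 1080) == ((1920, 1080), ["horizontal", "vertical"])
```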


About debanding.
RGB: if you check the shader code, you'll see that it converts from RGB to YCbCr and then from YCbCr back to RGB.

Subsampling: I believe that DX9 doesn't support writing into 16-bit subsampled textures, so... Emulating it with 2 textures is not only painful, but will also make it slower (+25% pixels to write; it doesn't matter that they have fewer channels).
Actually I don't believe it even supports subsampled 10/16-bit textures (this is probably the reason why we won't see 10-bit native DXVA in madVR).
So doing it after chroma upsampling is a hell of a lot easier and doesn't impact performance.
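The +25% figure checks out, assuming one full-resolution luma plane plus one quarter-resolution packed-chroma plane:

```python
w, h = 1920, 1080
single_texture = w * h                      # one 4:4:4 texture write
two_textures = w * h + (w // 2) * (h // 2)  # luma plane + packed CbCr plane (4:2:0)
assert two_textures / single_texture == 1.25  # +25% pixels to write
```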

Last edited by vivan; 21st February 2015 at 23:43.
Old 21st February 2015, 23:57   #49  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 3,702
Quote:
Originally Posted by nevcairiel View Post
Deinterlacing has to be before chroma scaling, scaling interlaced chroma would be extra complexity and not make much sense, especially considering that GPUs prefer 4:2:0 in form of NV12 anyway.
Thanks, that makes me feel more confident in the placement of deinterlacing.

Quote:
Originally Posted by Arm3nian View Post
Ppi is pixel density. The definition of resolution is the amount of information/detail in an image. If one image has 4x the information of another, it is 4x the resolution. Why break up the relation between the horizontal and vertical and call it 2x the resolution, when doubling in both directions implies you get 4x the original? Makes no sense.
Because we are talking about spatial resolution. If an image has 4x the information of another, it is 2x the resolution. This is based on the definition of resolution used by everyone except digital camera makers, who want to be able to say they quadrupled the resolution when they really only doubled it.

Using your nomenclature 2x the resolution implies an irrational scaling factor of square root of 2 or only doubling one dimension. It sounds great for people trying to sell new models of digital cameras but no one else talks about resolution this way.

I am describing a standard so everyone uses the same words to mean the same thing. madshi obviously agrees given image doubling and as this thread is about madVR options let us stick to the terminology madVR uses. Everyone else who works with digital image processing thinks madshi got it right so this should not cause confusion. When you scale an image in Photoshop to "200%" you get 4 times the number of pixels.

Quote:
Originally Posted by Arm3nian View Post
Also in games for example, 4xSSAA is 2x in the horizontal and 2x in the vertical. But it is still called 4x, because the result is 4x greater.
No it isn't, that is 2x SSAA. 4x SSAA uses 16 times the pixels. 2x SSAA does not only filter the vertical or horizontal dimension and you cannot have square root of two pixel sampling.

I have not been able to find a statement from AMD or Nvidia but this is representative of what I was able to find:
Quote:
SSAA, or Super-Sample Anti-Aliasing is a brute force method of anti-aliasing. It results in the best image quality but comes at a tremendous resource cost. SSAA works by rendering the scene at a higher resolution. 2x SSAA renders the scene at twice the resolution along each axis (4x the pixel count), 4x SSAA renders the scene at four times the resolution along each axis (16x the pixel count), and 8x SSAA renders the scene at eight times the resolution along each axis (64x the pixel count). The final image is produced by downsampling the massive source image using an averaging filter. This acts as a low pass filter which removes the high frequency components that caused the jaggedness.
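The quoted scheme in code form (the base resolution is just an example):

```python
def ssaa_pixels(w, h, n):
    """NxSSAA renders at n times the resolution along each axis,
    i.e. n*n times the pixel count."""
    return (w * n) * (h * n)

base = 1920 * 1080
assert ssaa_pixels(1920, 1080, 2) == 4 * base   # 2x SSAA -> 4x pixels
assert ssaa_pixels(1920, 1080, 4) == 16 * base  # 4x SSAA -> 16x pixels
assert ssaa_pixels(1920, 1080, 8) == 64 * base  # 8x SSAA -> 64x pixels
```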
Quote:
Originally Posted by Warner306 View Post
Your chart appears logical to me. But I also question why the output would be converted to RGB, immediately converted back to YCbCr only to be converted back to RGB once again when image doubling. Are you sure this is correct?
I am fairly sure it is correct.

Quote:
Originally Posted by huhn View Post
you should ask madshi. but he clearly said it is converted back to YCbCr using BT 709.
I was looking for that quote but couldn't find it. This is also relevant and changes image doubling in my chart.

Quote:
Originally Posted by madshi View Post
Quote:
Originally Posted by cyberbeing View Post
And to make sure I'm no longer confused, what is the result of the following?

640x360 4:2:0 -> 1920x1080
Chroma Upscaling = NNEDI3
NNEDI3 double Luma = Enabled
NNEDI3 double Chroma = Enabled
NNEDI3 quadruple Luma = Enabled
NNEDI3 quadruple Chroma = Disabled

  • Conversion from 4:2:0 YcbCr (320x180) to 4:4:4 RGB (640x360) with 'chroma upscaling' setting NNEDI3
  • Conversion from 640x360 4:4:4 RGB -> 640x360 4:4:4 YCbCr.
  • Y & CbCr channels doubled to 1280x720 4:4:4 YCbCr with NNEDI3
  • Y channel only doubled to 2560x1440

Is it then:
  • CbCr channels upscaled 1280x720->1920x1080 with 'image upscaling' setting
  • Y channel downscaled 2560x1440->1920x1080 with 'image downscaling' setting
  • Conversion from 4:4:4 YCbCr to RGB

Or is it:
  • CbCr channels upscaled 1280x720->2560x1440 with 'image upscaling' setting
  • Conversion from 4:4:4 YCbCr to RGB
  • 2560x1440 RGB -> 1920x1080 RGB with 'image downscaling' setting
It is the first ("CbCr channels upscaled 1280x720->1920x1080 with 'image upscaling' setting").
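For the record, the confirmed order for this example can be traced step by step (a sketch; the step labels are mine, not madVR internals):

```python
# Each entry: (processing step, resulting luma size, resulting chroma size)
chain = [
    ("chroma upscaling (NNEDI3), 4:2:0 -> 4:4:4 RGB", (640, 360), (640, 360)),
    ("convert RGB -> 4:4:4 YCbCr", (640, 360), (640, 360)),
    ("NNEDI3 double Y and CbCr", (1280, 720), (1280, 720)),
    ("NNEDI3 quadruple Y only", (2560, 1440), (1280, 720)),
    ("'image upscaling' on CbCr", (2560, 1440), (1920, 1080)),
    ("'image downscaling' on Y", (1920, 1080), (1920, 1080)),
    ("convert 4:4:4 YCbCr -> RGB", (1920, 1080), (1920, 1080)),
]
for step, luma, chroma in chain:
    print(f"{step}: luma {luma[0]}x{luma[1]}, chroma {chroma[0]}x{chroma[1]}")
# Both channels end up at the 1920x1080 target:
assert chain[-1][1] == chain[-1][2] == (1920, 1080)
```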
Quote:
Originally Posted by vivan View Post
About debanding.
RGB: if you check shader code, you'll see that it converts from rgb to ycbcr and then from ycbcr to rgb.

Subsampling: I believe that DX9 doesn't support writing into 16-bit subsampled textures, so... Emulating it with 2 textures is not only painful, but will also make it slower (+25% pixels to write; it doesn't matter that they have fewer channels).
Actually I don't believe it even supports subsampled 10/16-bit textures (this is probably the reason why we won't see 10-bit native DXVA in madVR).
So doing it after chroma upsampling is a hell of a lot easier and doesn't impact performance.
Ok thanks again, that also makes sense. Debanding happens after conversion to RGB as well since it is converting from RGB.

Last edited by Asmodian; 22nd February 2015 at 00:43.
Old 22nd February 2015, 01:44   #50  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Asmodian View Post
Because we are talking about spatial resolution. If an image has 4x the information of another, it is 2x the resolution. This is based on the definition of resolution used by everyone except digital camera makers, who want to be able to say they quadrupled the resolution when they really only doubled it.

Using your nomenclature 2x the resolution implies an irrational scaling factor of square root of 2 or only doubling one dimension. It sounds great for people trying to sell new models of digital cameras but no one else talks about resolution this way.

I am describing a standard so everyone uses the same words to mean the same thing. madshi obviously agrees given image doubling and as this thread is about madVR options let us stick to the terminology madVR uses. Everyone else who works with digital image processing thinks madshi got it right so this should not cause confusion. When you scale an image in Photoshop to "200%" you get 4 times the number of pixels.



No it isn't, that is 2x SSAA. 4x SSAA uses 16 times the pixels. 2x SSAA does not only filter the vertical or horizontal dimension and you cannot have square root of two pixel sampling.

I have not been able to find a statement from AMD or Nvidia but this is representative of what I was able to find:
Replace resolution with 'information'. If something has 4x the information of another thing, saying it has 2x instead makes no sense. If you want to say 2x the resolution, then you would have to independently define and state that resolution x is 2 times greater horizontally and 2 times greater vertically than resolution y.

You are correct that we are talking about spatial resolution. If an image is 3840x2160, but its spatial resolution is lower, meaning the pixels are much more spread out, it might look the same as the same image at 1920x1080 at the correct distance. But the 3840x2160 image still has more information; it is just displayed incorrectly/differently.

Photoshop refers to 200% as 4x the pixels because it scales the horizontal and vertical together. Just like NNEDI3. If it only scaled in one dimension, what would you call it? What is 3840x1080 compared to 1920x1080, using your definition? You can't describe it. But you can say it is 2x the resolution, and 3840x2160 is 4x the resolution. It makes sense to refer to "scaling" as 2x, because you scale both directions together. But it does not make sense to refer to the entire resolution as 2x.

In triple A games, 4xSSAA is 4x the resolution. If 4xSSAA was 16x the pixels, it wouldn't exist in any games, you would need 4 gtx 980s running at 2200mhz cooled by ln2, obviously not realistic to implement.

Also, check the dynamic super resolution in your nvidia control panel. I am currently on a 1920x1200 monitor, and 4x the resolution is referred to as 3840x2400. Same multiplier scale is used for all the other dynamic super resolutions. 2x the resolution is 2715x1697.
Picture here: http://i.imgur.com/ID3XoBc.png
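That 2x DSR value is consistent with DSR factors multiplying the total pixel count, so each axis scales by the square root of the factor (a sketch, using the desktop size from the screenshot):

```python
import math

def dsr_resolution(w, h, factor):
    """DSR factors scale the total pixel count, so each axis is
    multiplied by sqrt(factor)."""
    return round(w * math.sqrt(factor)), round(h * math.sqrt(factor))

assert dsr_resolution(1920, 1200, 2) == (2715, 1697)  # matches the 2x entry
assert dsr_resolution(1920, 1200, 4) == (3840, 2400)  # matches the 4x entry
```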

Quote:
Originally Posted by vivan View Post
So you should think not about resolution, but about directions. It's just that keeping separate settings for 2 directions doesn't make much sence, so they're combined.
Questionable. Nvidia Inspector, as shown in my next post, allows you to configure the horizontal and vertical independently of each other. This can provide better image quality without destroying your frame rate, as the picture rendered at the higher resolution is downsampled. Obviously this might not work properly in some games, or madVR for that matter, which is why it isn't an option in the applications themselves, but rather in the drivers. A 1920x1080 image scaled by 4x is now 400% larger. Photoshop refers to it as 200% because Photoshop is an image editing tool. You can select to scale only the horizontal or vertical by 200%; that doesn't mean the amount of pixels increased fourfold. The 200% that results in 4x the pixels just means you increased the resolution by 2 in both directions.

Last edited by Arm3nian; 22nd February 2015 at 02:37.
Old 22nd February 2015, 02:07   #51  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Also check this: http://i.imgur.com/7VaN0Tc.png

Notice how SSAA is referred to in both the horizontal and vertical resolution. 2*2=4, so 2x2SSAA is shortened to 4xSSAA. This can be confirmed by looking below at the combined SSAA and MSAA options. 32xS is 2x2SSAA + 8xMSAA: 2*2=4, 4*8=32. This is the most logical method of labeling the values.

Last edited by Arm3nian; 22nd February 2015 at 02:08. Reason: big picture sorry
Old 22nd February 2015, 02:39   #52  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 3,702
Quote:
Originally Posted by Arm3nian View Post
Replace resolution with 'information'. If something has 4x the information of another thing, saying it has 2x instead makes no sense. If you want to say 2x the resolution, then you would have to independantly define and state that resolution x is 2 times greater horizontally and 2 times greater vertically than resolution y.
This is exactly my point. The shorthand "2x the resolution" means 2 times greater horizontally and 2 times greater vertically. You cannot replace resolution with information; they are not synonymous.

To be precise "2x the resolution" is ambiguous, it could have either interpretation. It would be more precise to say "2x the resolution horizontally and vertically" or "twice the number of pixels".

There are legitimate reasons to talk about resolution as the total number of pixels, but that is not the convention normally used when discussing digital images and video (unless you are a marketing department).

2x scaling gives 2x the resolution (Nvidia's DSR notwithstanding, they only want to describe the performance hit and do not care about conventions).

Anyway, I am happy to agree to disagree; just know that if I say "2x the resolution" I mean 4 times the number of pixels.
Old 22nd February 2015, 03:33   #53  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Asmodian View Post
This is exactly my point. The shorthand "2x the resolution" means 2 times greater horizontally and 2 times greater vertically. You cannot replace resolution with information; they are not synonymous.

To be precise "2x the resolution" is ambiguous, it could have either interpretation. It would be more precise to say "2x the resolution horizontally and vertically" or "twice the number of pixels".

There are reasonable reasons to talk about resolution as the total number of pixels but that is not the convention normally used when discussing digital images and video (unless you are a marketing department).

2x scaling gives 2x the resolution (Nvidia's DSR notwithstanding, they only want to describe the performance hit and do not care about conventions).

Anyway, I am happy to agree to disagree; just know that if I say "2x the resolution" I mean 4 times the number of pixels.
I'm just going off the fact that the term resolution was created to describe information. You can say that 3840x2160 is 2x the resolution of 1920x1080, but what does that mean? It is a useless definition if resolution does not imply information. Maybe it does make sense when strictly talking about photography or video, but that is because we already know doubling the resolution in both directions leads to 4x the pixels, and are just trying to talk about the size. Resolution on its own is a meaningless value if not talking about information. In engineering, the resolution bandwidth of a spectrum analyzer for example implies more samples, therefore shows more information.
Old 22nd February 2015, 11:04   #54  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 3,702
Quote:
Originally Posted by Arm3nian View Post
I'm just going off the fact that the term resolution was created to describe information. You can say that 3840x2160 is 2x the resolution of 1920x1080, but what does that mean?
It means there are twice as many pixels, both horizontally and vertically; you can resolve details that are half the size.

Quote:
Originally Posted by Arm3nian View Post
It is a useless definition if resolution does not imply information. Maybe it does make sense when strictly talking about photography or video, but that is because we already know doubling the resolution in both directions leads to 4x the pixels, and are just trying to talk about the size. Resolution on its own is a meaningless value if not talking about information. In engineering, the resolution bandwidth of a spectrum analyzer for example implies more samples, therefore shows more information.
When analyzing information the purpose of using the term "resolution" instead of "samples" is to signify the relative spacing of samples in a data set. In a 1D set twice the resolution is twice the samples, in a 2D set twice the resolution is 4 times the samples, in a 3D set it is 8 times the samples, 4D 16, 5D 32, etc. If you want to discuss the total number of samples instead use "samples", "pixels", or similar.

If you have twice the resolution you expect to be able to resolve details at half the size. For a 2D space you need four times the number of pixels to resolve details at half the size. e.g. If the smallest filament you can resolve with an imaging system is 0.1 mm in diameter and you need to be able to resolve one with a diameter of 0.05 mm you need a sensor with double the resolution. It is not useful to raise the change in resolution required to the power of the number of possible axes when discussing a needed change in sampling. It simply removes meaning. Saying you need 8 times the resolution to resolve a 3D object at half the size is pointless, why not say you need 8 times the samples if that is how you want to express it?

Resolution is a term used to describe data sets which are both a type of information themselves and contain information. The relative change in resolution equals the relative change in the number of samples only when talking about a data set with one dimension.
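The rule above as a one-liner (illustrative):

```python
def sample_ratio(resolution_ratio, dimensions):
    """A change in resolution changes the sample count by
    resolution_ratio ** dimensions."""
    return resolution_ratio ** dimensions

assert sample_ratio(2, 1) == 2    # 1-D: twice the resolution, twice the samples
assert sample_ratio(2, 2) == 4    # 2-D image: 4x the pixels
assert sample_ratio(2, 3) == 8    # 3-D volume: 8x the samples
assert sample_ratio(2, 4) == 16   # 4-D: 16x
```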

At this point I think I have expressed my opinion on the matter as well as I can.
Old 22nd February 2015, 11:30   #55  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Asmodian View Post
It means there are twice as many pixels, both horizontally and vertically; you can resolve details that are half the size.



When analyzing information the purpose of using the term "resolution" instead of "samples" is to signify the relative spacing of samples in a data set. In a 1D set twice the resolution is twice the samples, in a 2D set twice the resolution is 4 times the samples, in a 3D set it is 8 times the samples, 4D 16, 5D 32, etc. If you want to discuss the total number of samples instead use "samples", "pixels", or similar.

If you have twice the resolution you expect to be able to resolve details at half the size. For a 2D space you need four times the number of pixels to resolve details at half the size. e.g. If the smallest filament you can resolve with an imaging system is 0.1 mm in diameter and you need to be able to resolve one with a diameter of 0.05 mm you need a sensor with double the resolution. It is not useful to raise the change in resolution required to the power of the number of possible axes when discussing a needed change in sampling. It simply removes meaning. Saying you need 8 times the resolution to resolve a 3D object at half the size is pointless, why not say you need 8 times the samples if that is how you want to express it?

Resolution is a term used to describe data sets which are both a type of information themselves and contain information. The relative change in resolution equals the relative change in the number of samples only when talking about a data set with one dimension.

At this point I think I have expressed my opinion on the matter as well as I can.
I agree on all points. I think the problem we have is that 'resolution' on its own is a very ambiguous term. Are we talking about image resolution, pixel resolution, spatial resolution, or even resolution that has nothing to do with digital pixels, like my spectrum analyzer example?

Nvidia most likely uses 4x in DSR to show the performance impact, as the GPU will render 4x the amount of pixels. For photography and video I see why 2x would make sense; 4x when talking about scaling might give the wrong impression of the expected quality. The size of the image is important to consider. It would be to the benefit of all if everyone was more descriptive, but as you already know, it is hard to get an entire industry to agree on something.
Old 23rd February 2015, 17:20   #56  |  Link
newguy1
Registered User
 
Join Date: Jan 2015
Posts: 2
Isn't the whole 2x or 4x resolution an industry standard that the 4k tv decided to overlook?

You know, a standard? Like from a textbook?
Old 24th February 2015, 03:02   #57  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
Quote:
Originally Posted by newguy1 View Post
Isn't the whole 2x or 4x resolution an industry standard that the 4k tv decided to overlook?

You know, a standard? Like from a textbook?
The marketing always seems to skew towards selling more televisions as opposed to providing technical information, but I don't know why this simple topic has to cover so many consecutive posts.

A 1080p source has 2.25x the number of pixels of a 720p source. However, the actual increase in vertical resolution (or pixels per inch) is only 1.5x. The amount of dots per inch is a better indication of quality than the total number of pixels.

Moving from 1080p to 4K, the number of pixels increases 4x, but the scaling factor improves by a mere 2x.
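Both comparisons in one quick check (illustrative Python):

```python
def pixel_and_linear_ratio(w1, h1, w2, h2):
    """Total pixel-count ratio vs. per-axis (linear) scaling factor."""
    return (w2 * h2) / (w1 * h1), w2 / w1

assert pixel_and_linear_ratio(1280, 720, 1920, 1080) == (2.25, 1.5)  # 720p -> 1080p
assert pixel_and_linear_ratio(1920, 1080, 3840, 2160) == (4.0, 2.0)  # 1080p -> 4K
```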

A better question is whether anyone can resolve the extra pixels per inch from their seating distance, which they probably cannot.
Old 24th February 2015, 05:36   #58  |  Link
Arm3nian
Registered User
 
Join Date: Jul 2014
Location: Las Vegas
Posts: 177
Quote:
Originally Posted by Warner306 View Post
The marketing always seems to skew towards selling more televisions as opposed to providing technical information, but I don't know why this simple topic has to cover so many consecutive posts.

A 1080p source has 2.25x the number of pixels of a 720p source. However, the actual increase in vertical resolution (or pixels per inch) is only 1.5x. The amount of dots per inch is a better indication of quality than the total number of pixels.

Moving from 1080p to 4K, the number of pixels increases to 4x, but the scaling factor only improves to a mere 2x.

A better question is whether anyone can resolve the extra pixels per inch from their seating distance, which they probably cannot.
Everyone in this thread understands 4x vs 2x. We were just debating the correct terminology. In my first post I mentioned 3840x2160 is 4x the resolution of 1920x1080. Saying 2x horizontal and 2x vertical resolution is the same. 2x scaling just makes it confusing. 3840x2160 is 4x more demanding than 1920x1080, so we should refer to it as 4x unless otherwise specified that you are talking about the pixel count in one direction.

Think about a water bottle. If I have a water bottle that is 1ft tall, and I increase it to 2ft, I now have a water bottle that is 2x bigger, because it holds 2x the water. If I increase it to 4ft, then I have a water bottle that holds 4x as much water as the initial bottle, making it 4x bigger. Now let's say I invent a gun that makes objects 2x bigger and I shoot my water bottle. Is it going to make it only 2x taller? No, it is going to multiply ALL dimensions by a factor of 2. Now the water bottle does not hold 2x as much water, it holds 8x as much water. So in theory, increasing something by a factor of 2 is relative to the number of dimensions. In 1D, a factor of 2 increases the size by 2; in 2D, a factor of 2 increases the size by 4; and in 3D, a factor of 2 increases the size by 8. In other words, it is more precise to refer to these things by what matters: for resolution, the pixel count, which directly relates to the performance; and in the case of the water bottle, the volume, which directly reflects the amount of water it can hold. Making something 2x bigger just implies that the actual value did not increase linearly, so why not refer to the actual value directly... makes it more simple, and more logical.

Resolution is just another word for pixel count. A resolution of 2 means 2 pixels. A resolution of 3840x2160 means 8294400 pixels. The image is just the object. The resolution is the quantity being defined. Just like a water bottle is the object and the volume is what is being defined. This is why madshi refers to it as image doubling, because making an image 2x bigger creates 4x the pixels. Saying the resolution doubled from 1920x1080 to 3840x2160 makes no sense, because it quadrupled. The image itself doubled.

4k might provide an increase in perceived image quality on a TV, but it is more noticeable on a monitor to me since I sit close.

Last edited by Arm3nian; 26th February 2015 at 08:41.
Old 25th February 2015, 05:40   #59  |  Link
resides
Registered User
 
Join Date: Sep 2012
Posts: 7
Thanks for the write up.

Definitely helped me and clarified things too.
Old 26th February 2015, 03:52   #60  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 3,702
Quote:
Originally Posted by huhn View Post
you could add IVTC to deinterlacing. and you my add an comment that force film mode doesn't work with native DXVA
I never use DXVA decoding but I feel that adding notes on its ramifications where appropriate could be helpful. Are there any other details or issues you know of with DXVA native decoding? Thanks.

Quote:
Originally Posted by resides View Post
Thanks for the write up.

Definitely helped me and clarified things too.
Thanks, I am glad it helped a bit.

Last edited by Asmodian; 26th February 2015 at 04:02.