Old 1st July 2015, 14:52   #31461  |  Link
iSunrise
Quote:
Originally Posted by j5627429 View Post
...Regarding Fury X vs. 980ti, it seems like a no brainer. When spending $650 on a video card today, you probably want it to be HDMI2.0/HDCP 2.2 compatible to ensure it is future-proof. GTX960 and 980ti are, FuryX isn't. From what I've read recently, AMD's Fury is no different in 4K output format than my 3 year old 7970.
Personally, I'm trying to decide between the 960 and 980ti. Although 980ti does not have hardware h.265 decode for when we get 4K content, if it allows significantly higher quality settings for 1080>4K without any stuttering/glitches in playback, it is a sacrifice I'd be willing to make.
We already discussed this in another thread: the Fury X seems to be limited to HDMI 1.4a, which supports 4K at up to 30fps just fine. That may be OK for local playback at the moment, but it's not future-proof. Apart from that, the Fury X has a lot more raw shading power (at 1221MHz you get 10 TFLOPs SP) than the 980Ti, which is a cut-down GM200 (Titan X). ALU-limited algorithms within madVR would therefore probably be faster on the Fury, but that depends on the algorithms and their API; madshi uses various APIs to optimize specific cases (e.g. NNEDI3, dithering). The Fury X also has a lot of raw bandwidth (512GB/s without overclocking), which can also be relevant.

So it depends on what you want. If you are fine with 4K at a maximum of 30fps, the Fury X should be just fine for you.

Otherwise it really depends on your use case. We would need a comparison between the Fury X and a 980 Ti with madVR itself, since game benchmarks can be misleading. I am also not entirely sure about the 980 Ti's HDCP 2.2 support, since several sites claimed that only the GTX 960 supports both HDMI 2.0 and HDCP 2.2.
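
For rough context, here is a back-of-envelope calculation of the theoretical numbers in question, using public launch specs (the 1221MHz figure is the overclock quoted above, everything else is stock; none of this says anything about real madVR rendering times):

[CODE]
# Back-of-envelope peak numbers from public launch specs -- not madVR measurements.
def tflops_sp(shaders, clock_mhz):
    # 2 FLOPs per shader per clock (fused multiply-add)
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print("Fury X @ 1050 MHz:", round(tflops_sp(4096, 1050), 1), "TFLOPs")  # ~8.6
print("Fury X @ 1221 MHz:", round(tflops_sp(4096, 1221), 1), "TFLOPs")  # ~10.0 (the OC figure above)
print("980 Ti @ 1000 MHz:", round(tflops_sp(2816, 1000), 1), "TFLOPs")  # ~5.6 (boosts higher in practice)

# Memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps per pin)
print("Fury X HBM:  ", 4096 / 8 * 1.0, "GB/s")  # 512 GB/s
print("980 Ti GDDR5:", 384 / 8 * 7.0, "GB/s")   # 336 GB/s
[/CODE]

Raw throughput only, of course; whether an ALU-limited madVR algorithm actually runs faster depends on the driver and the code path, as noted above.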

Old 1st July 2015, 16:24   #31462  |  Link
aufkrawall
In benchmarks, the Fury X oddly seems to be heavily bandwidth limited: raising the core clock hardly scales at all, while raising the HBM clock gives a significant boost.
There is currently also the beeping noise that annoys many people.
With NNEDI3 the card is probably also limited by the interop copy-back in the Catalyst driver.
Well, its VPU is superior to GM200's, but otherwise there are a whole lot of drawbacks.
Old 1st July 2015, 16:42   #31463  |  Link
QBhd
Quote:
Originally Posted by James Freeman View Post
Care to elaborate?...

The difference between a 20W and a 1000W system is VERY significant.
Quote:
Originally Posted by noee View Post
Yeah, I'd like to know this too. Even my solar/battery setup costs me in terms of battery/panel maintenance and replacement, never mind the losses in the DC-AC inverter. Proper batteries and controllers are not cheap.

What I've found with madVR is that I can use the CPU for decoding all sources and max out my GPU(s) for PQ, and this seems to give a reasonable power draw with fantastic PQ.

It's simple: electricity is included in my rent, so I can use as much as I want.

Quote:
Originally Posted by Anime Viewer View Post
If you're running with your GPU at full load with the fan(s) at full bore, and the render times are on the edge of going over your vsync interval, then you're likely not getting the best picture quality (not all the time).

Running your system at full throttle is also going to wear your hardware components faster than running things under control, so your components'/system's lifespan will be shorter.
Well, NNEDI3 has won with all the content I have thrown at madVR, for both chroma and image... so yes, I am getting the best PQ to my eyes by maxing out GPU performance (128 neurons is ALWAYS going to be better than 64 if you can push it that far).

As to the longevity of components... computers are made to be used, and periodic bursts of 90%+ usage are not going to shorten their lifespan by any significant amount. Turning your computer off after using it will shorten its life far quicker than pushing it to its limits. In 15+ years of running my PCs 24/7 (while gaming and encoding, and now using madVR) I have had just one component fail, and that was a very old motherboard whose old-style capacitors finally blew up due to extreme old age.

QB
Old 1st July 2015, 16:49   #31464  |  Link
littleD
Quote:
Originally Posted by RyuzakiL View Post
I hope we get some madVR benchmarks (using max NNEDI3 settings) from the guys who just purchased said monsters, hehe.

Hmm... I guess the Fury X will be the better buy?
Great idea!
That fits my idea too.
There are so many different videos that can break our settings and stutter because our GPU is too slow. What about a benchmark, and then adaptive scaling settings based on that benchmark? Let the graphics card use the best scaling algorithm it can handle given its power, and avoid situations where a 4K or 60fps video becomes unwatchable. If madVR automatically switched to faster scaling, the user would still be happy. This would make madVR more user-friendly than ever.
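
madVR does not expose anything like this today, so purely as a sketch of the idea (the scaler list and the timings are hypothetical placeholders, not real madVR data):

[CODE]
# Hypothetical "adaptive scaling" sketch: pick the highest-quality scaler whose measured
# render time still fits the frame budget. Names and timings are invented for illustration.
SCALERS = ["NNEDI3 128", "NNEDI3 64", "NNEDI3 32", "Jinc", "Lanczos3", "Bilinear"]  # best -> fastest

def pick_scaler(measured_ms, frame_budget_ms, headroom=0.85):
    """Return the first (best) scaler that stays comfortably under the frame time."""
    for scaler in SCALERS:
        if measured_ms[scaler] < frame_budget_ms * headroom:
            return scaler
    return SCALERS[-1]  # nothing fits, fall back to the cheapest option

# Example: 60 fps content -> ~16.7 ms budget; the timings below are made up.
timings = {"NNEDI3 128": 45.0, "NNEDI3 64": 24.0, "NNEDI3 32": 13.0,
           "Jinc": 6.0, "Lanczos3": 4.0, "Bilinear": 1.5}
print(pick_scaler(timings, 1000 / 60))  # -> NNEDI3 32
[/CODE]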
Old 1st July 2015, 17:11   #31465  |  Link
huhn
I can already see the complaints about too-loud GPUs...

And it's not like there is a single best scaling algorithm anyway. The next issue is that with NNEDI3 an AMD GPU can easily run into problems at 70% GPU usage.
Old 1st July 2015, 17:22   #31466  |  Link
QBhd
There is no one-size-fits-all, there is no "best" (one person's NNEDI3 32 neurons is another's Lanczos8), and there is certainly no point in benchmarks and adaptive settings.

QB
Old 1st July 2015, 17:32   #31467  |  Link
Catclaw
Quote:
Originally Posted by Anime Viewer View Post
If you're running with your GPU at full load with the fan(s) at full bore, and the render times are on the edge of going over your vsync interval, then you're likely not getting the best picture quality (not all the time). Just because you're using a performance-draining setting doesn't mean you'll get the best quality. Different options give different image output. While a video of real-life people/environments may look good with Jinc, other content (with drawn lines) may look better with Mitchell-Netravali. Some of the more taxing settings can produce artifacts, ringing, or aliasing. The second a scene gets more taxing than what your system can normally handle (be it lighting effects, panning scenes, fancy subtitles, etc.) you'll get a dropped frame or a presentation glitch, and your picture/video will no longer be smooth and fluid.
Running your system at full throttle is also going to wear your hardware components faster than running things under control, so your components'/system's lifespan will be shorter.
I was under the assumption that render times had to be below the movie frame interval, not the vsync interval. Am I wrong?
Old 1st July 2015, 17:44   #31468  |  Link
Asmodian
Quote:
Originally Posted by Catclaw View Post
I was under the assumption that render times had to be below the movie frame interval, not the vsync interval. Am I wrong?
No, you are correct.
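
To put numbers on it, a quick back-of-envelope (assuming a 23.976 fps film on a 60 Hz display):

[CODE]
# Rendering has to stay under the *source* frame interval, not the display's vsync interval;
# madVR's render queue absorbs the mismatch between the two.
movie_fps  = 24000 / 1001   # 23.976 fps film
refresh_hz = 60             # display refresh rate

frame_interval_ms = 1000 / movie_fps    # ~41.7 ms -> the actual rendering budget
vsync_interval_ms = 1000 / refresh_hz   # ~16.7 ms -> presentation only

print(round(frame_interval_ms, 1), "ms per frame vs", round(vsync_interval_ms, 1), "ms per vsync")
[/CODE]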
Old 1st July 2015, 17:51   #31469  |  Link
Asmodian
Quote:
Originally Posted by j5627429 View Post
Does madVR actually take advantage of CrossFire/SLI?
madVR hates CrossFire/SLI; it runs slower with it enabled.
Old 1st July 2015, 18:56   #31470  |  Link
XMonarchY
Quote:
Originally Posted by nevcairiel View Post
It's not backwards. It doesn't require more bandwidth, but previous versions of HDMI just didn't specify 4:2:0 support. HDMI 2.0 does specify it, so HDMI 2.0 is required.

However, because it doesn't need more bandwidth, NVIDIA GPUs can do this on HDMI 1.4 transmitters - assuming the display supports HDMI 2.0 and understands the signal.

You have to distinguish between the HDMI transmitter speed and the protocol version. An HDMI 1.4 transmitter can be upgraded via software to get HDMI 2.0 features like 4:2:0, but software cannot add more bandwidth.
I was talking about quality. Wouldn't 4:2:0 look worse than 4:2:2, and surely worse than 4:4:4? If it looks worse, then why include it in HDMI 2.0 anyway?!

Old 1st July 2015, 19:06   #31471  |  Link
huhn
Quote:
Originally Posted by XMonarchY View Post
I was talking about quality. Wouldn't 4:2:0 look worse than 4:2:2, and surely worse than 4:4:4?
8-bit 4:2:0 is 12 bpp, 8-bit 4:2:2 is 16 bpp, and 4:4:4 is 24 bpp, so yes.

Cb and Cr simply have an even lower resolution with 4:2:0.

But you were talking about bandwidth, weren't you?

EDIT:
Quote:
If it looks worse, then why include it in HDMI 2.0 anyway?!
Bandwidth.

HDMI 2.0 can't even do 10-bit UHD 4:4:4 at 60 Hz. It's just terrible. Of course DP 1.2 can do that, and it is way, way older.
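
The raw numbers behind that, for reference (UHD at 60 Hz uses a 594 MHz pixel clock per the standard CTA-861 timing; HDMI 2.0 carries 18 Gbps of which 80% is payload after 8b/10b coding, HDMI 1.4 carries 10.2 Gbps):

[CODE]
# UHD @ 60 Hz uses a 594 MHz pixel clock (standard timing, blanking included).
PIXEL_CLOCK = 594e6
HDMI20_PAYLOAD = 18.0e9 * 8 / 10   # ~14.4 Gbps after 8b/10b coding
HDMI14_PAYLOAD = 10.2e9 * 8 / 10   # ~8.16 Gbps

for name, bpp in [("8-bit 4:2:0", 12), ("8-bit 4:2:2", 16), ("8-bit 4:4:4", 24), ("10-bit 4:4:4", 30)]:
    needed = PIXEL_CLOCK * bpp
    print(f"{name}: {needed / 1e9:5.2f} Gbps  "
          f"HDMI 2.0: {'ok' if needed <= HDMI20_PAYLOAD else 'no'}  "
          f"HDMI 1.4: {'ok' if needed <= HDMI14_PAYLOAD else 'no'}")

# 8-bit 4:2:0 even fits within HDMI 1.4's payload budget, which is why a 1.4 transmitter
# can be taught to output UHD 60 Hz 4:2:0; 10-bit 4:4:4 at 60 Hz does not fit HDMI 2.0.
[/CODE]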

Old 1st July 2015, 19:16   #31472  |  Link
vivan
Quote:
Originally Posted by XMonarchY View Post
I was talking about quality. Wouldn't 4:2:0 look worse than 4:2:2, and surely worse than 4:4:4? If it looks worse, then why include it in HDMI 2.0 anyway?!
To deliver video as-is without wasting bandwidth (of course this is only true when you're using a standalone player, not a PC with madVR, which can upsample chroma and do other processing better than the TV).

Old 1st July 2015, 20:20   #31473  |  Link
Arm3nian
Quote:
Originally Posted by Anime Viewer View Post
I think a benchmark program would be a waste of time. It would tell you speed under certain conditions, but with so many options/variables available in madVR, no two people are likely to be comparing under the exact same settings. Additionally, some systems would be able to support some settings but not others. A lot of systems would likely either crash themselves or crash the benchmark program trying to run settings that are either incompatible or too high.
The benchmark should have predetermined tests that require the majority of the GPU's horsepower. There is no point in a benchmark as a comparison tool if each user chooses their own settings. The majority of users run a DX11 GPU, and the majority of those run an NVIDIA GPU. Plus, there aren't many options that are supported on NVIDIA but not on AMD or Intel's iGPU. Also, why would a stable system crash running demanding settings? That's not how processors work.

Quote:
Originally Posted by Anime Viewer View Post
madVR is mainly about getting the best image quality while not straining the system on the power/performance/longevity side. A benchmark program isn't going to tell us what looks best on our systems (which is what we've been testing and reporting on, under controlled conditions, all this time).
Maybe for you since you're on a laptop and don't want to melt a hole through your desk or legs.

Quote:
Originally Posted by James Freeman View Post
Why do you need a benchmark program? As stated by others, settings are an individual thing per user.

Just keep the rendering time under the frame time:
1 / frame rate = frame time = maximum rendering time before dropped frames.
Quote:
Originally Posted by QBhd View Post
There is no one-size-fits-all, there is no "best" (one person's NNEDI3 32 neurons is another's Lanczos8), and there is certainly no point in benchmarks and adaptive settings.

QB
You guys are missing the point. A benchmark would not be used to find optimal settings. It would be used to show what a certain GPU is capable of. There is a post every day asking "I want to run 'x' setting, will GPU 'y' be good enough?" With a benchmark you could easily tell whether the Fury X can handle doubling to 4K rather than guessing. When buying a GPU, you look at benchmarks. The benchmarks show various applications, from CAD to games. Not only does this show GPU performance in a certain program, it shows performance relative to other cards. Why not add madVR to that list? Maybe you're looking to buy a 970 to run 128 neurons, but think about buying a 980 instead to run 256. Then you purchase the 980, find out it can't handle it, and end up wasting money because you're running the same settings you could have run on the cheaper card. There are countless uses for a madVR benchmark. The problem is implementing it, and that's up to madshi.
Old 1st July 2015, 21:41   #31474  |  Link
Asmodian
How would a benchmark differ from simply trying it? In your examples you don't need a benchmark tool (what would it do?); set the setting and see what the rendering time is. The complexity of the possible settings is the only issue, and if you define what "128 neuron doubling" means for all the other settings, source resolution, destination resolution, etc., it is easy to "benchmark" any GPU you own at "128 neuron doubling".

Like any GPU benchmark, the results are highly dependent on the settings used, and different users/sites use different settings, so it is hard to compare results between users. Maybe a tool that played a stock video using a collection of preset options with a particular player and at a specific resolution? It could report the average/min/max rendering times at each setting. This would be easy to do now, no special tool needed, but maybe review sites would include madVR performance if such a tool existed. However, it sounds like a lot of work, certainly more work than the benefit justifies before version 1.0 of madVR.
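
For what it's worth, the reporting half of that is trivial; a sketch of what such a script could print, given rendering times read off the debug OSD or a log (the timings below are placeholders, since madVR exposes no API for collecting them):

[CODE]
# Minimal sketch of "report avg/min/max rendering time per preset". madVR has no API
# for gathering these numbers, so the values here are placeholders typed in by hand.
from statistics import mean

results_ms = {
    "Jinc chroma + Lanczos3 image":       [4.1, 4.3, 5.0, 4.2],
    "NNEDI3 64 doubling, 1080p -> 2160p": [21.5, 22.0, 25.3, 21.8],
}

frame_budget_ms = 1000 / (24000 / 1001)   # 23.976 fps source -> ~41.7 ms

for preset, times in results_ms.items():
    verdict = "fits" if max(times) < frame_budget_ms else "too slow"
    print(f"{preset}: avg {mean(times):.1f} ms, min {min(times):.1f}, max {max(times):.1f} -> {verdict}")
[/CODE]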

madVR is changing fast so it isn't time for standard benchmark tools yet.
Old 1st July 2015, 21:57   #31475  |  Link
QBhd
^^

Well put. Let's put this slightly off-topic tangent to bed.

QB
Old 1st July 2015, 22:04   #31476  |  Link
Arm3nian
Quote:
Originally Posted by Asmodian View Post
How would a benchmark differ from simply trying it? In your examples you don't need a benchmark tool (what would it do?); set the setting and see what the rendering time is. The complexity of the possible settings is the only issue, and if you define what "128 neuron doubling" means for all the other settings, source resolution, destination resolution, etc., it is easy to "benchmark" any GPU you own at "128 neuron doubling".
I can't test a GPU's performance in madVR if I don't have the GPU. It would help when buying a new card because I could see what scores others have gotten. Users could also post stock and overclocked results. I don't want it to be an e-peen tool, just something to show relative performance. Nothing like that currently exists. The best you can do is calculate how much faster a certain GPU is than another using non-madVR benchmarks, then apply that percentage to your current GPU's rendering times to get an estimated projection. Not very accurate.
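
That projection is just a ratio; shown here with invented numbers purely to illustrate the arithmetic:

[CODE]
# Crude cross-GPU projection: scale your measured madVR render time by the ratio of
# generic (non-madVR) benchmark scores. All numbers are invented for illustration.
my_render_time_ms = 30.0     # measured on the GPU I own
my_bench_score    = 10000    # my GPU's score in some generic benchmark
new_bench_score   = 15000    # candidate GPU's score in the same benchmark

projected_ms = my_render_time_ms * my_bench_score / new_bench_score
print(round(projected_ms, 1), "ms (rough guess only, for the reasons above)")
[/CODE]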

Quote:
Originally Posted by Asmodian View Post
Maybe a tool that played a stock video using a collection of preset options with a particular player and at a specific resolution? It could report the average/min/max rendering times at each setting. This would be easy to do now, no special tool needed, but maybe review sites would include madVR performance if such a tool existed.
Yes, this is precisely what I envision. If madVR were included in benchmarks run by popular sites, it would gain more publicity and attract more users, including enthusiasts.

Quote:
Originally Posted by Asmodian View Post
However, it sounds like a lot of work, certainly more work than the benefit justifies before version 1.0 of madVR.

madVR is changing fast so it isn't time for standard benchmark tools yet.
I agree, there is a lot of new stuff happening at the moment. Once things are a bit more finalized, a benchmarking tool could be of use.
Old 1st July 2015, 22:14   #31477  |  Link
e-t172
Quote:
Originally Posted by Asmodian View Post
How would a benchmark differ from simply trying it?
Reading a number off a publicly available table (or better, a chart) is much easier than having to manually try dozens of combinations. The former takes seconds, the latter takes hours.
Old 1st July 2015, 22:19   #31478  |  Link
huhn
Quote:
Originally Posted by e-t172 View Post
Reading a number off a publicly available table (or better, a chart) is much easier than having to manually try dozens of combinations. The former takes seconds, the latter takes hours.
And what is the point if you never really judge the picture quality of the scaler yourself?
Old 1st July 2015, 22:43   #31479  |  Link
Arm3nian
Quote:
Originally Posted by huhn View Post
and what is the point if you never really judge the picture quality of the scaler yourself?
Nearly all of the settings, including NNEDI3 and the newly implemented ones, have a positive correlation between quality and GPU usage.
Old 1st July 2015, 23:05   #31480  |  Link
e-t172
Quote:
Originally Posted by huhn View Post
and what is the point if you never really judge the picture quality of the scaler yourself?
  • Being able to know which ones you can cross out from the start because they won't be fast enough.
  • When you know what the scaler looks like, but you don't know how fast it will be because you're using a new GPU.
  • Sometimes we trust other people to be the experts on figuring out what scalers are the best or even worth considering. Not everybody has dozens of hours to spend trying lots of combinations of settings across a wide variety of samples. They just want something that is most likely to look best on typical content considering their hardware.

There are tons of reasons why having easily accessible performance data for various option combinations is a very good idea. I suspect it would also help madshi know what is worth optimizing.
