Old 5th July 2015, 09:49   #31521  |  Link
edigee
DXVA2 (native), or NVIDIA CUVID. DXVA2 (copy-back) I only use for 10-bit H.265 60Hz videos, which doesn't apply to your card because it doesn't have full hardware decoding for H.265/HEVC.
I have a GTX 960.
NVIDIA CUVID works the same as DXVA2 (native) in terms of CPU usage (very low on both), but somehow it gives weird colors; for instance, skin tones are a bit reddish and less detailed.
Edit: For Kepler cards, 347.88 is still the best driver. The latest drivers seem to work well (well, some of them) only on Maxwell cards. I have a GT 640 in another rig, and the drivers since 350.12 are causing all kinds of issues.

Old 5th July 2015, 10:10   #31522  |  Link
aufkrawall
Quote:
Originally Posted by SecurityBunny View Post
I don't think my GTX 780 classified is considered a mid/low level graphics card. Lately my rendering times have been all over the place, much higher than normal. I think madVR might be putting my card in a lower clock state since the last few updates. Is there any way to stop it from downclocking the card after a minute of playback besides running it at full power constantly?
Yes, set the power profile to maximum performance in the Nvidia control panel (but just for MPC-HC, not globally). You won't get the maximum clock that way either, but the GPU should stay in its boost state.
But why would you want to do this? The GPU should ramp up to a higher clock quickly enough.

Quote:
Originally Posted by SecurityBunny View Post
Also, what is the recommended hardware decoder to use with madVR in LAV video settings?
There aren't any drawbacks versus CPU decoding when using DXVA2 copy-back; LAV does it very efficiently.
However, you may not want to enable it for 4K, since the Kepler VPU isn't fast enough for 4K at high bitrates.
Old 5th July 2015, 11:09   #31523  |  Link
huhn
Quote:
Originally Posted by edigee View Post
DXVA2 (native), or NVIDIA CUVID. DXVA2 (copy-back) I only use for 10-bit H.265 60Hz videos, which doesn't apply to your card because it doesn't have full hardware decoding for H.265/HEVC.
I have a GTX 960.
NVIDIA CUVID works the same as DXVA2 (native) in terms of CPU usage (very low on both), but somehow it gives weird colors; for instance, skin tones are a bit reddish and less detailed.
Edit: For Kepler cards, 347.88 is still the best driver. The latest drivers seem to work well (well, some of them) only on Maxwell cards. I have a GT 640 in another rig, and the drivers since 350.12 are causing all kinds of issues.

CUVID technically does the same as DXVA2 copy-back; it just forces the GPU into a high power state, which defeats the point of using a hardware decoder, and to be honest I think CUVID is totally worthless these days. And the reddish tint would be a bug; do you have a screen with an OSD?
Old 5th July 2015, 11:56   #31524  |  Link
SecurityBunny
Quote:
Originally Posted by aufkrawall View Post
Yes, set the power profile to maximum performance in the Nvidia control panel (but just for MPC-HC, not globally). You won't get the maximum clock that way either, but the GPU should stay in its boost state.
But why would you want to do this? The GPU should ramp up to a higher clock quickly enough.
Unfortunately that is exactly what I would like to avoid. Ideally I'd like to keep power usage and temperature down, not run the card at maximum power for the duration of an entire video. I've been aiming for a rendering time under the vsync interval for smooth playback. When I first start a video, rendering time is 8ms with GPU usage at ~20% and the clock at 1,110MHz. After a minute or so of playback, rendering time becomes 18ms with GPU usage spiking to ~45% and the clock down at 666MHz.

I don't recall having this problem with the GPU throttling a few months back.
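
One way to confirm what the card is doing is to log its clocks and performance state during playback. A minimal sketch, assuming the pynvml package (Python bindings for NVIDIA's NVML) is installed; not tied to any particular player:

Code:
import time
import pynvml  # pip install pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Log clock, utilization and P-state once per second while a video
# plays, to catch the moment the driver drops to a lower power state.
for _ in range(60):
    clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
    pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)
    print(f"{clock} MHz, {util}% load, P{pstate}")
    time.sleep(1)

pynvml.nvmlShutdown()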

Quote:
Originally Posted by edigee View Post
DXVA2 (native), or NVIDIA CUVID.
NVIDIA CUVID works the same as DXVA2 (native) in terms of CPU usage (very low on both), but somehow it gives weird colors; for instance, skin tones are a bit reddish and less detailed.
Quote:
Originally Posted by aufkrawall View Post
There aren't any drawbacks versus CPU decoding when using DXVA2 copy-back; LAV does it very efficiently.
However, you may not want to enable it for 4K, since the Kepler VPU isn't fast enough for 4K at high bitrates.
Quote:
Originally Posted by huhn View Post
CUVID technically does the same as DXVA2 copy-back; it just forces the GPU into a high power state, which defeats the point of using a hardware decoder, and to be honest I think CUVID is totally worthless these days. And the reddish tint would be a bug; do you have a screen with an OSD?
Thanks. I haven't used a hardware decoder for years, since I read somewhere that it wasn't recommended with madVR. If there isn't a problem with it decoding 10-bit content nowadays, I'll go ahead and enable DXVA2 (native), if that's the best of the three for quality and speed.

Is there much of a difference between native and copy-back?

Quote:
Originally Posted by edigee View Post
Edit: For Kepler cards, 347.88 is still the best driver. The latest drivers seem to work well (well, some of them) only on Maxwell cards. I have a GT 640 in another rig, and the drivers since 350.12 are causing all kinds of issues.
Unfortunately I cannot downgrade driver versions since I am on Windows 10; it automatically forces updates, so I'm stuck on 353.38 for the time being. Fortunately I haven't run into any problems with it.

Quote:
Originally Posted by edigee View Post
DXVA2 (copy-back) I only use for 10-bit H.265 60Hz videos, which doesn't apply to your card because it doesn't have full hardware decoding for H.265/HEVC.
I have a GTX 960.
I'm able to play H.265/HEVC content. I'm assuming this works because it falls back to the software decoder, and that full hardware decoding only has the benefit of being faster?

One final question: is there a list of which 'trade quality for performance' options are safe to check, with little to no impact on actual quality?

Old 5th July 2015, 12:24   #31525  |  Link
michkrol
Quote:
Originally Posted by SecurityBunny View Post
Unfortunately that is exactly what I would like to avoid. Ideally I'd like to keep power usage and temperature down, not run the card at maximum power for the duration of an entire video. I've been aiming for a rendering time under the vsync interval for smooth playback.
You need to aim below the frame interval, not the vsync interval.
If you aren't getting dropped frames, I don't see how throttling would be an issue. I get almost 32ms render times with power saving enabled, and playback is rock solid with 24fps videos and smooth motion enabled.
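
The arithmetic behind that distinction, as a trivial worked example for 24fps content on a 60Hz display:

Code:
fps = 24.0          # video frame rate
refresh = 60.0      # display refresh rate

frame_interval = 1000.0 / fps      # ~41.7 ms: the real render budget
vsync_interval = 1000.0 / refresh  # ~16.7 ms: only the present deadline

# A 32 ms render time misses individual vsyncs but still beats the
# frame interval, so no frames get dropped.
print(f"frame: {frame_interval:.1f} ms, vsync: {vsync_interval:.1f} ms")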
Quote:
Originally Posted by SecurityBunny View Post
I'm able to play H.265/HEVC content. I'm assuming this works because it falls back to the software decoder, and that full hardware decoding only has the benefit of being faster?
If you're using LAV Filters (MPC-HC's built-in codecs are LAV too), the decoder falls back to software for formats your hardware can't decode; that includes 10-bit video on almost all GPUs.
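
That decision boils down to a capability lookup. A toy illustration (not LAV's actual code; the Kepler capability table below is a rough assumption):

Code:
def pick_decoder(codec, bit_depth, gpu_caps):
    # Use hardware only when the GPU advertises support for the exact
    # codec/bit-depth combination; otherwise fall back to software.
    if (codec, bit_depth) in gpu_caps:
        return "hardware"
    return "software"

# Rough Kepler-era table: H.264 8-bit in hardware; HEVC and 10-bit
# content fall back to software decoding.
kepler = {("h264", 8), ("mpeg2", 8), ("vc1", 8)}
print(pick_decoder("hevc", 10, kepler))  # -> software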
Old 5th July 2015, 12:37   #31526  |  Link
huhn
Quote:
Originally Posted by SecurityBunny View Post
Thanks. I haven't used a hardware decoder for years, since I read somewhere that it wasn't recommended with madVR. If there isn't a problem with it decoding 10-bit content nowadays, I'll go ahead and enable DXVA2 (native), if that's the best of the three for quality and speed.
All H.264/H.265 decoders should produce the same quality; otherwise the decoder is buggy.
Software is still the best and safest way to decode video. It simply has better error handling.

Hybrid H.265 decoding looks like a bad joke. I wouldn't use it.

Quote:
Is there much of a difference between native and copy-back?
Native has some limitations, and copy-back avoids them.
The performance impact of copy-back on an NVIDIA system is completely negligible; copy-back is just more flexible.
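
If you want to check which mode LAV Video is actually set to, the settings live in the registry. A minimal sketch; the key path is where LAV stores its configuration, but the value name and its numeric meaning are assumptions from inspecting an install, not a documented API:

Code:
import winreg  # Windows only

# LAV Video keeps its settings under HKCU\Software\LAV\Video.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\LAV\Video") as key:
    # "HWAccel" is assumed to be the DWORD selecting the decoder
    # (e.g. none / CUVID / QuickSync / DXVA2 variants).
    hwaccel, _ = winreg.QueryValueEx(key, "HWAccel")
    print("HWAccel mode:", hwaccel)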
Old 5th July 2015, 13:40   #31527  |  Link
madshi
Registered Developer
Quote:
Originally Posted by aufkrawall View Post
Yes, it's the AR filter which leads to the darkening.
But without it, the ringing is very annoying. The anti-ringing filter for Jinc doesn't have this issue.

Btw: Lanczos upscaling looks terrible with the cartoon sample. Ringing outta hell...
The next build will have super-xbr chroma upscaling with either the strict/aggressive AR or the high-quality (but slower) madVR AR algo. So please try again with the next build (out in a couple of hours).

Quote:
Originally Posted by SecurityBunny View Post
Perhaps that may explain why the present queue doesn't fill all the way initially when using 24Hz, but it doesn't explain the bug. The bug occurs at all refresh rates, as far as my testing goes.

The bug: when toggling to exclusive fullscreen, then windowed, then back to fullscreen again, the render and present queues completely drop and playback stutters. This only occurs with D3D11 10-bit. You need to pause and unpause the video to get the queues to fill properly again. With D3D9 10-bit or D3D11 8-bit, I can freely toggle in and out of exclusive fullscreen without the queues failing to fill.

Plus, going fullscreen takes longer with D3D11 10-bit than with D3D9 10-bit or D3D11 8-bit.
Maybe I can see something if you create a debug log of this situation. Try to reproduce the problem quickly, so the log doesn't get too long. When the problem occurs, let it stutter for 10 seconds and do not change anything at that point (this is important! otherwise the log will be hard to interpret), then simply close the media player.

Quote:
Originally Posted by leeperry View Post
TYVM for providing xbr-25; I haven't had time yet, but I'll re-compare it all tomorrow. That said, I was plenty happy with NEDI+SR in 88.13, and I'm afraid the EE in xbr and NNEDI3 is just part of their design; I mean, you want sharper edges, you got them, duh... NEDI looks more natural and seems less aggressive about seeking edges. Different methods for different needs, IMO. NEDI is perfect for 720p@1080p, xbr50 for tiny videos, but ideally I would like to find an in-between for untouched DVDs that require something sharp, but not overly so.
So I can remove super-xbr-25 again?

Please post screenshots that show where/why you like NEDI better than super-xbr. Thanks.

Quote:
Originally Posted by aufkrawall View Post
I assume that NNEDI3 will never really be replaced in terms of quality (if quality is defined as the absence of aliasing and ringing, and the cleanest reconstruction of lines).
Don't give up hope just yet. I mean I've no idea, maybe you're right. Or maybe not, we'll see...

Quote:
Originally Posted by littleD View Post
Hello, sorry if that's a naive question, but how about implementing in madVR an option to dynamically set the scaling algorithm based on GPU usage?
Maybe at some time in the future, but not any time soon. Too many other things to do first.

Quote:
Originally Posted by leeperry View Post
-I still far prefer NEDI+SR for 720p@1080p because it looks natural and just really great IMHO
Screenshots?

Quote:
Originally Posted by leeperry View Post
-Actually I'm cool with the current stock settings of SR. I mean, 2 passes don't seem to be enough and 4 are too much; 0.75 seems to hit the spot, being neither too soft nor too sharp; 0.10 softness looks equally good, because 0.05 is edgier and 0.15 too blurry; and HQ disabled looks a hell of a lot better too, yay! When I enable HQ, the "watching through a window" feeling disappears; it looks dull and sorta noisy... really hate the look, major bottleneck at work here... "all that for that"
Sorry to say, but the default settings are sharpness 1.0 and softness 0.0; you probably still had the old defaults from the old SuperRes algo stored. The current plan is to remove softness. Also, HQ enabled currently seems better in several cases, though maybe worse in some others. But the current tendency is in favor of HQ enabled.

If you have examples of where HQ looks worse, please show screenshots - thanks!

Quote:
Originally Posted by leeperry View Post
I fully agree that if anything's currently lacking in mVR, it's an sxbr strength knob, or at least more steps, because 37 might be right up my alley and 87 yours
Not sure who you're agreeing with here, maybe yourself? But sorry, no knob planned for now.

Quote:
Originally Posted by oddball View Post
I'm a bit confused by the fact that there are now sharpening and SuperRes options in 3 different places. If I'm using one for upscaling and one for native res, do they affect each other? For instance, if I tick sharpening under image enhancements, does it also affect chroma upscaling and/or upscaling refinement? In which order should I use them, and with which settings?
Consider the current builds a work in progress in terms of sharpening and SuperRes. We're still trying to figure out the optimal parameters and such, so right now nobody can tell you exactly which setting is "optimal".

Quote:
Originally Posted by oddball View Post
Also, since there is now a sharpening option, can we have a luma denoiser (with size and strength controls) in future builds?
Sure, if you write one?

Quote:
Originally Posted by Dlget View Post
My specs: i5 2500K with a GTX 960, AOC e2352phz display.
Why am I getting D3D 8-bit?
And what color output should I use in the NVIDIA settings: YCbCr, or RGB in full range?
What should I use for my display, 8bpc or 12bpc?
I watch anime frequently; are my settings OK for that?
Quote:
Originally Posted by Dlget View Post
Can anyone suggest the best settings for a GTX 960 OC, i5 2500K, 8GB RAM?
I'm new to these things.
These questions get asked a lot. Most users here are tired of answering the same question every week, which is probably why you haven't gotten many (or any?) replies yet. It might make sense to look for a madVR guide; there are several of them, some better and some worse.

FWIW, you could simply try the default settings and check whether they work alright. If they do, you can play with the settings (e.g. image upscaling or doubling) and check whether your GPU can handle them and whether you can see a difference.

Quote:
Originally Posted by XMonarchY View Post
Why do I get zero issues during playback on my 60Hz 1080p HDTV, but a crazy number of presentation glitches on my 1080p 120Hz monitor, using identical settings? Average rendering time is 15ms and present time is below 6-8ms or so in both cases. Max rendering time often switches between 16ms and 33ms, and present time usually stays at about 33ms but at times goes to 140ms. I get these presentation glitches only in exclusive mode. I tried disabling "present several frames in advance", but that made no difference. Turning off Direct3D 11 made no difference. Enabling/disabling "present a frame for every VSync" also made no difference. I also tried toning down settings to make it easier on the GPU (no NNEDI3, no SuperRes, etc.), but that made no difference either!
Do you have smooth motion FRC enabled? If so, try disabling it.

OS? GPU?

Quote:
Originally Posted by MysteryX View Post
Agreed. NEDI+SR combine very well. Jinc+SR doesn't work well. I like that xbr has a very low performance cost compared to other algorithms, but it doesn't do nearly as well as NEDI+SR.
Screenshots?

Quote:
Originally Posted by Arm3nian View Post
Madshi, you should write a madVR benchmark tool like SVP has. Then all users could upload their results to a public spreadsheet. It would help troubleshoot performance problems by allowing comparisons, and it would give a general idea of what to expect from a certain GPU/machine with different settings.
Quote:
Originally Posted by Asmodian View Post
madVR is changing fast so it isn't time for standard benchmark tools yet.
^

Quote:
Originally Posted by yukinok25 View Post
Just wanted to say that the latest version is absolutely astonishing!
The image quality has improved visibly, and I am using the same settings as always, with no issues whatsoever.

Madshi, is there a way I can donate something to this project? Do you need or accept donations?
Thanks. I don't accept donations just yet; I plan to start doing that when madVR reaches v1.0, which is still some time away...

Quote:
Originally Posted by omarank View Post
Yes, super-xbr has improved a lot in the latest version. It may replace Jinc now, but in some cases I still find Jinc a tad better due to a more natural look. Please open this image and toggle between Jinc and super-xbr. The Avatar movie can be a good sample, too.
That's a 4K image. Doubling it with super-xbr results in 8K. On my 1680x1050 LCD display I've no idea where to look in that image for differences. Can you create a screenshot comparison, maybe with a part extracted where the difference is especially strong in favor of Jinc?
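
For anyone preparing such a comparison: an amplified difference image makes subtle scaler differences visible at a glance. A minimal sketch with NumPy/Pillow; the filenames are placeholders:

Code:
import numpy as np
from PIL import Image

# Load two renderer screenshots of the same frame (hypothetical names).
a = np.asarray(Image.open("jinc.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("superxbr.png").convert("RGB"), dtype=np.int16)

# Amplify the per-pixel difference 8x so small deviations stand out.
diff = np.clip(np.abs(a - b) * 8, 0, 255).astype(np.uint8)
Image.fromarray(diff).save("diff_x8.png")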

Quote:
Originally Posted by digby View Post
I'm getting lockups after updating to the latest version of madVR. It was either madVR or the Strongene HEVC decoder I recently installed, but I disabled Strongene and the lockups still occur. Playing HEVC with LAV, ffdshow raw, and SVP on an Intel GPU.
Is the media player itself also locked up (e.g. the menu doesn't open, buttons don't even move when pressed, etc.)?

Quote:
Originally Posted by SecurityBunny View Post
Is debanding supposed to be so performance intensive? With it off, my rendering time is 10ms. With it on high/high (or even low/high), rendering time is 30-31ms. Enabling 'don't analyze gradient angles for debanding' drops the rendering time to 20ms.
Debanding with gradient angle analysis is relatively computation heavy. The old madVR builds didn't analyze gradient angles for the "high" debanding preset; the latest build does. Maybe you're used to the old "high" preset, which of course was faster? You can get it back simply by checking the "don't analyze gradient angles" tweak. Of course, the debanding vs. detail loss ratio improves if you do let madVR analyze the gradient angles.
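
To make the cost intuition concrete: even a toy deband already needs a large-radius low-pass plus a per-pixel flatness test, and gradient angle analysis comes on top of that. A rough sketch of the basic idea (not madVR's algorithm), for a grayscale image in [0,1]:

Code:
import numpy as np
from scipy.ndimage import uniform_filter

def deband(img, radius=8, threshold=8.0 / 255.0):
    # Large-radius box blur approximates the intended smooth gradient.
    blur = uniform_filter(img, size=2 * radius + 1)
    # Only replace pixels where the area is nearly flat, so banding
    # steps get blended away while real detail is left untouched.
    flat = np.abs(img - blur) < threshold
    return np.where(flat, blur, img)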

Quote:
Originally Posted by SecurityBunny View Post
I don't think my GTX 780 classified is considered a mid/low level graphics card. Lately my rendering times have been all over the place, much higher than normal. I think madVR might be putting my card in a lower clock state since the last few updates.
Possible. I implemented a few small performance improvements, and "high" deband got slower, see above. The performance improvements were really small, though, nothing dramatic (other than super-xbr and FineSharp).
Old 5th July 2015, 14:19   #31528  |  Link
6233638
Quote:
Originally Posted by madshi View Post
Hi there,
the latest Calman release now has some improvements when running Calman on the same PC as madTPG. The changes have been inspired by your "complaint"... So it would be awesome if you could try the new build and maybe provide feedback here:

http://www.spectracal.com/forum/view...hp?f=94&t=5729

Stacey Spears has actively asked for feedback about this. Would be great if you could tell him/us if it works better now, or if there's still something that could be improved.
Thank you. This generally seems to be working, though it doesn't seem to be entering FSE mode, so the LUT will only be based on 8-bit measurements rather than 10-bit.

Old 5th July 2015, 14:33   #31529  |  Link
Anime Viewer
Troubleshooter
Quote:
Originally Posted by SecurityBunny View Post
If there isn't a problem with it decoding 10bit encoding nowadays, I'll go ahead and enable DXVA2 (native) if that is the best option out of the three for quality and speed.
Some extremely old (poorly encoded) videos can show pixelation with hardware acceleration on, but those are rare to come across these days. To a degree, some of madVR's error correction features may lessen or eliminate the pixelation, but it's hard to say for sure. Try with it enabled, and if you see pixelation in your videos, you can set it to none instead.

Native should be the fastest, given that it doesn't copy data back out to the system/CPU. Depending on the speed of your non-GPU components (memory/CPU), you may or may not notice a difference between copy-back and native, but in most cases native should be faster. This is more of a LAV topic than a madVR one; there are quite a few articles/posts about native vs. copy-back that you can find with a search engine, and if those don't address the issue to your liking, you can post in the LAV forum.
__________________
System specs: Sager NP9150 SE with i7-3630QM 2.40GHz, 16 GB RAM, 64-bit Windows 10 Pro, NVidia GTX 680M/Intel 4000 HD optimus dual GPU system. Video viewed on LG notebook screen and LG 3D passive TV.

Old 5th July 2015, 15:57   #31530  |  Link
aufkrawall
Quote:
Originally Posted by Anime Viewer View Post
Some extremely old (poorly encoded) videos can show pixelation with hardware acceleration on
With copy-back? How is that possible?

Btw: there were copy-back options in madVR some time ago. What happened to them?
Old 5th July 2015, 16:20   #31531  |  Link
madshi
Registered Developer
Quote:
Originally Posted by 6233638 View Post
Thank you. This generally seems to be working, though it doesn't seem to be entering FSE mode, so the LUT will only be based on 8-bit measurements rather than 10-bit.
Would you mind reporting this directly in the SpectraCal thread (see the link in my post that you quoted)? I mean, I could play messenger and duplicate your posts there and Stacey's posts here, but that would make the communication between the two of you very slow and cost me additional time. Thanks!

Quote:
Originally Posted by aufkrawall View Post
Btw: there were copy-back options in madVR some time ago. What happened to them?
They got removed because the latest LAV build has faster copy-back routines than madVR ever had. If you want DXVA decoding with the best quality, use LAV's copy-back functionality. If you want DXVA decoding at the fastest speed, use DXVA native, but then you'll get ever so slightly lower chroma quality on some GPUs (Intel, NVIDIA).
Old 5th July 2015, 16:21   #31532  |  Link
aufkrawall
Alright, thanks.
Is OpenCL used on AMD to prevent the quality loss?
Old 5th July 2015, 16:24   #31533  |  Link
madshi
Registered Developer
No. The OpenCL interop cost on AMD is too high for OpenCL to be of use for things like that.
Old 5th July 2015, 17:37   #31534  |  Link
omarank
Registered User
 
Join Date: Nov 2011
Posts: 180
Quote:
Originally Posted by madshi View Post
That's a 4K image. Doubling it with super-xbr results in 8K. On my 1680x1050 LCD display I've no idea where to look in that image for differences. Can you create a screenshot comparison, maybe with a part extracted where the difference is especially strong in favor of Jinc?
I was talking about chroma upscaling, not image doubling; I mentioned that in my post as well. Can you please check again, or do you still need a screenshot comparison?


Feedback on SuperRes:
Quote:
Originally Posted by madshi View Post
Questions:

1) Which image upscaling/doubling algorithm do you like to use with SuperRes, and why?
2) Which values do you like for "strength" and "softness"? Please note that the default values are strength=1.0 and softness=0.0, and these *may* be the best values. But you can still try other values to see if you like them more.
3) The "use HQ downscaling" option changes the overall "look" of SuperRes a bit. Which look do you prefer? Please note that with the option turned on, you may have to increase the number of passes, because the SuperRes effect is slightly less intense with this option turned on.
4) How many passes should be used as the default?
1) NNEDI3, because it has the least aliasing and ringing. It looks the cleanest.
2) The default values look good to me. I tried other values but could not decide whether it was getting better or worse.
3) I prefer HQ downscaling to be enabled.
4) 2 passes look fine. However, I set the number of passes to 3; there is not much difference, though.
Old 5th July 2015, 17:58   #31535  |  Link
madshi
Registered Developer
Quote:
Originally Posted by omarank View Post
I was talking about chroma upscaling, not image doubling; I mentioned that in my post as well. Can you please check again, or do you still need a screenshot comparison?
It's still a 4K image, and on a quick check I didn't know where to look; I didn't see any obvious differences between Jinc and super-xbr chroma upscaling.

Quote:
Originally Posted by omarank View Post
Feedback on SuperRes:

1) NNEDI3, because it has the least aliasing and ringing. It looks the cleanest.
2) The default values look good to me. I tried other values but could not decide whether it was getting better or worse.
3) I prefer HQ downscaling to be enabled.
4) 2 passes look fine. However, I set the number of passes to 3; there is not much difference, though.
Thanks! You were using strength=1.0 and softness=0.0, right? I'm asking just to be sure, because madVR did store and reuse the settings you used with the older SuperRes.
Old 5th July 2015, 19:03   #31536  |  Link
edigee
Quote:
Originally Posted by huhn View Post
CUVID technically does the same as DXVA2 copy-back; it just forces the GPU into a high power state, which defeats the point of using a hardware decoder, and to be honest I think CUVID is totally worthless these days. And the reddish tint would be a bug; do you have a screen with an OSD?
You're right. The last time I saw that kind of difference was on a GT 640 with older versions of madVR. It seems to be OK now.
Old 5th July 2015, 19:16   #31537  |  Link
THX-UltraII
It's been a while since I asked this question, but are there ANY plans at all to get madVR working with PowerDVD in the future? I would LOVE to use a 3DLUT file with PowerDVD!
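
For anyone wondering what a 3DLUT actually does per pixel: it is just a lookup into a color cube. A minimal nearest-neighbor sketch (real renderers such as madVR interpolate between lattice points, typically trilinearly or tetrahedrally):

Code:
import numpy as np

def apply_3dlut(rgb, lut):
    # rgb: float array in [0,1], shape (..., 3)
    # lut: color cube of shape (n, n, n, 3) mapping source to target RGB
    n = lut.shape[0]
    idx = np.clip((rgb * (n - 1)).round().astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Identity LUT example: a 17^3 cube that maps every color to itself.
g = np.linspace(0.0, 1.0, 17)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(apply_3dlut(np.array([0.25, 0.5, 0.75]), lut))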
Old 5th July 2015, 19:19   #31538  |  Link
madshi
Registered Developer
madVR v0.88.15 released

http://madshi.net/madVR.zip

Code:
* super-xbr image doubling AR optimization: 20% performance boost
* super-xbr chroma upscaling now supports higher quality AR algo -> 6% slower
* super-xbr chroma "AR" option now switches between low/high AR quality
* modified Bilateral chroma upscaling algorithm parameters
Image doubling with super-xbr is now faster than doubling with Jinc AR!

For super-xbr chroma upscaling, v0.88.14 supported either "no AR" or the original super-xbr strict AR. v0.88.15 no longer supports "no AR". Instead, the "activate anti-ringing filter" option now activates the slower, high-quality madVR AR algorithm; with the option disabled, the strict AR algorithm is used instead. I know this is slightly confusing, but I think it's more useful to offer these two different AR algos than to offer no AR at all, considering that the strict AR is so fast that it barely makes a performance difference. I did think about which AR algo to use when the AR option in madVR is activated, but users usually expect higher quality and slower performance when such an option is enabled. So that's how I decided: with the option activated, the slower high-quality madVR AR algo is used, and with the option deactivated, the strict AR algo is used instead.
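
For readers wondering what an AR filter does in principle: one common form of anti-ringing clamps the upscaled result to the local min/max of the source, since a sharp linear scaler can overshoot (ring) around edges. A rough illustrative sketch for a grayscale image, not madVR's actual implementation:

Code:
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, zoom

def upscale_with_ar(img, factor=2):
    up = zoom(img, factor, order=3)  # cubic upscale, may ring at edges
    # Local floor/ceiling from the source's 3x3 neighborhoods,
    # expanded to the output size with nearest-neighbor.
    lo = zoom(minimum_filter(img, size=3), factor, order=0)
    hi = zoom(maximum_filter(img, size=3), factor, order=0)
    # Clamp so no output pixel overshoots its source neighborhood.
    return np.clip(up, lo, hi)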

FWIW, at this point I think the stream of super-xbr tweaks from the previous weeks has come to an end. No dramatic super-xbr changes are planned for the near future.

Would still love to get more feedback about the new SuperRes algo.
Old 5th July 2015, 19:52   #31539  |  Link
kasper93
MPC-HC Developer
Thanks for the new release.

For some reason the madshi.net domain is on the Malvertising filter list by Disconnect (apparently the uBlock add-on uses this list). You might want to contact them to clear this up.
Old 5th July 2015, 20:05   #31540  |  Link
Ver Greeneyes
Registered User
Thanks for the new release! Which AR algorithm is used for super-xbr image doubling? Or does AR not make sense there?