Old 3rd July 2015, 19:00   #31501  |  Link
Anime Viewer
Troubleshooter
 
Anime Viewer's Avatar
 
Join Date: Feb 2014
Posts: 339
Quote:
Originally Posted by digby View Post
I'm getting lockups after updating to the latest version of madvr. it was either madvr or strongene hevc decoder i recently installed. disabled strongene and lockups still occur. playing hevc with lav, ffdshow raw, & svp on intel gpu.
The usual questions asked when people report lockup/freezing:

Did you try resetting madVR to default settings, and see if the problem still exists?

Have you scanned your system for corrupt/mislinked filters/codecs using something like the Codec Tweak Tool? If not, do so, fix any errors it finds, and to be on the safe side you can also tell it to associate the default codecs/filters. Sometimes disabling decoders/filters/codecs doesn't really stop them, so you may have to uninstall the Strongene decoder you reported installing.
__________________
System specs: Sager NP9150 SE with i7-3630QM 2.40GHz, 16 GB RAM, 64-bit Windows 10 Pro, NVidia GTX 680M/Intel 4000 HD optimus dual GPU system. Video viewed on LG notebook screen and LG 3D passive TV.
Anime Viewer is offline   Reply With Quote
Old 3rd July 2015, 19:39   #31502  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
video chain question, again......

Is this the best (and even more important: correct) video chain when my main purpose is Blu-Ray movie playback:

Blu-Ray .m2ts file →
device properties in madVR on PC levels (0-255) →
the native display bit depth on 10 bit (or higher) →
LAV Filters video settings: RGB output on untouched (as is) →
display device (Epson TW9200 projector) on EXTENDED
THX-UltraII is offline   Reply With Quote
Old 3rd July 2015, 20:12   #31503  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
Quote:
Originally Posted by THX-UltraII View Post
Now that (I think) I know how it all works, I'll try to summarize the BTB/WTW discussion. You basically have three options:

1.
Set madVR to PC levels, set GPU to PC levels, set TV to PC levels (0-255). This means that BTB and WTW will be clipped; it also means that there is expansion involved, which might lead to banding. madVR's dithering does a good job of eliminating that.

2.
Set madvr to TV levels, set GPU to pc levels, set TV to TV levels. This means there will be neither clipping nor expanding.

3.
If your TV doesn't have settings to change black levels and is fixed to TV levels: set madVR to PC, GPU to TV. BTB and WTW will be clipped, and there is double processing going on (expansion by madVR -> reduction by GPU).

The second option is the best quality-wise, because there is no processing of the levels involved, and therefore no banding is introduced. The problem is that if you use your PC for anything but viewing through madVR, your levels will be messed up.

There is also a 4th option if you want both limited(video) and full-range(pc applications) to display properly. If you can adjust input mapping on your display (usually low or limited vs. normal or full) then use option 2 and switch display input to normal when displaying desktop applications.
I found my own post from a few years ago. Does this still apply and is it correct how I stated it back then?
THX-UltraII is offline   Reply With Quote
Old 3rd July 2015, 21:08   #31504  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by THX-UltraII View Post
I found my own post from a few years ago. Does this still apply and is it correct how I stated it back then?
You should clip everything below 16 and everything above 235; that's simply how it should work. Not clipping will destroy the contrast ratio.

Best is pure PC levels, if your end device supports PC levels properly.


I've only heard that some Panasonic sets support PC levels, but the results are bad, with banding, for whatever reason.
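[Editor's note: the level expansion and BTB/WTW clipping discussed above can be sketched in a few lines of Python. This is an editorial illustration of the standard limited-to-full-range mapping, not code from madVR.]

```python
def tv_to_pc(y):
    """Expand one limited-range (TV, 16-235) code to full range (PC, 0-255).
    Anything below 16 (BTB) or above 235 (WTW) gets clipped."""
    expanded = (y - 16) * 255.0 / 219.0
    return min(255, max(0, round(expanded)))

# 220 input codes are spread over 256 output codes, so some output values
# are skipped -- the gaps that show up as banding unless dithering hides them.
print([tv_to_pc(y) for y in range(16, 22)])   # [0, 1, 2, 3, 5, 6] -- 4 is skipped
print(tv_to_pc(5), tv_to_pc(240))             # 0 255 -- BTB and WTW are clipped
```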
huhn is offline   Reply With Quote
Old 3rd July 2015, 22:06   #31505  |  Link
Barnahadnagy
Registered User
 
Join Date: Apr 2014
Posts: 13
Interestingly, my ST50 doesn't have banding issues at PC levels.
About chroma scaling: There are very few scenes/movies where I can actually see a difference. One of these is the following example, where the chroma coding is atrocious (don't ask how it ended up like that). I think this demonstrates the potential of Bilateral quite well. Interestingly, SuperChromaRes makes about no difference here (and neither does Bicubic vs Jinc; softcubic does reduce aliasing, but at 100 it starts to bleed out of edges too).
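[Editor's note: as a toy 1-D analogy of why the choice of chroma upscaler shows up mainly at hard chroma edges. Nothing here is madVR's actual Bicubic/Jinc/Bilateral code; it only contrasts a blocky filter with a smoother one at an edge.]

```python
def upsample_nearest(row):
    """Double a 1-D chroma row by sample repetition (blocky)."""
    return [v for v in row for _ in range(2)]

def upsample_linear(row):
    """Double a 1-D chroma row with linear interpolation (smoother edge)."""
    out = []
    for i, v in enumerate(row):
        nxt = row[min(i + 1, len(row) - 1)]  # clamp at the border
        out.append(v)
        out.append((v + nxt) / 2)
    return out

chroma = [100, 100, 200, 200]            # a hard chroma edge
print(upsample_nearest(chroma))          # [100, 100, 100, 100, 200, 200, 200, 200]
print(upsample_linear(chroma))           # [100, 100.0, 100, 150.0, 200, 200.0, 200, 200.0]
```

With a flat chroma plane every filter gives the same result; it is only at edges like this that the filters diverge, which matches the observation that differences are visible in so few scenes.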
Barnahadnagy is offline   Reply With Quote
Old 3rd July 2015, 22:27   #31506  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,646
Quote:
Originally Posted by Barnahadnagy View Post
Interestingly, my ST50 doesn't have banding issues at PC levels.
Neither does my VT50; PC,TV or TV,PC look pretty much exactly the same on the greyscale ramp.
ryrynz is offline   Reply With Quote
Old 3rd July 2015, 23:05   #31507  |  Link
toniash
Registered User
 
Join Date: Oct 2010
Posts: 131
Quote:
Originally Posted by toniash View Post
Anyone knows how to solve this:
"error: input is not a PCH file: 'cl_kernel.h.pch'
fatal error: 'cl_kernel.h.pch' does not appear to be a precompiled header file" ??
I get it when trying to use s-xbr on XP SP3 and Nvidia 352.86
A driver reinstallation solved it
toniash is offline   Reply With Quote
Old 4th July 2015, 05:36   #31508  |  Link
THEAST
Registered User
 
Join Date: Apr 2009
Posts: 76
I reported an issue a while back with duplicate frames being displayed after "excessive jumping" in MPC-HC with madVR used as the renderer, but nobody else seemed to have experienced it. Now I have pretty much pinpointed the issue: it is caused by madVR's smooth motion, and for me it is 100% reproducible with AVI and WMV files (it doesn't seem to happen with MP4 and certainly doesn't happen with MKV). When I jump through a video multiple times consecutively using MPC-HC's built-in jump backward/forward shortcut keys and then stop, sometimes after less than a second, some frames are displayed that are clearly from a different scene. Disabling smooth motion eliminates the issue regardless of container. This doesn't seem to be decoder/splitter-dependent, since I can reproduce it with every combination of ffdshow/LAV as decoder and Haali/LAV as splitter. It also isn't hardware-dependent, since I experience this issue both on my laptop with a GTX 850M and my desktop with an HD 7950. OS is Win 7 SP1 x64 with madVR and MPC-HC x86.

Steps to reproduce:
1. Play a WMV file with madVR used as renderer and smooth motion set to "always".
2. Start jumping back and forth in the file using MPC-HC's built-in shortcuts 3-5 times and wait for one second or two between each jump (jumping forward seems to work best).
3. Stop jumping and watch the video carefully.
4. After 1-2 seconds you should see some random irrelevant frames on the screen that last less than one second and might repeat again after a few seconds.
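[Editor's note: for readers unfamiliar with smooth motion, the timing idea behind this kind of frame blending can be sketched as follows. This is a toy model of the blending weights only, not madVR's actual implementation; function name and numbers are illustrative.]

```python
def blend_weight(src_fps, dst_hz, refresh_idx):
    """For one display refresh, return (source frame index, blend fraction
    of the *next* source frame). A toy model of smooth-motion-style
    frame-rate conversion timing."""
    pos = refresh_idx * src_fps / dst_hz   # position in source-frame units
    idx = int(pos)
    return idx, pos - idx

# 24 fps content on a 60 Hz display: instead of a juddery 3:2 repeat,
# every refresh shows a weighted blend of two neighbouring source frames.
for i in range(5):
    idx, w = blend_weight(24, 60, i)
    print(idx, round(w, 3))   # 0 0.0 / 0 0.4 / 0 0.8 / 1 0.2 / 1 0.6
```

If stale frames remain in the blend queue after a seek, a blend could briefly mix in a frame from the previous position, which is consistent with the symptom described above.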

Last edited by THEAST; 16th October 2015 at 05:28.
THEAST is offline   Reply With Quote
Old 4th July 2015, 08:48   #31509  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by THX-UltraII View Post
I found my own post from a few years ago. Does this still apply and is it correct how I stated it back then?
Yes, it does.
Personally, I use #2.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
James Freeman is offline   Reply With Quote
Old 5th July 2015, 02:45   #31510  |  Link
digby
Registered User
 
Join Date: Sep 2014
Posts: 2
Quote:
Originally Posted by Anime Viewer View Post
The usual questions asked when people report lockup/freezing:

Did you try resetting madVR to default settings, and see if the problem still exists?

Have you scanned your system for corrupt/mislinked filters/codec using something like the codec tweak tool? If not do so, fix any errors it finds, and to be on the safe side you can also tell it to associate default codec/filters. Sometimes disabling decoders/filters/codec doesn't really stop them, so you may have to uninstall the strongene decoder you reported installing.
Did the reset and used the tweak tool to set some splitters to LAV (doubt it matters since I use MPC). Still locked up, though. The picture freezes but the sound continues for another 5-10 secs before silence. The mouse still moves but everything else is frozen. On another computer with an AMD GPU there are no lockups; just this one with the Intel GPU. I wonder if it has anything to do with SVP: I'm using madVR on the Intel GPU and SVP on a 5450. I've been using this split setup for months, though, so why would it be a problem now?
digby is offline   Reply With Quote
Old 5th July 2015, 06:04   #31511  |  Link
SecurityBunny
Registered User
 
Join Date: Jul 2013
Posts: 76
Is debanding supposed to be so performance-intensive? With it off, my rendering time is 10ms. With it on high/high (or even low/high), rendering time is at 30-31ms. Enabling 'don't analyze gradient angles for debanding' drops the rendering time to 20ms.

MadVR 0.88.14 x64
MPC-HC 1.7.9.30 x64
GeForce 353.38

Last edited by SecurityBunny; 5th July 2015 at 07:32.
SecurityBunny is offline   Reply With Quote
Old 5th July 2015, 07:31   #31512  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
Quote:
Originally Posted by SecurityBunny View Post
Is debanding supposed to be so performance-intensive? With it off, my rendering time is 10ms. With it on high/high (or even low/high), rendering time is at 30-31ms. Enabling 'don't analyze gradient angles for debanding' drops the rendering time to 20ms.

MadVR 0.88.14 x64
MPC-HC 1.7.9.30 x64
GeForce 353.38
On a mid-level to low-level graphics card I would consider debanding a medium-cost processing feature compared to other madVR features.
Warner306 is offline   Reply With Quote
Old 5th July 2015, 07:45   #31513  |  Link
SecurityBunny
Registered User
 
Join Date: Jul 2013
Posts: 76
Quote:
Originally Posted by Warner306 View Post
On a mid-level to low-level graphics card I would consider debanding a medium-cost processing feature compared to other madVR features.
I don't think my GTX 780 classified is considered a mid/low level graphics card. Lately my rendering times have been all over the place, much higher than normal. I think madVR might be putting my card in a lower clock state since the last few updates. Is there any way to stop it from downclocking the card after a minute of playback besides running it at full power constantly?

Also, what is the recommended hardware decoder to use with madVR in LAV video settings?

Last edited by SecurityBunny; 5th July 2015 at 09:46.
SecurityBunny is offline   Reply With Quote
Old 5th July 2015, 09:49   #31514  |  Link
edigee
Registered User
 
Join Date: Jan 2010
Posts: 169
DXVA2 (native), or NVIDIA CUVID. DXVA2 (copy-back) I only use for 10-bit H.265 60Hz videos, which is not the case for your card anyway because it doesn't have full hardware decoding for H.265/HEVC.
I have a GTX 960.
NVIDIA CUVID works the same as DXVA2 (native) in terms of CPU usage (very low on both), but somehow it gives weird colors; for instance, skin tones are a bit reddish and less detailed.
Edit: For Kepler cards, 347.88 is still the best driver. The latest drivers seem to work well (well, some of them) only on Maxwell cards. I have a GT 640 on another rig and the latest drivers (since 350.12) are causing issues of all kinds.

Last edited by edigee; 5th July 2015 at 10:09.
edigee is offline   Reply With Quote
Old 5th July 2015, 10:10   #31515  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,812
Quote:
Originally Posted by SecurityBunny View Post
I don't think my GTX 780 classified is considered a mid/low level graphics card. Lately my rendering times have been all over the place, much higher than normal. I think madVR might be putting my card in a lower clock state since the last few updates. Is there any way to stop it from downclocking the card after a minute of playback besides running it at full power constantly?
Yes, set power profile to maximum performance in the Nvidia control panel (but just for MPC HC, not globally). Then you won't have maximum clock either, but GPU should stay in boost state.
But why would you want to do this? The GPU should choose a higher clock fast enough.

Quote:
Originally Posted by SecurityBunny View Post
Also, what is the recommended hardware decoder to use with madVR in LAV video settings?
There aren't any drawbacks over CPU when using DXVA2 CB, LAV is doing this very efficiently.
However, maybe you don't want to enable it for 4k since the Kepler VPU isn't fast enough for 4k with high bitrates.
aufkrawall is offline   Reply With Quote
Old 5th July 2015, 11:09   #31516  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by edigee View Post
DXVA2(native). Or NVIDIA CUVID. DXVA2(copy-back) I only use for 10bit h265 60Hz videos ,which is not the case for your card because it doesn't have full hardware decoding for h265/HEVC.
I have GTX 960.
NVIDIA CUVID is working the same as DXVA2(native) in terms of CPU usage(very low on both) but somehow it gives weird colors ,for instance skin color tones are a bit reddish and less detailed.
Edit:For Kepler cards 347.88 is still the best driver. The last drivers seems to work good(well ,some of them) only for Maxwell cards. I have a gt 640 on another rig and the latest drivers(since 350.12) are causing issues of all kind.

CUVID technically does the same as DXVA2 copy-back; it just forces the GPU into a high power state, which totally defeats the purpose of using a hardware decoder, and to be honest I think CUVID is totally worthless these days. And the reddish tint would be a bug; do you have a screenshot with the OSD?
huhn is offline   Reply With Quote
Old 5th July 2015, 11:56   #31517  |  Link
SecurityBunny
Registered User
 
Join Date: Jul 2013
Posts: 76
Quote:
Originally Posted by aufkrawall View Post
Yes, set power profile to maximum performance in the Nvidia control panel (but just for MPC HC, not globally). Then you won't have maximum clock either, but GPU should stay in boost state.
But why would you want to do this? The GPU should choose a higher clock fast enough.
Unfortunately that is exactly what I would like to avoid. Ideally I'd like to keep power usage and temperature down, not run the card at maximum power for the duration of an entire video. I've been aiming for a rendering time under the vsync interval for smooth playback. When I first start a video, rendering time is 8ms with GPU usage at ~20%, clocked at 1110 MHz. After a minute or so of playing the content, rendering time becomes 18ms with GPU usage spiked to ~45% at 666 MHz.

I don't recall having this problem with the GPU throttling a few months back.

Quote:
Originally Posted by edigee View Post
DXVA2(native). Or NVIDIA CUVID.
NVIDIA CUVID is working the same as DXVA2(native) in terms of CPU usage(very low on both) but somehow it gives weird colors ,for instance skin color tones are a bit reddish and less detailed.
Quote:
Originally Posted by aufkrawall View Post
There aren't any drawbacks over CPU when using DXVA2 CB, LAV is doing this very efficiently.
However, maybe you don't want to enable it for 4k since the Kepler VPU isn't fast enough for 4k with high bitrates.
Quote:
Originally Posted by huhn View Post
cuvid does technically the same as dxva copyback it just force the GPU in high powerstate which totally beats the reason to use a harware decoder and to be honest i think CUVID is totally worthless these days. and the reddish would be a bug do you have a screen with OSD ?
Thanks. I've been using no hardware decoder for years, since I read somewhere that it wasn't recommended for use with madVR. If there isn't a problem with it decoding 10-bit content nowadays, I'll go ahead and enable DXVA2 (native), if that is the best option of the three for quality and speed.

Is there much of a difference between native and copy-back?

Quote:
Originally Posted by edigee View Post
Edit:For Kepler cards 347.88 is still the best driver. The last drivers seems to work good(well ,some of them) only for Maxwell cards. I have a gt 640 on another rig and the latest drivers(since 350.12) are causing issues of all kind.
Unfortunately I cannot downgrade driver versions since I am on Windows 10. It automatically forces updates, so I'm stuck on 353.38 for the time being. Fortunately I haven't run into any problems with it.

Quote:
Originally Posted by edigee View Post
DXVA2(copy-back) I only use for 10bit h265 60Hz videos ,which is not the case for your card because it doesn't have full hardware decoding for h265/HEVC.
I have GTX 960.
I'm able to play H.265/HEVC content. I'm assuming this is possible because it falls back to the software decoder, and that full hardware decoding only provides the benefit of being faster?

One final question. Is there a list of what 'trade quality for performance' options are safe to check that have little to no impact on the actual quality?

Last edited by SecurityBunny; 5th July 2015 at 12:20.
SecurityBunny is offline   Reply With Quote
Old 5th July 2015, 12:24   #31518  |  Link
michkrol
Registered User
 
Join Date: Nov 2012
Posts: 167
Quote:
Originally Posted by SecurityBunny View Post
Unfortunately that is the exact thing I would like to avoid. Ideally I'd like to keep power usage and temperature down, not run the card at maximum power for the duration of an entire video. I've been aiming for a rendering time under the vsync interval for smooth playback.
You need to aim below frame interval, not vsync.
If you don't get any dropped frames, I don't see how throttling would be an issue. I get almost 32ms render times with power saving, and playback is rock solid with 24fps videos, smooth motion enabled.
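[Editor's note: the arithmetic behind "aim below the frame interval, not the vsync interval", assuming 24 fps content on a 60 Hz display.]

```python
def budgets_ms(content_fps, refresh_hz):
    """Per-frame render budget (the frame interval) vs. the vsync interval."""
    return 1000.0 / content_fps, 1000.0 / refresh_hz

frame_ms, vsync_ms = budgets_ms(24, 60)
print(f"frame interval: {frame_ms:.1f} ms, vsync interval: {vsync_ms:.1f} ms")
# frame interval: 41.7 ms, vsync interval: 16.7 ms
```

A ~32 ms render time fits comfortably under the 41.7 ms per-frame budget, which is why playback stays smooth even though it exceeds the 16.7 ms vsync interval.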
Quote:
Originally Posted by SecurityBunny View Post
I'm able to play h265/HEVC content. I'm assuming this is possible because it is falling back to the software decoder and that full hardware decoding only provides the benefit of being faster?
If you're using LAV Filters (also MPC-HC's built-in codecs), the decoder falls back to software for formats that your hardware can't decode; that includes 10-bit videos on almost all GPUs.
michkrol is offline   Reply With Quote
Old 5th July 2015, 12:37   #31519  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by SecurityBunny View Post
Thanks. I've been using no hardware decoder for years since I read somewhere that it wasn't recommended to be used with madvr. If there isn't a problem with it decoding 10bit encoding nowadays, I'll go ahead and enable DXVA2 (native) if that is the best option out of the three for quality and speed.
All H.264/H.265 decoders should have the same quality, or the decoder is buggy.
Software is still the best and safest way to decode videos; it simply has better error handling.

Hybrid H.265 decoding looks like a bad joke. I wouldn't use it.

Quote:
Is there much of a difference between native and copy-back?
Native has some limitations and copy-back avoids them.
The performance impact of copy-back on an Nvidia system is totally meaningless; copy-back is just more flexible.
huhn is offline   Reply With Quote
Old 5th July 2015, 13:40   #31520  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by aufkrawall View Post
Yes, it's the AR filter which leads to the darkening.
But without, the ringing is very annoying. The ringing filter of Jinc doesn't have this issue.

Btw: Lanczos upscaling looks terrible with the cartoon sample. Ringing outta hell...
The next build will have super-xbr chroma upscaling either with the strict/aggressive AR or with the high-quality (but slower) madVR AR algo. So please try again with the next build (out in a couple of hours).

Quote:
Originally Posted by SecurityBunny View Post
Perhaps that may explain why present queue doesn't fill all the way initially when using 24hz, but it doesn't explain the bug. The bug occurs on all refresh rates as far as my testing goes.

The bug is when toggling to exclusive fullscreen, then windowed, then back to fullscreen again, the rendering and present queue completely drop and the playback stutters. This only occurs with D3D11 10bit. You need to pause the video and unpause it to get the queues to fill properly again. Using D3D9 10bit or D3D11 8bit, I can freely toggle in and out of exclusive fullscreen without queues not filling.

Plus the fact it takes longer going fullscreen with D3D11 10bit than compared to D3D9 10bit/D3D11 8bit.
Maybe I can see something if you create a debug log from this situation. Try to reproduce the problem quickly, so the log doesn't get too long. Then when the problem occurs, let it stutter for 10 seconds, do not change anything at that point (this is important! otherwise the log will be hard to interpret), then simply close the media player.

Quote:
Originally Posted by leeperry View Post
TYVM for providing xbr-25, I didn't have time yet but I'll recompare it all again tomorrow. This said, I was plenty happy with NEDI+SR in 88.13 and I'm afraid that the EE in xbr and NNEDI3 is just part of their design, I mean you want sharper edges, you got them, duh... NEDI looks more natural and seems to be less aggressively seeking edges; different methods for different needs IMO. NEDI is perfect for 720p@1080p, xbr50 for tiny videos, but ideally I would like to find an in-between for untouched DVDs that require something sharp but not overly so.
So I can remove super-xbr-25 again?

Please post screenshots that show where/why you like NEDI better than super-xbr. Thanks.

Quote:
Originally Posted by aufkrawall View Post
I assume that NNEDI3 will never be really replaced in terms of quality (if quality is defined as the absence of aliasing and ringing and cleanest reconstruction of lines).
Don't give up hope just yet. I mean I've no idea, maybe you're right. Or maybe not, we'll see...

Quote:
Originally Posted by littleD View Post
Hello, sorry if thats naive question but how about implementing into madvr option of dynamically set a scaling algorithm basing on gpu usage.
Maybe at some time in the future, but not any time soon. Too many other things to do first.

Quote:
Originally Posted by leeperry View Post
-I still far prefer NEDI+SR for 720p@1080p because it looks natural and just really great IMHO
Screenshots?

Quote:
Originally Posted by leeperry View Post
-Actually I'm cool with the current stock settings of SR, I mean 2 passes don't seem to be enough and 4 too much, 0.75 seems to hit the spot not being too soft or too sharp, 0.10 softness looks equally good because 0.05 is edgier and 0.15 too blurry and HQ disabled looks a hell lot better too yay! When I enable HQ the "watching through a window" feeling disappears, it looks dull and sorta noisy....really hate the look, major bottleneck at work here......"tout ça pour ça"
Sorry to say, but the default settings are sharpness 1.0 and softness 0.0. Probably you still had the old defaults from the old SuperRes algo stored. The current plan is to remove softness. Also, HQ enabled currently seems to be better in several cases, although maybe worse in some others. But the current tendency is in favor of HQ enabled.

If you have examples of where HQ looks worse, please show screenshots - thanks!

Quote:
Originally Posted by leeperry View Post
I fully agree that if anything's currently lacking in mVR it's an sxbr strength knob or at least more steps because 37 might be right up my alley and 87 yours
Not sure who you're agreeing with here, maybe yourself? But sorry, no knob planned for now.

Quote:
Originally Posted by oddball View Post
I have some confusion about the fact that there is now sharpening and Super res in 3 different places. If I am using one for upscaling and one for native res do they cross each other? For instance if I tick sharpening under Image Enhancements does it also affect chroma upscaling and/or upscaling refinement? Which order should I be using them in for which settings?
Consider the current builds as a work in progress in terms of sharpening and SuperRes. We're still trying to figure out the optimal parameters and stuff. So nobody right now can tell you exactly which is the "optimal" setting.

Quote:
Originally Posted by oddball View Post
Also. Since there is now a sharpening option can we have a luma denoiser (with sizer and strength) in future builds?
Sure, if you write one?

Quote:
Originally Posted by Dlget View Post
My specs
i5 2500k with gtx 960 , display aoc e2352phz
Why i'm getting dx3d 8bit?
& what color output i should use in nvidia setting? YCbCr or RGB in Ful range???
What should i use for my display 8bpc or 12 bpc?
i watch anime frequently & is my setting ok for it
Quote:
Originally Posted by Dlget View Post
Can anyone suggest best setting for gtx 960 OC ,i5 2500k,8 GB RAM.
I'm new to this things
These questions are asked a lot. Most users here are tired of answering the same questions every week, so that's why you haven't gotten many (or any?) replies yet. It might make sense to look for a madVR guide; there are several of them, some better and some worse.

FWIW, you could simply try using the default settings and check if they work alright. If they do, you can play with the settings (e.g. image upscaling or doubling) and check if your GPU can handle them and if you can see a difference or not.

Quote:
Originally Posted by XMonarchY View Post
Why is it I get 0 issues during playback on my 60Hz 1080p HDTV, but I do get crazy number of presentation glitches on my 1080p 120Hz monitor, using identical settings? Average stats rendering time is 15ms and present time is below 6-8 or so in both cases. Max stats rendering time often switches between 16ms and 33ms and present time usually stays at about 33ms, but at times goes to 140ms. I get these presentation glitches only in Exclusive mode, although I tried to disable "present several frames in advance", but that made no difference. Turning off Direct3D 11 made no difference. Enabling/Disabling "present a frame for every Vsync" also made no difference. I also tried to tone down settings to make it easier on GPU (no NNEDI3, no SuperRes, etc), but that also made no difference!
Do you have smooth motion FRC enabled? If so try to disable it.

OS? GPU?

Quote:
Originally Posted by MysteryX View Post
Agree. NEDI+SR combine very well together. Jinc+SR doesn't work well. I like that XBR has a very low performance cost compared to other algorithms, but it doesn't do nearly as good as NEDI+SR.
Screenshots?

Quote:
Originally Posted by Arm3nian View Post
Madshi, you should write a madVR benchmark tool like SVP has. Then all the users can upload their results to a public spreadsheet. It would help troubleshoot performance problems by allowing comparisons and give a general rule of what to expect from a certain GPU/machine with different settings.
Quote:
Originally Posted by Asmodian View Post
madVR is changing fast so it isn't time for standard benchmark tools yet.
^

Quote:
Originally Posted by yukinok25 View Post
Just wanted to say that the latest version is absolutely astonishing!
The image quality has improved visibly and I am using the same settings as always, no issues whatsoever.

Madshi is there a way I can donate something for this project? Do you need or accept donation?
Thanks. At the moment I don't accept donations just yet. Plan to start doing that when madVR reaches v1.0, which is still some time away...

Quote:
Originally Posted by omarank View Post
Yes, super-xbr has improved a lot in the latest version. It may replace Jinc now, but still in some cases I find Jinc a tad better due to a more natural look. Please open this image and toggle between Jinc and super-xbr. Avatar movie can be a good sample too.
That's a 4K image. Doubling that with super-xbr results in 8K. On my 1680x1050 LCD display I've no idea where to look in that image for differences. Can you create a screenshot comparison, with maybe a part extracted where the difference is especially strong in favor of Jinc?

Quote:
Originally Posted by digby View Post
I'm getting lockups after updating to the latest version of madvr. it was either madvr or strongene hevc decoder i recently installed. disabled strongene and lockups still occur. playing hevc with lav, ffdshow raw, & svp on intel gpu.
Is the media player itself also locked up (e.g. menu doesn't open, buttons don't even move when being pressed etc)?

Quote:
Originally Posted by SecurityBunny View Post
Is debanding supposed to be so performance-intensive? With it off, my rendering time is 10ms. With it on high/high (or even low/high), rendering time is at 30-31ms. Enabling 'don't analyze gradient angles for debanding' drops the rendering time to 20ms.
Debanding with gradient angle analysis is relatively computation-heavy. The old madVR builds didn't analyze gradient angles for the "high" debanding preset; the latest build does. Maybe you're used to the old "high" preset, which of course was faster? You can get it back simply by checking the "don't analyze gradient angles" tweak. Of course the debanding vs detail-loss ratio improves if you do let madVR analyze the gradient angles.
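[Editor's note: to make the trade-off concrete, here is a deliberately crude 1-D deband sketch. It is purely illustrative; madVR's actual algorithm, with or without gradient-angle analysis, is far more sophisticated. The idea: smooth only where local steps are small, so banding flattens while genuine edges survive.]

```python
def deband_1d(row, threshold=4):
    """Toy 1-D deband: pull a pixel toward its neighbours' average only
    when both local steps are below `threshold`. Small quantisation steps
    (banding) get smoothed; large steps (real edges) are left alone."""
    out = list(row)
    for i in range(1, len(row) - 1):
        left, mid, right = row[i - 1], row[i], row[i + 1]
        if abs(mid - left) < threshold and abs(mid - right) < threshold:
            out[i] = (left + mid + right) / 3
    return out

banded = [10, 10, 10, 12, 12, 12]   # a small quantisation step (banding)
edged  = [10, 10, 10, 60, 60, 60]   # a genuine edge
print(deband_1d(banded))            # the step gets smoothed out
print(deband_1d(edged))             # the edge is preserved
```

Even in this toy form, the per-pixel neighbourhood tests hint at why better edge analysis (more neighbours, more directions) costs more render time.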

Quote:
Originally Posted by SecurityBunny View Post
I don't think my GTX 780 classified is considered a mid/low level graphics card. Lately my rendering times have been all over the place, much higher than normal. I think madVR might be putting my card in a lower clock state since the last few updates.
Possible. I had implemented a few smaller performance improvements. And "high" deband got slower, see above. The performance improvements were really small, though, nothing dramatic (other than super-xbr and FineSharp).
madshi is offline   Reply With Quote