Old 28th May 2015, 09:03   #30441  |  Link
Xaurus
Registered User
 
Join Date: Jun 2011
Posts: 288
I get the compiler error with the latest 88.10, Win 7 x64 running 32-bit madvr. Nvidia drivers 350.12.
__________________
SETUP: Win 10/MPC-HC/LAV/MadVR
HARDWARE: Fractal Design Node 804 | Xeon E3-1260L v5 | Supermicro X11SSZ-TLN4F | Samsung 2x8GB DDR4 ECC | Samsung 850 EVO 1TB | MSI GTX 1650 Super | EVGA G2 750
Xaurus is offline   Reply With Quote
Old 28th May 2015, 09:05   #30442  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by huhn View Post
Ok, now things are getting interesting.

I updated to 0.88.10 again and tried 64 bit first, and it was working for both 32 and 64 bit again. But don't worry, I found a way to break 64 bit again.

After this I installed 0.88.8 again and then went back to 0.88.10 and started 32 bit first this time. Now it is broken again for 64 bit.

So I guess it has something to do with the 32 bit version of OpenCL that is now used.
And what happens if you go back to v0.88.8 and test that with the 32bit madVR first? I suppose you'll probably get a black screen then, too?

Quote:
Originally Posted by ryrynz View Post
On my HD4000 I get some black screen blinking and frame judder when switching to fullscreen windowed or FSE; pretty sure it's been that way since the D3D11 options came out.
I think that's a bug in the Intel GPU driver. Doesn't happen with AMD or NVidia. However, it doesn't happen on my HD4000, either. So it's hard for me to work on this. I'll get a newer laptop with Intel GPU later this year, though, then I can investigate this in depth.

Quote:
Originally Posted by 6233638 View Post
I don't remember it causing problems for Mitchell-Netravali.
Though it is a low-ringing algorithm, there is still some ringing which persists that AR took care of.
Ah, thanks. So I'll keep it for Mitchell.

Quote:
Originally Posted by 6233638 View Post
Because the goal of chroma scaling is to try to have the chroma image match the luma image as much as possible, so that it does not bleed outside the lines etc.
If you always scale it up 2x to match the luma resolution, and then scale that resulting image as one, then your chroma scaling results will always be the same.
There's nothing in madVR that tries to match the chroma channel to the luma channel atm, except if you use SuperRes for chroma. So I see no quality benefit to be had by scaling chroma to luma resolution first. Except if downscaling in R'G'B' produces better results than downscaling in Y'CbCr, which I'm not 100% sure about.

But please do try to find a difference via screenshots! You can use v0.88.8 and v0.88.10 to compare. If you can find a quality difference in a screenshot, I'll definitely adjust my code accordingly. (Please don't test with exactly 50% right now, though, see below).

Quote:
Originally Posted by 6233638 View Post
If you scale chroma to an arbitrary resolution, rather than matching it to the luma resolution first, the results will vary depending on your source resolution (I suppose that could be handled via presets now) and output resolution.
I'm not sure I agree. Can you show a difference in screenshots?

Quote:
Originally Posted by 6233638 View Post
In theory, if I'm watching 4K with linear light scaling disabled, would that mean the chroma image is basically being displayed unscaled?
An exact 50% downscale is a special case. Currently in that situation I'm just shifting the chroma channel by 0.5 pixel using bilinear interpolation. It's on my to-do list to revisit this special case; probably I'll shift the image by using the selected chroma upscaling algorithm instead, or something like that.
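To illustrate, a 0.5 pixel bilinear shift boils down to averaging each pair of neighbouring samples, since at an offset of 0.5 both neighbours get equal weight. A minimal sketch (my illustration, not the actual madVR shader code):

Code:
    #include <vector>

    // Shift one row of chroma samples by half a pixel via bilinear
    // interpolation: out[x] = 0.5 * (in[x] + in[x+1]), clamped at the edge.
    std::vector<float> shiftHalfPixel(const std::vector<float>& row)
    {
        std::vector<float> out(row.size());
        for (size_t x = 0; x < row.size(); ++x) {
            size_t x1 = (x + 1 < row.size()) ? x + 1 : x;
            out[x] = 0.5f * (row[x] + row[x1]);
        }
        return out;
    }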

Quote:
Originally Posted by ryrynz View Post
That's right... it just seemed odd at the time; I think I've gotten used to seeing options in MPDN for those... anyway.
I probably should've re-read the Jinc removal change. I thought it was just 8... but 4 too? =/ There was nothing wrong with 4, IMO.
There was no real quality advantage using Jinc4. It didn't have much stronger ringing, but it also wasn't really noticeably sharper or less aliased. Basically Jinc4 looks almost identical to Jinc3, while being noticeably slower. That's why I removed Jinc4. Makes no sense to waste performance on Jinc4 if it looks (virtually) identical to Jinc3.

If you have a different opinion, please show me a screenshot comparison where Jinc4 looks noticeably better than Jinc3, and I'll add Jinc4 back in immediately. If it's just a minor improvement in sharpness, though, maybe using LumaSharpen with very low values would get an even bigger quality improvement with less performance loss?

Quote:
Originally Posted by ryrynz View Post
Madshi are the scaling options in madVR displayed in that particular order for a reason?
The algos are sorted by performance. Up: fast. Down: slow. And those that perform the same are grouped logically. E.g. Catmull-Rom is the same as Bicubic50, IIRC, so they are next to each other. And SoftCubic and Bicubic are closely related, so they're next to each other.
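For those wondering why Catmull-Rom and Bicubic50 coincide: both belong to the Mitchell-Netravali (B,C) cubic family, and Bicubic50 presumably corresponds to B = 0, C = 0.50, which is exactly the Catmull-Rom spline. A sketch of the kernel (illustration only, not madVR's shader code):

Code:
    #include <cmath>

    // Mitchell-Netravali (B,C) cubic kernel: weight for a sample at offset x.
    float bcSpline(float x, float B, float C)
    {
        x = std::fabs(x);
        if (x < 1.0f)
            return ((12 - 9*B - 6*C) * x*x*x
                  + (-18 + 12*B + 6*C) * x*x
                  + (6 - 2*B)) / 6.0f;
        if (x < 2.0f)
            return ((-B - 6*C) * x*x*x
                  + (6*B + 30*C) * x*x
                  + (-12*B - 48*C) * x
                  + (8*B + 24*C)) / 6.0f;
        return 0.0f;
    }

    // bcSpline(x, 0.0f, 0.5f)     -> Catmull-Rom ("Bicubic50")
    // bcSpline(x, 1/3.0f, 1/3.0f) -> Mitchell-Netravali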

Quote:
Originally Posted by baii View Post
For SuperRes and LumaSharpen, are we expecting to have low/mid/high options?
Yes, definitely. But we'll need to do some serious testing first to find proper presets for low/mid/high.

Quote:
Originally Posted by webs0r View Post
Looks like 2-3% difference - on a 720p 60fps test file, I get 43% load with v0.88.8 vs 40-41% with v0.88.10.
So performance has improved by roughly 6% for you (a 2.5 point drop from a 43% load). Not dramatic, but better than nothing...

Quote:
Originally Posted by shaolin95 View Post
I am trying to figure out why my output with madVR at 10bit is clearly better.
Might be that your driver is dithering. I don't know.

@huhn, at some point you said you knew registry tweaks to force the driver to not dither, didn't you? Can you write up a small summary on those registry keys? Do you know that only for NVidia or also for AMD and Intel? Thanks!

Quote:
Originally Posted by Asmodian View Post
I think I found a bug in the new independent chroma scaling mode. If I enable SuperRes for chroma when chroma is only upscaled to the target resolution and luma is downscaled, I get pink & green video.

edit: Interestingly this still happens when scaling below the chroma resolution. Tested with Catmull-Rom+AR downscaling, Mitchell-Netravali or Jinc+AR chroma upscaling. Win 8.1 x64, Nvidia 352.86, madVR v0.88.10, GK110.
Ah yes, will have to look at that. When downscaling below chroma resolution, SuperRes should be disabled for chroma. When chroma is upscaled, SuperRes is supposed to still work.
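In other words, the intended rule is simply that SuperRes for chroma only runs while chroma is actually being upscaled. A trivial sketch of that check (my illustration, not the actual renderer code):

Code:
    // SuperChromaRes only makes sense while chroma is being upscaled.
    bool superChromaResActive(float targetWidth, float chromaWidth)
    {
        return targetWidth > chromaWidth;  // downscaled chroma -> disable
    }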

Quote:
Originally Posted by Orf View Post
If device creation with the debug flag succeeds, then the debug layer is present. But this check is done by the debug runtime at the point when the entire process terminates (the player process in my case). So you need to check the debug output after the player process terminates under the debugger. Another option is to use the DebugView utility (I never tried it myself). And these messages are useless for finding out exactly what is not released; they only indicate the fact that it happens. So the simplest way is the same old method: double check the code in the places where you release your D3D11 stuff (textures, render target views etc.); maybe you just forgot to release something.
I had tried stopping the media player, and I had tried both the MSVC++ debugger debug log and also DebugView, but got no complaints. But now I've double checked the code, and there really was something I forgot to release. I've no idea why the debug stuff doesn't seem to work on my PC. Anyway, thanks for your report; the problem should be fixed in the next build.
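For anyone else chasing D3D11 leaks: the standard pattern is to create the device with D3D11_CREATE_DEVICE_DEBUG and then dump the live objects, roughly like this (documented API, a simplified sketch rather than my actual code):

Code:
    #include <d3d11.h>
    #include <d3d11sdklayers.h>

    // Requires the device to have been created with D3D11_CREATE_DEVICE_DEBUG.
    void reportLiveObjects(ID3D11Device* device)
    {
        ID3D11Debug* debug = nullptr;
        if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                             (void**) &debug)))
        {
            // Dumps every live D3D11 object (textures, render target views
            // etc.) to the debugger output; leaked objects show RefCount > 0.
            debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
            debug->Release();
        }
    }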

Quote:
Originally Posted by Zachs View Post
I don't think that's a valid usage of SuperChromaRes - FWIW MPDN prevents this from happening by applying SuperChromaRes before scaling the whole image down to target size. I'd imagine it's a madVR bug.
Applying SuperChromaRes if the downscaling factor is so large that even the chroma channel is downscaled makes no sense, of course. However, if luma is downscaled, but chroma is upscaled, SuperChromaRes should still work ok, I think. In any case, both situations seem to be buggy in madVR right now, will fix that for the next build.

Quote:
Originally Posted by James Freeman View Post
I find that High is much too aggressive and removes the overall shape of the banded "thing" (ahem...); in a word, it blends too much.
Shiandow's keeps the overall shape better and does this in a less destructive way (0.50, 0.0, grain off).
I find the default settings of Shiandow's script are somewhere between Mid and High.
Comparing Mid to Shiandow's at 0.5, they look much closer to one another. At 0.8, it is comparable to High.

Basically, what I (and everybody else?) am doing here is trying to match the two algos to one another... I think it is pointless.
In the end they are practically the same if set correctly.
Ok, thanks.
madshi is offline   Reply With Quote
Old 28th May 2015, 09:07   #30443  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Anima123 View Post
With the latest 0.88.10, weird things happened. When using NEDI + SuperRes for upscaling while playing back 765p videos, the screen suddenly became brighter and dropped frames popped up. I restarted the video by quitting and restarting mpc-hc64, and finished the rest of the video.

A few minutes after quitting the player, the screen recovered to the original.

I guess there's something odd with the Optimus system; the rig I am using is an NVidia 880M + HD 4600 Optimus.
Strange. Can you reproduce that reliably by following certain steps?

Quote:
Originally Posted by Xaurus View Post
I get the compiler error with the latest 88.10, Win 7 x64 running 32-bit madvr. Nvidia drivers 350.12.
Can I see a screenshot of the complaint, please?
madshi is offline   Reply With Quote
Old 28th May 2015, 09:07   #30444  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by Asmodian View Post
It looks like chroma is only upscaled to the target resolution if downscaling to below 65% when chroma upscaling is set to anything but Jinc or NNEDI3. When set to Jinc or NNEDI3 the threshold is 85%.
I'm having kind of a hard time parsing your sentence. So say you're downscaling 1080p to 720p, a factor of 66.67%, not using Jinc or NNEDI3: would the new path kick in? What about with Jinc/NNEDI3?
Ver Greeneyes is offline   Reply With Quote
Old 28th May 2015, 10:16   #30445  |  Link
michkrol
Registered User
 
Join Date: Nov 2012
Posts: 167
Thanks for the new version.
Image doubling is working again on my Geforce 750Ti (I had compiler errors with v0.88.9).
No bugs in v0.88.10 for me in my (somewhat limited) testing.
michkrol is offline   Reply With Quote
Old 28th May 2015, 10:21   #30446  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by madshi View Post
If you have a different opinion, please show me a screenshot comparison where Jinc4 looks noticeably better than Jinc3, and I'll add Jinc4 back in immediately.
After taking another look, yeah, fair call. Jinc 4 taps can look worse than 3 and at best offers a very minor difference. 8 taps offers more, but the cost is very high; surprisingly, both do worse with low-res content than 3 taps.
ryrynz is offline   Reply With Quote
Old 28th May 2015, 10:24   #30447  |  Link
Orf
YAP author
 
Join Date: Jul 2014
Location: Russian Federation
Posts: 111
Quote:
Originally Posted by madshi
I've no idea why the debug stuff doesn't seem to work on my PC
Just a guess: it comes with the MS Win SDK. Mine is the 8.1 version. Maybe yours is older and doesn't have that feature?
Orf is offline   Reply With Quote
Old 28th May 2015, 10:31   #30448  |  Link
SecurityBunny
Registered User
 
Join Date: Jul 2013
Posts: 76
Ran into a bug with v0.88.10.

When using D3D11 for presentation with "present a frame for every VSync", the rendering queues idle at very low numbers.

decoder queue: 22-25 / 24
subtitle queue: 21-24 / 24
upload queue: 17-20 / 20
render queue: 1-5 / 20
present queue: 1-9 / 15
Average rendering time ~9ms.

After unchecking Direct3D 11 for presentation, restarting MPC-HC and going fullscreen, all queues are stable.

decoder queue: 23-24 / 24
subtitle queue: 23-24 / 24
upload queue: 19-20 / 20
render queue: 19-20 / 20
present queue: 15-16 / 16
Average rendering time ~18ms.

10 bit bitdepth set
deband high/high
Chroma Upscaling: Jinc AR
Image Upscaling: Jinc AR
Image Downscaling: Catmull-Rom AR/LL
fullscreen exclusive mode
CPU queue size: 24
GPU queue size: 20
frames in advance: 16
smooth motion: only if
ordered dithering
No trade quality options checked.

Windows 10 build 10122
Nvidia 350.12
madVR 0.8.8.10
MPC-HC 1.7.8.230
XySubFilter 3.1.0.741

Last edited by SecurityBunny; 28th May 2015 at 10:37.
SecurityBunny is offline   Reply With Quote
Old 28th May 2015, 10:36   #30449  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Ver Greeneyes View Post
I'm having kind of a hard time parsing your sentence. So say you're downscaling 1080p to 720p, a factor of 66.67%, not using Jinc or NNEDI3: would the new path kick in? What about with Jinc/NNEDI3?
The factor depends on the chroma upscaling algorithm. You can test it out yourself by playing a video and then zooming down step by step until the madVR OSD changes from "image <" to "luma <".
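Going by Asmodian's numbers above (65% for most algos, 85% for Jinc/NNEDI3; his measurements, not constants I'm confirming here), the decision would boil down to something like this sketch:

Code:
    // Sketch of the observed behaviour, based on Asmodian's measurements.
    bool chromaScaledStraightToTarget(float targetWidth, float sourceWidth,
                                      bool jincOrNnedi3Chroma)
    {
        float factor = targetWidth / sourceWidth;  // < 1.0 means downscaling
        float threshold = jincOrNnedi3Chroma ? 0.85f : 0.65f;
        return factor < threshold;  // OSD shows "luma <" instead of "image <"
    }

So for your 1080p -> 720p example (66.67%), the new path would not kick in without Jinc/NNEDI3 (66.67% > 65%), but would kick in with them (66.67% < 85%) - if those measured thresholds are accurate.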

Quote:
Originally Posted by ryrynz View Post
After taking another look, yeah, fair call. Jinc 4 taps can look worse than 3 and at best offers a very minor difference. 8 taps offers more, but the cost is very high; surprisingly, both do worse with low-res content than 3 taps.
Good to hear we agree.

Quote:
Originally Posted by Orf View Post
Just a guess: it comes with the MS Win SDK. Mine is the 8.1 version. Maybe yours is older and doesn't have that feature?
The D3D11 documentation says that if the proper SDK is not installed, device creation with the DEBUG flag fails. But it succeeds for me. So according to the doc that should mean that I have the proper SDK installed. Anyway, I think I found the bug.

Quote:
Originally Posted by SecurityBunny View Post
Ran into a bug with v0.88.10.

When using D3D11 for presentation with "present a frame for every VSync", the rendering queues idle at very low numbers.

decoder queue: 22-25 / 24
subtitle queue: 21-24 / 24
upload queue: 17-20 / 20
render queue: 1-5 / 20
present queue: 1-9 / 15
Average rendering time ~9ms.

After unchecking Direct3D 11 for presentation, restarting MPC-HC and going fullscreen, all queues are stable.

decoder queue: 23-24 / 24
subtitle queue: 23-24 / 24
upload queue: 19-20 / 20
render queue: 19-20 / 20
present queue: 15-16 / 16
Average rendering time ~18ms.

10 bit bitdepth set
deband high/high
Chroma Upscaling: Jinc AR
Image Upscaling: Jinc AR
Image Downscaling: Catmull-Rom AR/LL
fullscreen exclusive mode
CPU queue size: 24
GPU queue size: 20
frames in advance: 16
smooth motion: only if
ordered dithering
No trade quality options checked.

Windows 10 build 10122
Nvidia 350.12
I think this only occurs with 10bit and only on some NVidia drivers. From what other users wrote, you can avoid this problem by reducing the number of prepresented frames to 8 (or maybe 10).
madshi is offline   Reply With Quote
Old 28th May 2015, 10:48   #30450  |  Link
SecurityBunny
Registered User
 
Join Date: Jul 2013
Posts: 76
Quote:
Originally Posted by madshi View Post
I think this only occurs with 10bit and only on some NVidia drivers. From what other users wrote, you can avoid this problem by reducing the number of prepresented frames to 8 (or maybe 10).
That seems to be it, thanks. Changing the bit depth to 8 bit fixed the rendering queues. Alternatively, reducing the prepresented frames to 6 allowed the queues to fill again (8+ didn't work). Hopefully it won't be an issue in later GeForce drivers.

Two quick questions.

I don't suppose it is possible to have 10 bit output with windowed mode? To my eyes, fullscreen windowed mode seems to be smoother for video playback and is more responsive.

Is there any significant benefit to higher cpu/gpu queues and presenting more video frames in advance over something small like 4/4/4?

Last edited by SecurityBunny; 28th May 2015 at 11:10.
SecurityBunny is offline   Reply With Quote
Old 28th May 2015, 10:53   #30451  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
Quote:
Originally Posted by SecurityBunny View Post
I don't suppose it is possible to have 10 bit output with windowed mode?
No, that's currently not possible. Windows itself would need to run in 10-bit mode, and it doesn't support that.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 28th May 2015, 11:18   #30452  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Has anyone seen any difference on their 10 bit panel when using error diffusion, now that madVR allows it?
ryrynz is offline   Reply With Quote
Old 28th May 2015, 11:22   #30453  |  Link
mindz
Registered User
 
Join Date: Apr 2011
Posts: 57
I'm using the Intel HD4600 iGPU and therefore use DXVA upscaling (great results, IMO). Am I understanding correctly that for me 10 bit output will never work, because DXVA scaling is always done in 8 bit? I've got a fairly new Sony TV (50W829B) and think it's 10 bit capable. What would be the best settings for me bit-wise? I don't want any conversions from 8 to 10 and back to 8 bit if it's not needed/not doing anything.
mindz is offline   Reply With Quote
Old 28th May 2015, 11:27   #30454  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Madshi, is that NNEDI3 pixel shift going to be fixed up? It's been there for a while.
ryrynz is offline   Reply With Quote
Old 28th May 2015, 11:30   #30455  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
What do we need to output a 12 bit picture? I mean true 12 bits from madVR, not zero padding from the video card.
Qaq is offline   Reply With Quote
Old 28th May 2015, 11:32   #30456  |  Link
omarank
Registered User
 
Join Date: Nov 2011
Posts: 187
My preferences haven’t changed regarding debanding algorithms. I prefer the “high” preset, with the trade quality option “don’t analyze gradient angles for debanding” selected, over the latest Shiandow’s script. Without the trade quality option selected, I see a slight blurring in the images. It may be due to stronger debanding.

I find that madVR’s debanding algos make the images look better than Shiandow’s algo does. It is perhaps because madVR’s debanding is monochrome, as was pointed out by 6233638. In the case of dithering algos too, I prefer mono dithering. My eyes probably like mono debanding/dithering better.
omarank is offline   Reply With Quote
Old 28th May 2015, 11:34   #30457  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by SecurityBunny View Post
Is there any significant benefit to higher cpu/gpu queues and presenting more video frames in advance over something small like 4/4/4?
Bigger queues / more prepresented frames mean more protection against frame drops when your PC gets busy doing something in the background. They might also be useful when using smooth motion FRC, or when re-rendering frames after a fade is detected (for debanding).

Quote:
Originally Posted by mindz View Post
I'm using the Intel HD4600 iGPU and therefore use DXVA upscaling (great results, IMO). Am I understanding correctly that for me 10 bit output will never work, because DXVA scaling is always done in 8 bit? I've got a fairly new Sony TV (50W829B) and think it's 10 bit capable. What would be the best settings for me bit-wise? I don't want any conversions from 8 to 10 and back to 8 bit if it's not needed/not doing anything.
Scaling will be 8bit. Some of the other processing after scaling might produce more bits, though, e.g. calibration stuff, levels conversion, sharpening etc. So 10bit could still be useful. The best setting is the one that produces the best quality, which nobody can judge for you without seeing your specific GPU -> display setup. You'll have to let your eyes be the judge.
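A simple example of where the extra bits can come from: a limited-to-full range levels conversion of an 8bit value practically never lands exactly on an 8bit integer, so a 10bit output can carry part of the fraction (illustration only):

Code:
    // Convert limited range luma (16..235) to full range (0..255).
    float limitedToFull(int y8)
    {
        return (y8 - 16) * 255.0f / 219.0f;
    }
    // e.g. y8 = 100 -> 97.81...; an 8bit output must round/dither this to 98,
    // while a 10bit output can encode part of the remainder directly.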

Quote:
Originally Posted by ryrynz View Post
Madshi, is that NNEDI3 pixel shift going to be fixed up? It's been there for a while.
Fixing it is not a problem, but it can potentially lose a bit of quality. I do fix the shift when you activate SuperRes, because SuperRes works better that way. But otherwise I don't fix it if I don't have to, because not fixing it produces better quality.
madshi is offline   Reply With Quote
Old 28th May 2015, 11:38   #30458  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Qaq View Post
What do we need to output a 12 bit picture? I mean true 12 bits from madVR, not zero padding from the video card.
Windows does not offer any APIs to output 12bit. It supports 16bit output, but in a weird way and the GPU drivers don't handle that well. Because of that it's not supported by madVR.
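For reference, DXGI simply doesn't define a 12bit-per-channel backbuffer format; the nearest documented options are 10bit and 16bit (illustration):

Code:
    #include <dxgi.h>

    // The closest backbuffer formats to "12 bit" that DXGI actually defines:
    const DXGI_FORMAT tenBit     = DXGI_FORMAT_R10G10B10A2_UNORM;  // 10 bit + 2 bit alpha
    const DXGI_FORMAT sixteenBit = DXGI_FORMAT_R16G16B16A16_UNORM; // 16 bit

    // There is no R12G12B12-style format, so any 12bit signal on the wire
    // has to be produced by the GPU driver from one of the above.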

Quote:
Originally Posted by omarank View Post
My preferences haven’t changed regarding debanding algorithms. I prefer the “high” preset, with the trade quality option “don’t analyze gradient angles for debanding” selected, over the latest Shiandow’s script. Without the trade quality option selected, I see a slight blurring in the images. It may be due to stronger debanding.

I find that madVR’s debanding algos make the images look better than Shiandow’s algo does. It is perhaps because madVR’s debanding is monochrome, as was pointed out by 6233638. In the case of dithering algos too, I prefer mono dithering. My eyes probably like mono debanding/dithering better.
Thanks for your feedback!

Can you play with the custom deband settings? You can activate them via a custom keyboard shortcut (see the settings dialog), and then you can modify the settings by using the arrow keys. You can then modify the "angleBoost" and "maxAngle" values to fine tune the angle analysis stuff. Maybe you can find a setting for "high" which you like better than the old and current "high" presets?
madshi is offline   Reply With Quote
Old 28th May 2015, 11:45   #30459  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by madshi View Post
Windows does not offer any APIs to output 12bit. It supports 16bit output, but in a weird way and the GPU drivers don't handle that well. Because of that it's not supported by madVR.
Maybe it's gotten better since then? I have it enabled in MPDN on the 750Ti and I haven't encountered any issues that I know of.

Quote:
Originally Posted by madshi View Post
Fixing it is not a problem, but it can potentially lose a bit of quality. I do fix the shift when you activate SuperRes, because SuperRes works better that way. But otherwise I don't fix it if I don't have to, because not fixing it produces better quality.
Potentially or definitely? If it's an easy fix why not just try it and see?
ryrynz is offline   Reply With Quote
Old 28th May 2015, 12:05   #30460  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,921
Quote:
Originally Posted by ryrynz View Post
Potentially or definitely? If it's an easy fix why not just try it and see?
You have to do an extra scaling operation, so it's not good.
huhn is offline   Reply With Quote