Old 14th January 2018, 11:33   #48301  |  Link
Siso
Soul Seeker
 
 
Join Date: Sep 2013
Posts: 714
Quote:
Originally Posted by nevcairiel View Post
D3D11 decoding is not supported on Windows 7. It requires Windows 8+.
I know, I was asking about dxva2 copy-back
Old 14th January 2018, 11:33   #48302  |  Link
varekai
Suspended for forum rule violations
 
Join Date: Jul 2006
Posts: 528
Quote:
Originally Posted by Asmodian View Post
This is because you aren't currently playing something; those fields show what is being actively used. LAV will fall back to software if the video isn't compatible with hardware decoding.
Played some videos and BD's and still get this:
Active Decoder: <inactive>
Active Hardware Accelerator to use: <none>
Old 14th January 2018, 11:53   #48303  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
In MPC-HC, go to Play -> Filters -> LAV Video to get the active LAV instance.
Old 14th January 2018, 11:56   #48304  |  Link
Blackwalker
Registered User
 
 
Join Date: Dec 2008
Posts: 239
Quote:
Originally Posted by Blackwalker View Post
I agree, but I'm using a 1050 with 2GB and the memory is always full.
Maybe with more memory I can get better results.
I also need to upgrade the CPU and motherboard on my HTPC; I have an i5 950 with a good Asus motherboard and 6GB of RAM.

Anyway, thanks for your reply.
Quote:
Originally Posted by mclingo View Post
Have a think about what you want to achieve first. I have a much faster system than you, but my GFX card is slower, and I'm really happy with my picture quality just using NGU Sharp medium. You may find that after spending a crapload of money and pushing madVR to the max you don't see that much difference in PQ; is it worth the outlay?

If you play everything well enough with the kit you have and you're not getting dropped frames, perhaps just keep what you have.


So which is better, a 1050 Ti 4GB or a 1060 6GB?
I have to order today because I sold my GPU!
Suggestions please
Old 14th January 2018, 12:29   #48305  |  Link
ryrynz
Registered User
 
 
Join Date: Mar 2009
Posts: 3,650
They have a higher model number for a reason. The 1060 is a great card for madVR; get it.
Old 14th January 2018, 12:37   #48306  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
Quote:
Originally Posted by Siso View Post
I know, I was asking about dxva2 copy-back
Unless you want to use a specific GPU for decoding, it makes no difference for DXVA2-CopyBack.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 14th January 2018, 12:43   #48307  |  Link
psyside
Registered User
 
Join Date: Nov 2016
Posts: 46
Quote:
Originally Posted by Blackwalker View Post
So which is better, a 1050 Ti 4GB or a 1060 6GB?
I have to order today because I sold my GPU!
Suggestions please
The 1060 is the best option in the mid-to-high segment, one step up from the 1050 Ti, the best budget card.
Old 14th January 2018, 12:55   #48308  |  Link
Siso
Soul Seeker
 
 
Join Date: Sep 2013
Posts: 714
Quote:
Originally Posted by nevcairiel View Post
Unless you want to use a specific GPU for decoding, it makes no difference for DXVA2-CopyBack.
Thank you
Old 14th January 2018, 15:45   #48309  |  Link
varekai
Suspended for forum rule violations
 
Join Date: Jul 2006
Posts: 528
Quote:
Originally Posted by sneaker_ger View Post
In MPC-HC, go to Play -> Filters -> LAV Video to get the active LAV instance.
Thanks for your input, appreciate it!

Ticking the madVR filter in MPC-HC or PotPlayer won't "stick".
The option is there in the dropdown menu, but it only opens the madVR config window.
Still can't see any activity; I only get this:
Active Decoder: <inactive>
Active Hardware Accelerator to use: <none>
Well, maybe there's no need to dig deeper into this; I'm pleased with what my eyes see so far.
Old 14th January 2018, 15:49   #48310  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
I don't know what you are talking about.

1. Start playing a file.
2. Open Play -> Filters -> LAV Video while the file is playing.

Note that MPC-HC has internal LAV Filters with separate settings. If in doubt, delete the internal ones so you only have the external ones.
Old 14th January 2018, 16:01   #48311  |  Link
varekai
Suspended for forum rule violations
 
Join Date: Jul 2006
Posts: 528
@sneaker_ger
Thanks, my bad, sorry: I didn't tick the right filters; now it's showing correctly in LAV. It looks different in PotPlayer though; I'll fix that.

Got LAV showing in PotPlayer. All is good!

Old 14th January 2018, 16:49   #48312  |  Link
ashlar42
Registered User
 
Join Date: Jun 2007
Posts: 655
Quote:
Originally Posted by cork_OS View Post
I'm using CUVID instead of DXVA2cb due to Adaptive HW Deinterlacing.
Isn't HW deinterlacing the same thing that happens if one uses the madVR option to deinterlace? Isn't madVR selecting the available HW option according to the GPU in use?
Old 14th January 2018, 17:02   #48313  |  Link
nussman
Registered User
 
Join Date: Nov 2010
Posts: 238
Quote:
Originally Posted by ashlar42 View Post
Isn't HW deinterlacing the same thing that happens if one uses the madVR option to deinterlace? Isn't madVR selecting the available HW option according to the GPU in use?
Yes it is.

With CUVID you can get the deinterlaced frames back for further processing (ffdshow, AviSynth etc.), but for madVR this is not needed.
Old 14th January 2018, 17:03   #48314  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,922
I'm pretty sure Nvidia has something better to do than adding two different deinterlacers to their code.

I'm not going to test it with each new driver, but they were the "same" in the past.
Old 15th January 2018, 13:47   #48315  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by mzso View Post
Excruciatingly, I could only make three: #16-19 (17 and 18 are from the same hang, but the latter is after the alert sound, when unusually the player was still hung).

I think MPC-HC has some anti-hang mechanism that activates when I hear the Windows alert sound, which usually resolves the hang. Not always, though. Especially when I try to interact with the GUI it doesn't activate, but then I can't make a freeze report with madVR either.

As for the version information, I don't know; I don't use MPC normally. LAV, it seems, displays less detailed information (0.70.2) in its settings window.

Whether it turns out to be fruitful or not, thanks for the effort! I've been complaining to the PotPlayer/ProgDVB devs about the hangs (maybe even in the LAV thread), but none of them were willing to investigate.
Finally got around to testing this, and I found 2 possible causes of the freezes. One was fixed a couple of days ago by nevcairiel (thanks!); you'll need a nightly LAV build to get the fix. The other issue seems to be caused by MPC-HC's internal subtitle renderer. It goes like this:

1) The MPC-HC main thread wants to close playback. It asks the graph thread to do the dirty work and waits for that to complete.
2) The MPC-HC graph thread obeys and tries to close playback, starting with trying to close the subtitle renderer.
3) Trying to close the subtitle renderer freezes for some reason. It seems that the subtitle renderer is waiting for the subtitle queue to receive new subtitles, and somehow the queue seems stuck.

So you might be able to reduce/fix some of the freezes by using a different subtitle renderer, or maybe by changing the subtitle settings, like queue depth or something.
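For anyone curious, here's a minimal sketch of the wait-on-a-stuck-queue pattern described above, with hypothetical names (this is not MPC-HC's actual code):

Code:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> subtitleQueue;  // stays empty once playback is stopping

void subtitleRendererClose() {
    std::unique_lock<std::mutex> lock(m);
    // Bug: waits for a new subtitle with no timeout and no "closing" flag,
    // so if the queue never receives anything, this never returns.
    cv.wait(lock, [] { return !subtitleQueue.empty(); });
}

void graphThread() {
    subtitleRendererClose();  // steps 2+3: the graph thread blocks here
}

int main() {
    std::thread graph(graphThread);
    graph.join();             // step 1: the main thread waits for the graph thread -> hang
}

The usual fix for this pattern is to wait with a timeout, or to set a "closing" flag and notify the condition variable so the wait can be abandoned.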

Quote:
Originally Posted by ryrynz View Post
Was hoping to see the 04-11 change to AS make it into this version; will it be in the next one?
Which 04-11 change are you referring to? I'm not sure what you mean.

Quote:
Originally Posted by mclingo View Post
Hi madshi, thanks again for your time on this project. Any chance you can answer the AMD card 3D question? If stopping using FSE is a solution to some problems, is there another way you can implement 3D for AMD cards? I know you have a script which turns 3D on and off for Nvidia cards, but the AMD package has no such on/off toggle baked in, unless it's deep in the registry.
Your post is rather unclear to me. You're talking about "the AMD card 3D question" as if I should know what you mean. But I don't.

Quote:
Originally Posted by leeperry View Post
BTW, any chance for more "add grain" steps please?
I'm planning to revisit "add grain" soon.

Quote:
Originally Posted by mclingo View Post
Hi @madshi, do you have any advice re settings for AMD users who need to use FSE? All these hangups are gradually trashing my PC. I've had to power cycle about 50 times in the last 7 days; I've had two hard drives get badly corrupted, one failed altogether, and now my LAN card died today as well. While these may be unrelated, having to power cycle 12 hard drives constantly is taking its toll, I think.

The problem seems to happen mainly if you are in 2160p resolution: it drops into 1080p to play the movie (it always plays it fine), but when you press stop, it stops OK, I see my TV drop out of 3D mode to 2D, then I get a black screen and my PC hangs completely and has to be power cycled.
This sounds like a GPU driver issue. Have you tried reporting this to AMD? You could try different GPU driver versions. Which Windows version is this? Have you tried Windows 8.1? Windows 10 is known to be unstable.

Quote:
Originally Posted by mastrboy View Post
So a weird bug appeared after updating my Nvidia driver: the Nvidia GeForce Experience FPS counter is now displayed during madVR video playback, but only when Exclusive Mode is activated.

Anyone else experiencing this? Or maybe someone knows a quick fix/workaround?
No idea. Try the latest driver 390.65, maybe it helps?

Quote:
Originally Posted by ryrynz View Post
Madshi, each of the ax files have gained a few MB each, that expected?
Yes. The NGU + RCA fusion algorithms consume a LOT of space, because practically every NGU + RCA strength setting is a completely separate algorithm.

Quote:
Originally Posted by chway View Post
I have two monitors. When I minimize MPC-HC/MPC-BE or PotPlayer (no issue with Zoom Player), the playback pauses for a few seconds, and pauses again when I restore the player; sometimes the playback never resumes and I have to kill the player. I noticed this always and only happens when I use the player on the monitor I place to the right in "Select and rearrange displays", no matter which monitor I select to be the main one, no matter which monitor I put to the right. I can minimize/restore just fine on the monitor I place to the left. This happens only when I use madVR as the video renderer.

I'm using Windows 10, but I had the same problem on Windows 7 and 8.1 when I tried a few weeks ago. I tried several driver versions, dozens of madVR versions, dozens of MPC-HC versions... I tried pretty much everything I could set up in the madVR options; it doesn't help. I notice the problem gets worse when I use DXVA: the playback pauses much longer when I minimize or restore the player.

My two monitors have different resolutions:
1) 1920x1080 60hz
2) 1360x768 60hz
Hmmmm... Which GPU are you using? It seems the problem only occurs on the secondary monitor but not on the primary? Is that possible? Or is it really left vs right monitor, and it doesn't matter which monitor is primary and which is secondary?

Quote:
Originally Posted by mclingo View Post
madshi, any chance you could look at this to see if you can find out what is crashing our machines and causing black screens when the resolution changes, for the next release? Obviously it's something to do with Windows 10, but I'm hoping there might be a workaround.
This very much sounds like a GPU driver issue. I'm not sure if I want to add weird workarounds that are only needed for one GPU manufacturer, and maybe only one OS and maybe only a couple of driver versions. Since the problem is most likely caused by AMD drivers, what you should try first is contacting AMD customer support to have them fix their bug.

Quote:
Originally Posted by steakhutzeee View Post
I'm on a Full HD monitor at 60 Hz.

Watching a movie, if I press Ctrl+J I can see that the display goes to 59.999(something), and up to 60.(something).

Why isn't it 'stable'? I'm no expert on this.
madVR measures the refresh rate by constantly reading the GPU VSync scanline position and using a complex mathematical formula to try to calculate the most likely refresh rate. The whole algorithm is not perfectly accurate, but it gets more accurate the longer you wait. So it is expected to fluctuate a lot in the beginning and less and less over time.
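As a rough illustration of the principle, a toy sketch (not madVR's actual formula): time how long the raster takes to wrap back to the top of the screen and average over many periods; the more periods accumulated, the more accurate the estimate.

Code:
#include <d3d9.h>
#include <windows.h>

// Toy refresh-rate estimate: time N raster wraps back to scanline 0.
double EstimateRefreshHz(IDirect3DDevice9* dev, int periods) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    D3DRASTER_STATUS rs;

    auto waitForWrap = [&] {          // busy-wait until ScanLine jumps back down
        UINT last = 0;
        for (;;) {
            dev->GetRasterStatus(0, &rs);
            if (rs.ScanLine < last) return;
            last = rs.ScanLine;
        }
    };

    waitForWrap();                    // align to a wrap
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < periods; ++i)
        waitForWrap();
    QueryPerformanceCounter(&t1);

    double seconds = double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    return periods / seconds;         // accuracy grows with 'periods'
}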

Quote:
Originally Posted by walstib View Post
having an issue with madVR used in conjunction with either MPC-BE or JRiver. When I have 3D enabled via the Nvidia control panel, 3D playback works fine.

However, if I play regular 2D or 4K+HDR content, both MPC-BE and JRiver crash. They work fine if 3D in the Nvidia control panel is unchecked.
Try latest 390.65 drivers, maybe it helps. If not, try Sideeffect's suggestion.

Quote:
Originally Posted by jeanlee20xx View Post
I have two projectors. I use PotPlayer's 3D function to play 2D as SBS, then output each side to its own projector, but if I use madVR, PotPlayer's 3D function is disabled.
I'm not sure what this "play 2d as sbs" does. Are your 2 projectors visible as 2 separate "monitors" in Windows? If so, can't you set Windows to "mirror" mode, so both monitors get the same image?

Quote:
Originally Posted by mclingo View Post
If, like me, you have a mixture of 8 and 10 bit movies, 420 10bit should be better than 444 8bit for the movies which are actually encoded in 10bit; there should be less colour banding and smoother gradations.
Nope. There's no difference in color banding and gradations at all, thanks to dithering. The only difference between 8bit and 10bit output in madVR is that you get a slightly lower noise floor with 10bit.
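A quick way to see why: quantizing a smooth gradient straight to 8 bits produces visible steps, while adding a little noise before rounding trades those steps for a slightly raised noise floor. A toy sketch (madVR's actual dithering is far more sophisticated):

Code:
#include <cstdint>
#include <random>

// Quantize a 16-bit sample to 8 bits, optionally with TPDF random dither.
// Plain truncation bands on smooth gradients; dithering replaces the bands
// with fine noise, i.e. a slightly higher noise floor.
uint8_t Quantize(uint16_t v, bool dither, std::mt19937& rng) {
    double x = v / 257.0;                   // map 0..65535 -> 0..255
    if (dither) {
        std::uniform_real_distribution<double> u(-0.5, 0.5);
        x += u(rng) + u(rng);               // triangular (TPDF) noise, +/- 1 LSB
    }
    double r = x + 0.5;                     // round to nearest
    if (r < 0.0) r = 0.0;
    if (r > 255.0) r = 255.0;
    return static_cast<uint8_t>(r);
}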

Quote:
Originally Posted by Georgel View Post
Thank you! This is exactly what I was looking for! That submenu is lovely, but it didn't say what key to use!
That's because the key is defined/chosen by the media player, even though madVR does the actual screenshotting work.

Quote:
Originally Posted by Georgel View Post
@madshi - Amazing work man. That screenshot was the last thing I really needed to make this work the way I wanted it to!!! Loving it!


Quote:
Originally Posted by arrgh View Post
but the second half shows mostly only a black screen; sometimes audio works, sometimes not; sometimes the player hangs after a short time (~30s); but always (whether it hangs or not) the player can only be closed via the Task Manager;
since the working/not-working files were muxed over several years, I would exclude a muxing problem...

here are the statistics of such a file... the rendering times are extraordinary
The decoder queue seems to be empty. Does it ever go higher than 1-1/16? If not, the decoder is your problem.

Quote:
Originally Posted by Manni View Post
This might be true for SDR, but not for HDR. ST2084 needs at least 10 bits in order not to produce any banding, especially in dark/bright areas, although it's more important in the content than for rendering.
You're absolutely right that bitdepth is more important for content than for rendering. The reason is lossy encoding, which doesn't go well with dithering. I really wish UHD Blu-ray had been 12bit (or even 16bit) instead of 10bit!

But for lossless HDMI transport, 8bit is just fine for HDR/ST2084, as long as you use proper dithering. You won't get *ANY* banding, whatsoever! However, you do get a higher noise floor. So if you can do 10bit, by all means do it. I don't think anybody is recommending to avoid 10bit. The original question we were discussing was whether it's better to use 8bit RGB or chroma subsampled 10bit, and there my vote goes to 8bit RGB.

So basically I'd rate:

10bit RGB > 8bit RGB > 10bit 4:2:2 > 10bit 4:2:0

That applies to both HDR and SDR.

Quote:
Originally Posted by Manni View Post
My display is 12bits from end to end (input to panels) and 10bits output produces a better result than 8bits, especially in HDR. This is with my nVidia set to 4K23 RGB Full 12bits 4:4:4, as per my sig below.
I believe some (many?) displays which handle 12bit fine are internally dithering. But I haven't seen anyone really test this (I'm not sure it's even reliably possible), so it's hard to be sure about anything.

Quote:
Originally Posted by Manni View Post
I don't want to dither if I don't have to. So yes, I prefer to keep 10bit output over 12bit GPU output for 10bit HDR playback (except for 60p, where the driver automatically drops to 8bits). I don't see why I should add noise when I don't have to.
I fully agree, as long as you keep dithering on at all times. I know that some users thought that when playing 10bit content with 10bit output, they could turn off dithering. I'm not 100% sure if that's what you're saying here (it's not really clear). I think you probably didn't mean to suggest that dithering could be turned off when using 10bit output. But just to be safe no user gets this wrong, let me make clear that dithering should never be turned off.

Quote:
Originally Posted by Oguignant View Post
Hi guys, a question... Is there any way that when madVR automatically changes to 23Hz, it also changes the bit depth to 12 bits?
The OS doesn't allow me to choose the output bitdepth. Maybe I could do it using private Nvidia APIs, but that's not currently implemented/supported by madVR.

Quote:
Originally Posted by edcrfv94 View Post
Really looking forward to NGU being usable in AviSynth. Also, can NGU be used for deinterlacing, to replace nnedi3 (QTGMC)?
Not yet, but "soon". But then, I've been saying that for a long time now. I hope it will be really soon now. I already have it working, just need to make it available the right way.
Old 15th January 2018, 13:49   #48316  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by petran79 View Post
Tearing problem solved for the time being.
The solution? I had to enter the Nvidia Control Panel, go to Manage 3D settings, and choose "Restore Defaults". The thing is, I hadn't tampered with anything at all. I also had to reinstall the drivers, because all of a sudden the Nvidia Control Panel would not launch.

Windows 10 and Nvidia are even more screwed up than in the 90s...
Tell me about it. I've been recommending Windows 8.1 for a loooong time.

Quote:
Originally Posted by janos666 View Post
@madshi: I am not sure if you are interested in this, but just in case you ever consider implementing BFI for 24fps video on a 120Hz display in madVR, this might save you some annoying moments figuring out this trick I just discovered.
BFI requires a really high refresh rate, and most TVs can't do that. Sure, gaming monitors might, but I don't have one to test with. So for now it's not a very interesting topic for me. Maybe it will become more interesting once we get OLED HDMI 2.1 TVs with 120Hz support.
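For reference, the arithmetic: a 120Hz panel gives 120 / 24 = 5 refreshes per 24fps frame, and BFI lights the frame for some of those refreshes and shows black for the rest. A hypothetical scheduling sketch (the 2-lit/3-black split and the present() helper are illustrative only, not madVR behavior):

Code:
struct Frame {};                      // placeholder for a decoded video frame
const Frame kBlackFrame{};
void present(const Frame&) {}         // stand-in for the real Present() call

const int kRefreshesPerFrame = 120 / 24;  // = 5 refreshes per source frame
const int kVisibleRefreshes  = 2;         // e.g. 2 lit + 3 black per frame

// Showing black for part of each frame period reduces sample-and-hold motion
// blur, at the cost of brightness (and flicker if the refresh rate is too low).
void presentWithBfi(const Frame& frame) {
    for (int r = 0; r < kRefreshesPerFrame; ++r)
        present(r < kVisibleRefreshes ? frame : kBlackFrame);
}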

Quote:
Originally Posted by Manni View Post
Thanks madshi, that's great news for HDR. Do you know if they also fixed the bit depth issue with a custom res (it's not possible to select 12bits when a custom res is selected) in any driver after 385.28? 12bits can be applied if you switch to a standard frame rate, but it's greyed out with the 23p custom res (displayed as 24p in the Nvidia panel). This is one of the reasons why I'm still on 385.28, as it's the last version that fully works.
I'm not aware of any changes to custom res. I've reported all kinds of custom res issues to my Nvidia contact, but he hasn't gotten around to looking at them yet. Fixing HDR was higher priority.

Quote:
Originally Posted by nussman View Post
Is D3D11 DXVA deinterlacing still on your to-do list?
Yes.

Quote:
Originally Posted by x7007 View Post
For some reason I didn't have the white dots anymore, even with compression disabled...

Not sure where the issue came from... so many weird issues.

EDIT: ah nope, they are back

https://imgur.com/aIdtLu9

Does anyone know what is causing this? It worked fine SOMEHOW and I didn't change anything.

Setting the black level from LOW to HIGH makes the white dots disappear but ruins the picture.

Setting madVR's black level to 16-235 instead of 0-255 also fixes the issue, but likewise ruins the picture... so it's something with madVR not showing HDR properly.

Or maybe it's just the compression of the movies? But how come nobody else sees them... maybe nobody is using madVR with 16-235, or black level HIGH? Because there is no other way.

In the movie Fifty.Shades.Darker.2017 I don't have white dots at all, so could this be specific to certain movies?
Which GPU? Are you letting madVR pass the HDR content through to the display "untouched"? What is your madVR HDR configuration, "let madVR decide"? Do you have the OS "HDR and Advanced Color" switch on or off?

Very important: do these white dots show up in screenshots? Try disabling FSE mode; do the white dots still show up? If so, press the PrintScreen key and check whether the screenshot in the clipboard contains the white dots. It doesn't matter that the screenshot will have washed out colors; only the white dots matter.

Usually, random white dots are caused by one of these things:

1) Faulty GPU RAM (overclocked?).
2) Faulty GPU chip (overclocked?).
3) Faulty GPU drivers.
4) Faulty decoding (hardware?).
5) A faulty HDMI cable, or HDMI output/input ports/chips.

It's unlikely to be madVR's fault unless many, many users have exactly the same issue. If only a small number of users have this problem, it's most likely one of the 5 reasons above.

Quote:
Originally Posted by UMNiK View Post
Hey, I was just wondering what the "best" way (as in using all of madVR's features, with minimal load) to use LAV Video's new D3D11 decoding is: leave it on auto and native (will LAV then dither and colour convert?..), or choose a GPU for copy-back? I'm assuming "use d3d11 for presentation" is mandatory in madVR either way.
If you leave it on auto/native, you won't get deinterlacing in madVR. Not sure if that's a problem for you or not. If it is, you can choose a GPU for copy-back, which will enable deinterlacing, but lower performance.

"use d3d11 for presentation" is optional in any case.

Quote:
Originally Posted by roninf View Post
Does someone know if I can tell madVR to use a certain 3DLUT when a file is played in 3D?
if (3D) "3D Profile" else "2D Profile"
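That's madVR's profile rule syntax (entered under the profile group's rule script). A slightly fuller sketch; treat the HDR variable as an assumption, since the exact set of available variables depends on the madVR build:

Code:
if (3D) "3D Profile"
else if (HDR) "HDR Profile"
else "2D Profile"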

Quote:
Originally Posted by Manni View Post
It would be great if madVR could do this, but for some reason madshi doesn't seem to be willing to go all the way for 3D support with Nvidia. He has provided registry settings that help momentarily but don't stick.
Last time I checked, running the registry settings I provided stuck for me just fine. But I suppose I can retest. It's hard to develop something if I can't reproduce a specific problem.

Quote:
Originally Posted by Yoshi View Post
Considering that video signals are essentially the same as audio signals, I wonder why in practice it seems to be such a challenge to get it right in the video domain. I may be corrected, but I think when it comes to audio, leaving any voodoo/audiophile arguments aside, filters which leave the usable bandwidth intact while suppressing any other range which could cause aliasing are rather easily achievable with cheap and decent techniques nowadays.
When talking about simple linear filters, what I said back then still applies. However, using non-linear algorithms can change everything. E.g. test my relatively new NGU upscaling algorithm: it's very sharp, has no ringing or aliasing, and even removes some aliasing and ringing from the source.

However, if you consider that most 1080p Blu-rays these days are downscaled by the studio from a 4K master, we can view upscaling as trying to undo the studio downscale and restore the original 4K master. A studio downscale actively destroys information, though, so it's impossible to restore the original 4K master perfectly. NGU is able to restore sharp edges very well, but the main thing that gets lost is texture detail, and there's no mathematical way to restore it, because the information is simply missing from the downscaled image.

In theory it should be possible to build a large database of typical real world textures and try to restore lost texture detail by intelligently interpreting the downscaled image and injecting stored texture detail from the database into the image. (Actually, that's what some neural network algorithms already do.) But this approach has many very big problems, which make it currently impossible to use for video upscaling.

Quote:
Originally Posted by Yoshi View Post
Wouldn't it be possible to reproduce any color space, any gamut, any brightness and any gradient with any bitdepth, be it one bit only, given an ideal dither implementation? The only drawback of lower bitdepth should be a higher noise floor, effectively preventing darker tones from being resolved, as they would be masked by the dither and quantization noise, of course. Hence I'm not saying that bitdepth is irrelevant, but I'd claim that any color or tonal variation should be displayable with whatever bitdepth, down to the noise floor at least.
I mostly agree. However, very high noise floors are so distracting that it's really not fun to watch anymore. It's easy to test with madVR: in "devices\yourDisplay\properties" you can set the "native display bitdepth" to any bitdepth you like, even 1 bit, and then check what madVR's dithering algorithms do with it.

Quote:
Originally Posted by Yoshi View Post
Well, at least from an admittedly quite theoretical point of view, signal theory would dictate that you do, because the individual frames of a movie are snapshots or samples in the sense of Nyquist/Shannon which, just like pixels, were never supposed to be shown separately but reconstructed into a continuous movement.

Most people would say that the higher the frame rate, the smoother the movie and its movement will become. Theoretically, that's not true: the only thing which changes is the Nyquist frequency, in this case what speed of movement can be reconstructed without aliasing. Anything else, including the frame-by-frame display of a movie, strictly speaking, is wrong to begin with.
I don't think signal theory applies to motion interpolation, unless you want to apply simple linear filters to "neighbor" frames. But that won't give good results. Perfect motion interpolation would look at 2 video frames from 2 different points in time (e.g. time X and time X + 1 second), and then create a new frame located at, say, time X + 0.5 seconds. The new frame should ideally look identical to what a video camera would have recorded at X + 0.5 seconds. Using linear algorithms (which signal theory can cover), getting even remotely near an ideal interpolated frame is impossible. To achieve perfect results, you need to intelligently interpret the image, understand which "objects" are in the image and which objects move in which direction, and then for the interpolated frames you have to carefully draw each object in exactly the place it's supposed to be. But even with the most intelligent algorithm, there can be unsolvable problems: e.g. if an object covers a part of the background, and for the interpolated image you try to restore the original background, you might make a mistake if something in the background has moved as well (but was covered by a foreground object before).

Basically, for both upscaling and image interpolation, to get best results, you need to use non-linear algorithms, and then signal theory no longer applies.
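For contrast, here's essentially all a purely linear temporal filter can do, as a minimal sketch: it cross-fades the neighbor frames instead of moving anything, which is why moving objects ghost.

Code:
#include <cstdint>
#include <vector>

// Naive "interpolation" at the temporal midpoint: a 50/50 per-pixel blend.
// Static areas look fine; anything in motion appears twice, half-transparent,
// because no pixel is actually displaced.
std::vector<uint8_t> blendMidpoint(const std::vector<uint8_t>& frameA,
                                   const std::vector<uint8_t>& frameB) {
    std::vector<uint8_t> mid(frameA.size());
    for (size_t i = 0; i < frameA.size(); ++i)
        mid[i] = static_cast<uint8_t>((frameA[i] + frameB[i] + 1) / 2);
    return mid;
}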

Quote:
Originally Posted by Yoshi View Post
I noticed that when making a screenshot of a movie using madVR and dithering to 1 bit only, error diffusion dithering results in images containing only up to 8 colors, just as expected, while with random dithering several thousand colors are counted, and I wonder why. Apparently some fusion between neighbour pixels takes place, raising the number of colors, but why?
Good question. It might be a bug in the random dithering algorithm; I'm not sure right now.

Quote:
Originally Posted by mytbyte View Post
@Asmodian: there is a MediaInfo report on the file several posts earlier; all seems correct, and still madVR fails to read the metadata.
madVR relies on what the splitter/decoder reports, especially when the content is HEVC. So the key question is why the splitter/decoder isn't reporting HDR to madVR.

Quote:
Originally Posted by Goeggel View Post
@mytbyte and brazen1:
would it help if you could have a look at one of the movie files yourself? I could cut out the first couple of seconds of the movie and upload it somewhere (any suggestion for a suitable location?).
It would make sense to do that. But you can also try to find out for yourself why the splitter doesn't seem to report HDR to madVR. Which splitter are you using?
Old 15th January 2018, 13:50   #48317  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Ben_Nicholls View Post
With the exception of a few incredibly slow scripts (like MSRmod), NGU seems to do the job for me. Is there some simple resize function out there that could be made to use NGU in AviSynth? I have it working in madVR but no idea how to load it into AviSynth... my 780 Ti is just about powerful enough on the simplest settings (saving up for a 1070 Ti; hoping that in combination with an i7 4930K @ 4.2GHz and 16GB 2133MHz I'll be able to crank up the settings a bit more).

It would be really nice, however, to be able to render videos to 1440p/2160p to watch on lower-power devices like my tablet, phone and TV.
Avisynth support is coming "soon", but it's not there yet.

Quote:
Originally Posted by mytbyte View Post
I'm using HDR->SDR conversion via madVR to a measured 180 nit monitor with circa 2.2 gamma... I have measured the post-madVR gamma curve for it with a colorimeter, with "compress highlights" turned on... comparing it to the BT.2084 curve, I find that the rolloff starts way too early, as low as 30% (10 nits) input, while it should start at around 50% (or around 100 nits), since that's what color grading aims at as diffuse white. I know the 200 nits peak brightness of an average monitor is not exactly HDR, but with what I propose there would still be 100 nits left to do the rolloff and achieve more of the HDR effect... just a note: changing the current options, other than turning off "compress highlights", made no difference...

I'd like to encourage the great madshi to look into this and hopefully adapt the behaviour of the conversion. Thanks.
madVR follows SMPTE 2390 recommendations. Of course it would be possible to offer an alternative tone mapping algorithm. But do you actually dislike the image madVR's tone mapping produces? Or is this more a theoretical complaint?
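To make "where the rolloff starts" concrete, here's a toy soft-knee curve, purely an illustration (it is not madVR's SMPTE 2390-based tone mapping): everything below the knee passes through linearly, and highlights above it are compressed smoothly toward the display peak.

Code:
#include <cmath>

// Toy highlight roll-off: linear below 'knee' nits, smooth exponential
// compression above it, saturating at 'displayPeak' nits.
double toneMapNits(double in, double knee, double displayPeak) {
    if (in <= knee)
        return in;                          // pass-through region
    double headroom = displayPeak - knee;   // output range left for highlights
    return knee + headroom * (1.0 - std::exp(-(in - knee) / headroom));
}

// Example for the 180 nit monitor discussed above, knee at 100 nits:
// toneMapNits(100, 100, 180) == 100; toneMapNits(1000, 100, 180) is ~180.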

Quote:
Originally Posted by Plutotype View Post
Did some comparison between the BD of Dunkirk and the UHD BD of Dunkirk with madVR's HDR-to-SDR conversion at the 226, 255 and 280 nit settings. Compressing the HDR highlights was set to OFF.

https://drive.google.com/file/d/1d1e...ew?usp=sharing

The nit setting is kind of tricky, because SDR displays are calibrated to 100-120 nits, but here you can see that going below a 200 nit setting would clip the bright highlights unnaturally. In this shot, a nit setting around 250-260 generates a result very close to the SDR Blu-ray.
Yes, but it's not the same with every UHD Blu-ray. With some you need higher than 200 nits, with some lower, to get results similar to the SDR Blu-ray. I've chosen 200 nits as a somewhat "average" value of what UHD Blu-rays need to get reasonably close to SDR Blu-rays.

Quote:
Originally Posted by Magik Mark View Post
Is there a way to save only the refresh rate optimized values?

I reset my madVR settings from time to time, and the optimized values get reset as well. I would like to prevent this from happening.
madVR stores display mode optimization data in HKEY_CURRENT_USER\Software\madshi\madVR. Furthermore, the GPU driver itself stores some data, too. If you want to preserve as much as possible, make sure you store the madVR registry data.
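For example, from a command prompt, you can export that key before resetting and import it again afterwards (the backup file name is arbitrary):

Code:
reg export "HKCU\Software\madshi\madVR" madVR-backup.reg /y
reg import madVR-backup.reg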

Quote:
Originally Posted by Sm3n View Post
Nope, no clean install or DDU, but adaptive and vsync are activated. Do you think I should properly reinstall?

There's one thing I'm not 100% certain I'm doing right: the first step, during the "define timing parameters" selection. EDID is recommended in both the tutorial and the window, but I noticed that in some cases I have to select CVT v1, v2 or CRT. Does it matter? I think that depending on what I choose, later I can or can't switch to the "optimized pixel clock" mode.
Unfortunately the Nvidia GPU driver is currently very buggy with custom modes. I've reported several issues and hope for fixes, but it could take a while.


August 2017:
Quote:
Originally Posted by foozoor View Post
Bjin's RAVU prescaler/doubler (test/ravu-r3-smoothtest1.hook) is easily 4x faster than NGU-AA High, with the same or better result.
November 2017:
Quote:
Originally Posted by foozoor View Post
It seems that FSRCNNX is better than NGU Standard/Sharp now.
January 2018:
Quote:
Originally Posted by foozoor View Post
It seems that igv has finally succeeded in doing better than NGU Sharp with FSRCNNX.
Hmmmm... I smell a pattern here. Unfortunately posting the same incorrect information repeatedly doesn't improve your credibility. I've compared NGU Sharp with the latest FSRCNNX-32, and with the majority of my test images, NGU Sharp Very High quality still looks better (noticeably so in some cases), and is more than twice as fast. FSRCNNX has improved, though.

Quote:
Originally Posted by brazen1 View Post
one persistent problem remains. Even though stereoscopic is enabled in NCP and W10 display properties, after playback of HDR, stereoscopic is disabled. Stereoscopic must be enabled again before 3D playback.
Will check if I can reproduce that here.

Quote:
Originally Posted by Rick164 View Post
Tried it again after a reboot and the TV reports HDR, but madVR doesn't show it in the top left corner.
That is the perfect result, exactly as it should be. madVR only shows "(HDR)" in the top left corner if the OS "HDR and Advanced Color" switch is turned on. I'll change that in the next build to avoid confusion.

Quote:
Originally Posted by Rick164 View Post
color space gets converted to DCI-P3 as in below screenshot.
Nope, that just means that the HDR Metadata says that the movie was mastered with a DCI-P3 display, it doesn't mean madVR is doing any conversion.

Quote:
Originally Posted by Manni View Post
I also had to use a different timing set, as the usual one, which would give me 50 minutes between drops, made no difference. But after optimizing one of the recommended sets, I got similar or better results than before.
So optimization worked this time? IIRC last time you got no sync at all with any of the optimized modes?

Quote:
Originally Posted by Manni View Post
I have one request (not new, but the issue is still there): once I have a 4K23 custom res optimized, I have near perfect playback for 99% of my content, as I have almost nothing at 50p or 60p (I still have to try optimizing these, just haven't had the time yet).

But 3D (MK3D) still gets a frame drop every 2 minutes or so, which is barely acceptable. Is there any way to optimize 1080p23 FP separately, so that we can optimize 3D playback and get decent frame drop rates?
It's out of my control, unfortunately. There are 2 possible solutions:

1) I've reported to my Nvidia driver dev contact that Nvidia GPUs generally have a bad pixel clock / refresh rate for any x/1.001 modes. I hope that he can get this fixed. That way even without any custom modes we should get much better refresh rates, which should also help for 3D. But I don't know if this will get fixed, or how quickly.

2) You could try writing down the optimized timing mode data and create an EDID override based on that, using CRU. I've no idea if that will also affect 3D, though. Maybe an EDID override can also have an extra entry for 3D? I don't really know, to be honest.

Quote:
Originally Posted by mrmarioman View Post
HDR is still not working properly here with the new Nvidia drivers.
Now it doesn't turn on Windows' HDR toggle, but I'm still not getting the right colors. With my Samsung TV, if I choose the colour space 'native', it gives me something that approximates what it should look like, but it's not correct. You have to select 'auto' in the colour space option to get the correct HDR colours. With 'auto' it looks kinda washed out, or better described: it looks like SDR. In contrast, the desktop looks super saturated and vibrant. But it must be something about my hardware, since nobody else seems to have any trouble here. Oh well.
I rather suspect some configuration issue. Some TVs can require you to do crazy things to get things displayed correctly. Could also be a GPU driver issue with e.g. sending out incorrect levels (TV instead of PC or vice versa) or even YCbCr instead of RGB. Unfortunately this is almost impossible to diagnose from afar. So you'll have to try all kinds of different settings in your GPU control panel and TV yourself to get it solved.

Quote:
Originally Posted by Vegekou View Post
I have a problem with madVR + MPC-HC64: https://streamable.com/ar0cl Can anybody help me solve it, or is it just a bug?
Intel GPU? Probably a GPU driver bug. Try using "automatic fullscreen exclusive mode".

Quote:
Originally Posted by ShyK View Post
Hi, madshi. madvr currently has two gamma correction problems.

Please consider adding gamma correction / "linear light" to all upscaling modes. It's really a basic thing that should be available for upscaling same as it is available for downscaling.

The Jinc interpolation with the anti-ringing filter is an exceptional solution for natural-looking results with very highly reduced "aliasing" and a relatively very low amount of processing power. The problem is that in upscaling, it's largely ruined by the lack of gamma correction.

For comparison, here are upscaling results with similar, catmull-rom filters:

original image

madvr-x2-catrom-sigmoidal-off
madvr-x2-catrom-sigmoidal-on
imageworsener-x2-catrom

Sigmoidal light even makes the gamma-related "artifacts" worse.

Currently, only NGU has gamma correction for upscaling, so I'm comparing with NGU Soft, luma doubling and chroma "very high", quadrupling disabled.

madvr-x2-ngusoft

This is the same effect as what happens in downscaling, but it's much more disturbing in upscaling, because...it's upscaled. Since ImageWorsener, ImageMagick, GIMP and a few other programs give better results that are similar to each other, then I guess it means madvr just has a different, more "lenient" gamma correction implementation, but it's not better nor as good.


Example with downscaling:
madvr-x0.5-catrom
madvr-x0.5-catrom-nocorrection
imageworsener-x0.5-catrom
imageworsener-x0.5-catrom-nocorrection

Without gamma correction, pretty much identical. With gamma correction, as mentioned above, the "artifacts" are more visible in madvr.

I'll just mention that the problems are not limited to this single image. I've compared using other images, and madvr's gamma correction is always worse. I'm just using this test image because it's the best for easily showcasing the problems. I'm sure you could easily improve the gamma correction and enable it for upscaling as well, and thus make madvr not just the highest quality video renderer, but also the best video and image scaling software available. On that note, I'll just add that a custom width/height box in the screenshots menu would be very useful.
I've had linear light upscaling in madVR for a long time, but nobody used it because it simply didn't look good with real world material. It produces too strong ringing artifacts and some more aliasing, too. When I removed the linear light upscaling option, not a single user complained. That goes to show how useless it really was. Usually, if I remove something, no matter how unimportant, at least 3 people complain.

Quote:
Originally Posted by psyside View Post
Well, I tried everything, and there is basically 0 (not 5%, just 0) difference in IQ with quite high settings in madVR in MPC-HC vs VMR9 Renderless HQ.
For downscaling try SSIM 1D 100, or Bicubic 150. For upscaling try NGU Anti-Alias (for very low-res aliased sources) or NGU Sharp (for higher quality sources). The difference in image quality will be larger if your source has sharp lines/features. With blurry sources you'll barely see a difference.
Old 15th January 2018, 14:50   #48318  |  Link
fluffy01
Registered User
 
Join Date: Dec 2012
Posts: 52
I think I asked this before, but I can't remember if I actually did or just thought about it, so please forgive me if I am repeating a previous query.

Is it possible to implement support for other 3D input formats than just MVC? I mean, when I convert my 3D Blu-rays I usually use an SBS or OU format, and it would be nice if I were able to make madVR go into "3D mode" for these, so I get all the same features as with "real" MVC movies, instead of just playing them as 2D and then manually switching my projector to 3D.

Optimally, it would read the stereoscopic metadata from LAV Filters and automatically detect the input format based on that, putting madVR into 3D mode when needed; besides that, a keyboard shortcut to manually tell madVR the input format of the video would be nice.
Old 15th January 2018, 15:12   #48319  |  Link
mclingo
Registered User
 
Join Date: Aug 2016
Posts: 1,348
Wouldn't it be better to rip to MKV MVC full frame to maximise quality? You lose some vertical or horizontal res the SBS/OU way. The advantage of SBS/OU is smaller files that will play on any system, however.
Old 15th January 2018, 15:28   #48320  |  Link
heiseikiseki
Registered User
 
Join Date: Jan 2015
Posts: 37
Could madshi's algorithms be made into a plugin or a DLL library or something for other picture viewers or manga viewers to use?

It seems most photo viewers are still using poor resize algorithms like bicubic or Lanczos.

But NGU or even Jinc is much better than those resizers.

I'd like to enjoy these amazing technologies for viewing my photos, manga and pictures in the future. Is it possible?