Old 12th June 2015, 09:51   #30961  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
CPUs are extremely bad at image processing, so it wouldn't be much of a help. If you wanted to, you could pre-scale the image on the CPU using ffdshow/AviSynth or something, instead of letting madVR do it, but personally I don't think it's really worth it.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 12th June 2015, 10:23   #30962  |  Link
Braum
Registered User
 
Join Date: Jan 2015
Posts: 48
Quote:
Originally Posted by nevcairiel View Post
CPUs are extremely bad at image processing, so it wouldn't be much of a help. If you wanted to, you could pre-scale the image on the CPU using ffdshow/AviSynth or something, instead of letting madVR do it, but personally I don't think it's really worth it.
Okay, thank you for this explanation. I'll stick with madVR.
Old 12th June 2015, 16:09   #30963  |  Link
MysteryX
Soul Architect
 
Join Date: Apr 2014
Posts: 2,559
Quote:
Originally Posted by Braum View Post
Will it be possible, in the future, to share the madVR processing load between the GPU and the CPU?

I have a 7870XT and an i5 3570K @ 4.5GHz and quickly reach the limits of my GPU; it's a bit frustrating to see my CPU doing practically nothing.
The CPU won't do much for this kind of work, but you can combine madVR with SVP, which works mostly on the CPU, to increase the frame rate to 60fps. SVP then runs on the CPU while madVR runs on the GPU. SVP can also downscale the image to your monitor's resolution.

Last edited by MysteryX; 24th June 2015 at 06:01.
Old 12th June 2015, 16:46   #30964  |  Link
godshades
Registered User
 
Join Date: Jun 2015
Posts: 1
Hi, I'm new here. Can anyone with a GT 650M with GDDR3 (Asus N56VZ) tell me the best settings for madVR?
Or how can I find the best settings? Just try each setting and check the render times?
Old 12th June 2015, 20:02   #30965  |  Link
ladersu
Registered User
 
Join Date: Feb 2014
Posts: 2
madshi, if settings.bin is corrupt, is madVR unable to read any settings from it? What can be done to restore them?
Old 12th June 2015, 20:16   #30966  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by godshades View Post
Hi, I'm new here. Can anyone with a GT 650M with GDDR3 (Asus N56VZ) tell me the best settings for madVR?
Or how can I find the best settings? Just try each setting and check the render times?
Choose what looks best to your eyes.
Old 12th June 2015, 23:03   #30967  |  Link
AngelGraves13
Registered User
 
Join Date: Dec 2010
Posts: 254
Will it be possible to one day get noise reduction in madVR? I'm mostly referring to luminance noise.

Last edited by AngelGraves13; 13th June 2015 at 00:35.
Old 13th June 2015, 00:56   #30968  |  Link
blindbox
Registered User
 
Join Date: Jun 2010
Posts: 15
Quote:
Originally Posted by Barnahadnagy View Post
Basically, it would require madVR to deliver frames exactly when they need to be displayed. This works for games (a frame is rendered, then displayed instantly), but madVR has a timing for each frame. However, madVR cannot guarantee to send a frame when its time comes (no CPU time, and/or simply not precise enough timers).

This is why we have queues in FSE (apart from more stability): to let the GPU handle the presentation times of frames using a HW circuit.

TL;DR: You need to present frames exactly when they need to be presented, with no queue. Current systems can't do this.
What about the fact that FreeSync and G-Sync rely purely on the frames that are about to appear on the display? The only knowledge that G-Sync and FreeSync have is the cable saying 'hey guys, I'm sending you another frame!' (some may say FreeSync isn't doing this, but I'm pretty sure it does, given that DX's framerate reporting isn't entirely accurate).

I still don't understand why any under-the-hood timing is relevant when all F/G-Sync care about is what's currently coming out of the cable port.
Old 13th June 2015, 01:18   #30969  |  Link
webs0r
Registered User
 
Join Date: Jun 2007
Posts: 68
Quote:
Originally Posted by blindbox View Post
What about the fact that FreeSync and G-Sync rely purely on the frames that are about to appear on the display? The only knowledge that G-Sync and FreeSync have is the cable saying 'hey guys, I'm sending you another frame!' (some may say FreeSync isn't doing this, but I'm pretty sure it does, given that DX's framerate reporting isn't entirely accurate).

I still don't understand why any under-the-hood timing is relevant when all F/G-Sync care about is what's currently coming out of the cable port.
Look at your madVR render time.
Let's say it's hovering around 10ms per frame, and madVR just sent each frame "down the cable" with no timing mechanism. Your video would play back at 100 frames per second.
That's no good for a movie shot at 24fps: it would play 4x too fast.

G-Sync would work well in that, well, the monitor would show a clean frame for each of those 100 frames in that second, regardless of tiny madVR variations... e.g. one frame might go out at 11ms, another at 9ms.

But the movie would be playing back like a Benny Hill intro. Not to mention you would have random judder from the timing jitter.

Something has to match the presentation to exactly 24fps. A precise mechanism needs to be in place for that. G-Sync does not provide, or help with, that mechanism.

Watching video is not a game - you don't want to just push out frames as soon as they are ready.
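
To put numbers on that, a quick back-of-the-envelope in Python (a toy calculation, nothing madVR actually does):

Code:
# If frames went "down the cable" as soon as they were rendered, render
# time alone would set the playback speed.
render_time_ms = 10.0                    # ~10 ms to render one frame
pushed_fps = 1000.0 / render_time_ms     # 100 frames hit the screen per second
content_fps = 24.0                       # the movie's native rate
print(f"{pushed_fps / content_fps:.1f}x too fast")   # ~4.2x: Benny Hill speed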
Old 13th June 2015, 07:36   #30970  |  Link
agustin9
Registered User
 
Join Date: Aug 2008
Posts: 86
Quote:
Originally Posted by Siso View Post
An odd thing: madVR is causing big DPC latencies. I've tested with EVR-CP - no latency problems.

[DPC latency screenshots: madVR vs. EVR-CP]
Which GPU do you have?
Old 13th June 2015, 10:21   #30971  |  Link
Siso
Soul Seeker
 
Join Date: Sep 2013
Posts: 711
Quote:
Originally Posted by agustin9 View Post
Which GPU do you have?
GTX 550 Ti
Old 13th June 2015, 10:52   #30972  |  Link
blindbox
Registered User
 
Join Date: Jun 2010
Posts: 15
Quote:
Originally Posted by webs0r View Post
Look at your madVR render time.
Let's say it's hovering around 10ms per frame, and madVR just sent each frame "down the cable" with no timing mechanism. Your video would play back at 100 frames per second.
That's no good for a movie shot at 24fps: it would play 4x too fast.
No, what I'm asking is that madVR send frames at 48fps, with frame doubling, for a 24fps video, and let G/F-Sync handle the rest. I never asked for madVR to send as many frames as possible.

Quote:
Originally Posted by webs0r View Post
G-Sync would work well in that, well, the monitor would show a clean frame for each of those 100 frames in that second, regardless of tiny madVR variations... e.g. one frame might go out at 11ms, another at 9ms.
This is not how G-Sync works. G/F-Sync works by measuring the time at which a frame reaches the end of the GPU pipeline, and then changing the monitor's refresh rate to make sure it looks fluid. Of course, this is an oversimplification; there's obviously some buffer and timing wizardry involved. But changing the monitor refresh rate on the fly is essential to this tech.

Quote:
Originally Posted by webs0r View Post
But the movie would be playing back like a Benny Hill intro. Not to mention you would have random judder from the timing jitter.
Like I said, G/F-Sync will handle that. All you have to do is give 48fps to the graphics card. Or rather, make sure it's 48fps at the end of the pipeline.

Quote:
Originally Posted by webs0r View Post
Something has to match the presentation to exactly 24fps. A precise mechanism needs to be in place for that. G-Sync does not provide, or help with, that mechanism.
Okay, let madVR make sure the presentation is exactly 24fps. However, when it comes to the refresh rate of the monitor, let madVR assume the refresh rate is 48Hz, and let G/F-Sync handle the rest of the anti-tearing. It does this by changing the refresh rate of the display to 48Hz, or any arbitrary rate it supports.

Quote:
Originally Posted by webs0r View Post
Watching video is not a game - you don't want to just push out frames as soon as they are ready.
Yeah, okay. How about we push 48fps into the GPU (interestingly, a lot of engines out there let you limit the engine fps) and let G/F-Sync handle the rest? We don't have to make it act like a game.

My answers follow each quote above.

From what I understand of how G/F-Sync work, this is how it would go, in chronological order, if madVR were to implement G/F-Sync without making any changes:

1) madVR checks the display's refresh rate. If it's not a nice multiple of the video fps, it shifts the frametimes of frames around (aka the pulldown effect).
2) madVR sends a frame from its buffer. madVR has a precision timer, so it sends this frame at 60fps with very small variation.
3) G/F-Sync detects the time taken for the frame to finally reach the end of the GPU pipeline and puts it in their buffer with a timestamp.
4) G/F-Sync introduces a tiny delay and tells the monitor to change its refresh rate ASAP (there's perhaps a tiny overdriving circuit that forces the display to refresh immediately... oh hey, it's called the VBLANK), and it does this by looking at the frame history, i.e. at how fast frames were put into the frame buffer before being sent to the display. Anything that requires overcompensation, VBLANK it.
5) G/F-Sync finishes its job.

So all I'm asking here is that we remove that display refresh rate check (which effectively disables pulldown), send the video down the GPU pipeline at 48fps by simply doubling each frame, and let G/F-Sync handle the rest. It does that by changing the display's refresh rate on the fly.
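
To make the proposal concrete, here is a hypothetical sketch of that presentation loop in Python (not madVR code; present() is a stub standing in for handing a frame to the GPU). Note that it still relies on a software timer to pace the doubled frames, which is exactly the point e-t172 pushes back on a few posts below.

Code:
import time

CONTENT_FPS = 24
OUTPUT_FPS = CONTENT_FPS * 2            # 48 fps after doubling each frame
FRAME_INTERVAL = 1.0 / OUTPUT_FPS       # ~20.8 ms between presents

def present(frame_index):
    # Stub: hand a frame to the GPU. Under G/F-Sync the display would
    # refresh when the frame arrives instead of on a fixed 60 Hz tick.
    pass

deadline = time.perf_counter()
for frame_index in range(CONTENT_FPS):  # one second of 24 fps video
    for _ in range(2):                  # frame doubling: 24 fps -> 48 fps
        deadline += FRAME_INTERVAL
        time.sleep(max(0.0, deadline - time.perf_counter()))
        present(frame_index)            # arrival time = refresh time under VRR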

Now let's see how it goes for games.

1) Games send out as many frames as they want, no matter what.
2) G/F-Sync detects the time taken for the frame to finally reach the end of the GPU pipeline and puts it in their buffer with a timestamp.
3) G/F-Sync says 'okay, this frame can go, that frame can go into my buffer... or maybe I'll drop it'.
4) G/F-Sync detects wild variations in frame rate and goes 'oh crap, change the refresh rate QUICK QUICK QUICK'... and it does this by looking at the frame history, i.e. at how fast frames were put into the frame buffer before being sent to the display. If it requires overcompensation, VBLANK it.
5) G/F-Sync finishes its job.

Did I get this right? I don't think madVR is able to bypass the driver layer like DX12/Mantle can. Even then, DX12/Mantle is still an abstraction, and AMD/Nvidia can still put G/F-Sync at the end of the chain. Frames still have to go through the driver stack and into the driver's own queue. Since madVR is unable to bypass this, G/F-Sync can still work.

If it works for games, especially when the frame-rate variation in games is so wild, why wouldn't it work for videos? FPS too low? Right now madVR works by v-syncing to the display; hence you see some frames taking 12ms and some 9ms to avoid tearing. Disable that v-sync, send the video at 48fps by frame duplication, don't give a single crap about tearing, and let F/G-Sync handle the rest.

I think the main takeaway is that G/F-Sync allows a variable refresh rate. What does that mean? You make sure the GPU outputs 48fps, and G/F-Sync will then set the display to 48Hz. Why do I keep mentioning 48fps? It's double the magic video rate of 24fps, meaning you can just double the frames.

Now tell me: why wouldn't simply shoving a doubled 24fps video out at 48fps, and then letting G/F-Sync auto-set the display to 48Hz, work?

Pardon my tone. Also, this is why I wanted a more serious discussion of G/F-Sync in this thread. None of the posts before this have been as detailed as what I posted, and even this one is perhaps not detailed enough. I didn't go into the finer details of why there's no tearing, but the keywords are VBLANK and variable refresh rate. A variable refresh rate allows the monitor's refresh rate to be set to anything between 40~60Hz (IPS monitors are supporting FreeSync now!). VBLANK is for forcing a refresh to compensate for any sudden changes.

I realize that a ton of this text is me simply screaming variable refresh rate and VBLANK in a hundred different ways. That's on purpose. I'm trying to get across the idea that all you have to do with G/F-Sync is assume the monitor is a 48Hz monitor, convert the video from 24fps to 48fps by duplicating frames, and G/F-Sync will change the monitor's refresh rate to 48Hz to suit your video. I may have also been contradictory in some parts. If you would please point that out, I'll be glad. I'm kinda bad at presenting my ideas.

Last edited by blindbox; 13th June 2015 at 11:35.
Old 13th June 2015, 10:57   #30973  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
madshi has explicitly asked that G-Sync/FreeSync discussions not continue in this thread at this time, so I suggest you respect his wishes, and if you want to keep discussing it, open your own thread.
All the technical points of why it's more complicated than you seem to believe have already been discussed to death, so I suggest you actually read them, too.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 13th June 2015, 11:23   #30974  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Siso View Post
An odd thing: madVR is causing big DPC latencies. I've tested with EVR-CP - no latency problems.
I remember that problems with DPC latencies were discussed in this thread a long time ago; I don't remember the details anymore. Maybe you can do a search to find the old posts? In any case, madVR does not directly talk to any drivers or do any funny stuff, so I'm not sure how madVR itself could cause DPC latencies - and so I also wouldn't know how to do anything about it.

What happens if you play a GPU heavy game? Does that also up the DPC latencies in a similar way?

Quote:
Originally Posted by iSunrise View Post
madshi, I found a bug with 0.88.11 that should be pretty easy to fix.

1) Go to image doubling
2) Enable double luma and chroma resolution
3) Use a scaling factor of at least 2.0x in your media player
4) Now, if you enable quadruple luma, the picture suddenly has a green tint over it. It goes away again if you either disable quadruple luma or if you additionally enable quadruple chroma.
I can't reproduce it, unfortunately. Can you please start with madVR default settings, then check whether you can still reproduce it with those? If you can, please create an issue on the bug tracker, with a download link to your madVR settings, a full list of your hardware and OS, and a screenshot with the Ctrl+J OSD on.

Quote:
Originally Posted by omarank View Post
1. On
2. Yes, “repair” at 1.0 looks good. I didn't see it doing any harm to the images.
3. I like mode 3 much better than modes 1 and 2. Between mode 1 and mode 2, I can't decide which one is better.
4. The “thinning” parameter makes such slight changes that they are hard to make out. While I was playing with this parameter, I settled on 0.0032. Below are my preferences for the presets:
Low preset: “strength” 1.5, Medium preset: “strength” 2.0, High preset: “strength” 2.3

Whether FineSharp is selected in image enhancements or upscaling refinements, my preferences are the same.
This is exactly the kind of feedback I'm looking for - thanks!

One question: with debanding there were often thresholds where increasing one parameter by 0.1 didn't do much, but increasing it by 0.2 suddenly brought a big change. I think this is not the case with FineSharp, correct? A small change in the FineSharp parameters causes a small change in image quality, right? I suppose if I decided to use 1.4/2.0/2.4, that wouldn't bother you much? Or is there something "magic" about your choices of 1.5/2.0/2.3? Not that I'm planning to use 1.4/2.0/2.4, just asking how "fixed" your suggestions are.

Quote:
Originally Posted by godshades View Post
Hi, I'm new here. Can anyone with a GT 650M with GDDR3 (Asus N56VZ) tell me the best settings for madVR?
Or how can I find the best settings? Just try each setting and check the render times?
Start with the default settings. If everything runs smoothly, try different algorithms to see whether you like them better or not. Luma/image scaling is more important than chroma. Jinc is nice, if your GPU can handle it. Try smooth motion FRC if your display can't do 24Hz properly.

Quote:
Originally Posted by ladersu View Post
madshi, if settings.bin is corrupt, is madVR unable to read any settings from it? What can be done to restore them?
Try deleting the file; then madVR should read the settings from the registry instead. If the settings in the registry are *also* corrupt, your only choice is to restore the default settings.
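
For anyone scripting that recovery, here is a hedged Python sketch. The registry key below is an assumption on my part (madVR's actual fallback location isn't confirmed in this thread), and the script renames rather than deletes the corrupt file so nothing is lost:

Code:
import os
import winreg

# Assumed/hypothetical registry location of madVR's settings copy.
MADVR_KEY = r"Software\madshi\madVR"

def rescue_settings(madvr_dir):
    bin_path = os.path.join(madvr_dir, "settings.bin")
    if os.path.exists(bin_path):
        # Rename rather than delete, in case this was a false alarm.
        os.replace(bin_path, bin_path + ".corrupt")
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_CURRENT_USER, MADVR_KEY))
        print("Registry settings present; madVR should fall back to them.")
    except FileNotFoundError:
        print("No registry settings found; madVR will start with defaults.")

rescue_settings(r"C:\madVR")   # adjust to your madVR folder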

Quote:
Originally Posted by AngelGraves13 View Post
Will it be possible to one day get noise reduction in madVR? I'm mostly referring to luminance noise.
One day? Probably.

-------

Based on the FineSharp feedback so far, I've decided to always enable "linear light", use "mode 3" and set "repair" to the max value of 1.0 (unless several users object). I'd still like to get more feedback from more users on the "strength" and "thinning" options. Which combination of those settings would you suggest for the "low", "medium" and "high" presets?

Thanks!
Old 13th June 2015, 11:39   #30975  |  Link
blindbox
Registered User
 
Join Date: Jun 2010
Posts: 15
Quote:
Originally Posted by nevcairiel View Post
madshi has explicitly asked that G-Sync/FreeSync discussions not continue in this thread at this time, so I suggest you respect his wishes, and if you want to keep discussing it, open your own thread.
All the technical points of why it's more complicated than you seem to believe have already been discussed to death, so I suggest you actually read them, too.
I hope you can point out those discussions, as I've read all of them and couldn't find one that properly answers this query.

I understand. If the reason he's not doing it is that it only supports DisplayPort (FreeSync is coming to HDMI, btw), then I won't pursue this further.

I first thought the reason F/G-Sync isn't being considered is a lack of understanding of the methodology of both technologies (that is, changing the refresh rate of the display on the fly and making sure it matches the frametimes by some wizardry).

It's just that I thought having the capability of setting my refresh rate to whatever I want, with whatever fps the video is playing at, is a match made in heaven.

Again, as I have said, I have read every post on F/G-Sync in this thread, and none of them answers my query. I removed an edit of mine in which I said Asmodian tested G-Sync and it didn't work as expected because madVR was detecting the wrong Hz (at first I understood it as a 48Hz G-Sync display being different from a 48Hz monitor). He also said something similar to what I said: the fact that madVR tries to adjust the frametime of each frame based on the current refresh rate is screwing G-Sync up. What if madVR simply assumed that the monitor's refresh rate is 48Hz? Asmodian asked the same thing but never got a reply.

Quote:
Originally Posted by Asmodian View Post
It seems like it would work well if madVR simply assumed the refresh rate was whatever frame rate it wanted to render at, but I really have no idea.


That's all from me. Not supporting it because it only supports DisplayPort and a small subset of monitors is a valid reason. I hope that's the reason, rather than a misunderstanding of the underlying tech.

Last edited by blindbox; 13th June 2015 at 11:56.
Old 13th June 2015, 13:10   #30976  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by blindbox View Post
2) madVR sends a frame from its buffer. madVR has a precision timer, so it sends this frame at 60fps with very small variation.
No, madVR doesn't have a "precision timer" (or at least not one precise enough), and that's why your solution is not viable. Please re-read the previous posts, especially my response to STaRGaZeR, who had the same misunderstanding.

Also, you seem to be assuming that G-Sync/FreeSync works by the GPU gauging the frame rate over some period of time and switching the screen to a matching refresh rate. That's not how it works: with these technologies there is no such thing as a "refresh rate" anymore; frames are simply sent to the screen at unspecified times, whenever the system feels like it.

To reiterate my previous explanation: when using a fixed refresh rate, the refresh rate clock is a hardware clock. madVR simply makes sure the GPU has a pipeline of frames to display (through "present frames in advance" mechanisms), and the hardware clock on the GPU makes sure these frames are sent to the display with a precise, regular cadence according to the fixed refresh rate. When using a variable refresh rate, there is no hardware clock anymore; the frames can't be presented in advance and have to be timed in software. That is very difficult to do accurately, because Windows is not a real-time OS and therefore doesn't guarantee that the madVR presentation thread will run at the exact time required to present the next frame. Such a solution would probably have too much presentation (= refresh rate) jitter to be viable for high-quality video playback.
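
The timing problem is easy to demonstrate. A minimal Python sketch that tries to hit a 48Hz deadline grid with a software timer and measures how late each wake-up lands (toy code, nothing to do with madVR internals):

Code:
import time

INTERVAL = 1.0 / 48                    # one 48 Hz frame period, ~20.8 ms
lateness_ms = []
deadline = time.perf_counter()
for _ in range(240):                   # five seconds' worth of deadlines
    deadline += INTERVAL
    time.sleep(max(0.0, deadline - time.perf_counter()))
    lateness_ms.append((time.perf_counter() - deadline) * 1000.0)

print(f"mean {sum(lateness_ms) / len(lateness_ms):.3f} ms late, "
      f"worst {max(lateness_ms):.3f} ms late")

On a loaded Windows machine the worst case can reach several milliseconds; under variable refresh that lateness would translate directly into visible refresh jitter on screen.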

Quote:
Originally Posted by blindbox View Post
it only supports DisplayPort
Actually, that's not true anymore.

Last edited by e-t172; 13th June 2015 at 13:16.
Old 13th June 2015, 13:58   #30977  |  Link
Siso
Soul Seeker
 
Join Date: Sep 2013
Posts: 711
Quote:
Originally Posted by madshi View Post
I remember that problems with DPC latencies were discussed in this thread a long time ago; I don't remember the details anymore. Maybe you can do a search to find the old posts? In any case, madVR does not directly talk to any drivers or do any funny stuff, so I'm not sure how madVR itself could cause DPC latencies - and so I also wouldn't know how to do anything about it.

What happens if you play a GPU heavy game? Does that also up the DPC latencies in a similar way?
It appears that using LAV CUVID makes the spikes go very high. I tested with CPU decoding and DXVA2 copy-back, and with DXVA copy-back it's a little better. I'll try experimenting with the flush options, but I don't know if that will help... I don't play games, so I can't test with games.
Old 13th June 2015, 14:19   #30978  |  Link
mortencombat
Registered User
 
Join Date: Jun 2015
Posts: 1
Let me first thank you for the utterly awesome tool that is madVR. I've been considering a Lumagen Radiance VP, but as far as I can tell, the "only" things that a (very expensive, and currently 2K-input only) Radiance would bring that madVR does not - for my use cases - are HDMI switching (obviously) and aspect ratio/crop/zoom/vertical shift for CIH/CIW setups.

Are there any plans to add aspect ratio/crop/zoom/vertical shift to madVR? I realize that some players already offer some combination of these, but rarely all.

I would be all over developing a companion application that would store various profiles (combinations of aspect ratio/crop/zoom/shift) for madVR and expose them over a REST API for IP-based remote control (Roomie Remote, iRule, etc.).
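
As a rough sketch of what such a companion app could look like (my assumptions, not anything madVR offers: Flask for the REST layer, and a hypothetical apply_profile() stub, since there is no public madVR geometry API to call here):

Code:
from flask import Flask, jsonify, request

app = Flask(__name__)

# Named geometry profiles: aspect ratio, crop (top/right/bottom/left),
# zoom factor and vertical shift.
profiles = {
    "scope-2.35": {"ar": "2.35:1", "crop": [132, 0, 132, 0], "zoom": 1.0, "vshift": 0},
    "flat-1.85":  {"ar": "1.85:1", "crop": [0, 0, 0, 0],     "zoom": 1.0, "vshift": 0},
}

def apply_profile(profile):
    # Hypothetical stub: forward the geometry to the player/renderer.
    pass

@app.route("/profiles", methods=["GET"])
def list_profiles():
    return jsonify(sorted(profiles))

@app.route("/profiles/<name>", methods=["PUT"])
def save_profile(name):
    profiles[name] = request.get_json()
    return "", 204

@app.route("/profiles/<name>/apply", methods=["POST"])
def apply_named(name):
    if name not in profiles:
        return "unknown profile", 404
    apply_profile(profiles[name])
    return "", 204

if __name__ == "__main__":
    app.run(port=8123)

An IP remote (Roomie Remote, iRule, etc.) would then issue e.g. POST /profiles/scope-2.35/apply to switch geometry.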
Old 13th June 2015, 14:52   #30979  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by mortencombat View Post
Let me first thank you for the utterly awesome tool that is madVR. I've been considering a Lumagen Radiance VP, but as far as I can tell, the "only" things that a (very expensive, and currently 2K-input only) Radiance would bring that madVR does not - for my use cases - are HDMI switching (obviously) and aspect ratio/crop/zoom/vertical shift for CIH/CIW setups.

Are there any plans to add aspect ratio/crop/zoom/vertical shift to madVR? I realize that some players already offer some combination of these, but rarely all.

I would be all over developing a companion application that would store various profiles (combinations of aspect ratio/crop/zoom/shift) for madVR and expose them over a REST API for IP-based remote control (Roomie Remote, iRule, etc.).
I'm a CIH user myself, so you can rest assured that AR/crop/zoom/shift/masking etc. are high on my personal to-do list. However, what I personally wish for sometimes has to wait for what the majority of madVR users would benefit from...
Old 13th June 2015, 15:04   #30980  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
Quote:
Originally Posted by madshi View Post
However, what I personally wish for sometimes has to wait for what the majority of madVR users would benefit from...
Sounds like you're doing it wrong.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders