Old 1st January 2013, 13:37   #16681  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 934
Quote:
Originally Posted by pie1394 View Post
You are right. It is not so widely used now...

So what format does BBC1 HD (DVB-T2, 8MHz?) use now?

I just realized that I have not watched any BBC One HD sample from 2012. The latest one I've seen was from 2011.
It looks like all BBC HD content has been broadcast in 1920x1080 since May 30, 2012...
The experiments started with Wimbledon 2011...
Correct: just before the Jubilee celebrations, the final 1440x1080i channels (BBC HD & BBC One HD on satellite, plus BBC HD, BBC One HD, ITV1 HD & Channel 4 HD on terrestrial) permanently switched to 1920x1080i.

Quote:
Originally Posted by pie1394 View Post
As far as I can tell, the terrestrial MPEG-2 programs in Japan (ISDB-T, 6MHz) and Australia (DVB-T, 7MHz?) are still 1440x1080.

Korea (ATSC, DMB) uses 1920x1080 MPEG-2...

China's (DMB) HDTV programming is 1920x1080, although still encoded in MPEG-2.

Hong Kong (DMB-T, 8MHz) and Taiwan (DVB-T, 6MHz) broadcast their HDTV programs at 1920x1080, encoded with H.264.
I imagine 1440x1080 is more likely to be seen in countries that still use MPEG-2 for HD, except those that prefer 720p, like the USA.
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7
DragonQ is offline   Reply With Quote
Old 1st January 2013, 13:47   #16682  |  Link
shawkie
Registered User
 
Join Date: Jan 2012
Posts: 2
Quote:
Originally Posted by shawkie View Post
I'm running MPC-HC + madVR on an HD4000. At the moment I have disabled both the internal MPEG2 decoders in MPC-HC and enabled the Intel decoders in madVR instead. I'm having a couple of problems with DVD menus. Firstly, I'm not getting any kind of highlight on the selected menu item - this is going to make it impossible to navigate the menu using a remote control. Secondly, on "The Business" (UK) the menu doesn't respond to the mouse either.
Correction: "The Business" (UK) does respond to the mouse (I just hadn't found the right place to click).

Also, I've noticed that if I install the LAV Filters then the highlights start getting displayed. This happens even though I haven't configured either MPC-HC or madVR to use them. Does anyone know what is going on?
shawkie is offline   Reply With Quote
Old 1st January 2013, 22:36   #16683  |  Link
Luv
Registered User
 
Join Date: Nov 2009
Posts: 63
Madshi!
Grateful thanks for your awesome software. I'm using 85.4 right now, with Spline 4 taps, an HD 4770, Catalyst 12.4, on XP 32-bit, and it's working perfectly.

PS:
Madshi, there's an Intel Media SDK 2012 available. Is there something that needs to be updated (I have the Intel MPEG2 decoder enabled)?

Last edited by Guest; 1st January 2013 at 23:32. Reason: rule 13
Luv is offline   Reply With Quote
Old 2nd January 2013, 00:19   #16684  |  Link
Niyawa
Registered User
 
Niyawa's Avatar
 
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
Hey people. For those who remember me, I made a chart based on madVR's algorithms to define a range of presets from "Lowest" to "Highest" (previously called "Ultra", which was a heavy word). You can see it below.



Thanks to madshi's and 6233638's help, I've made the one above as accurately as possible based on each algorithm's quality vs. performance. "Low" also matches madVR's defaults now.

Now for something different. Again, I'm not making these charts for nothing; we want to implement them in the KCP pack. So now I'm working on a chart that gives people a decent reference for what their hardware can play; see below.



It's far from accurate, especially the GPU section. I want to improve it, but it's based on research and I'm limited to my own hardware. That's why I'd like everyone to contribute a little by giving me feedback on what they are or aren't able to play without problems. Those of you with less powerful GPUs are the ones I'm looking for. As far as I know, even a GTX 260 can play "Highest" without problems, but if there's an even less powerful GPU that can still handle some part of the range, I'd like to know. Anything is appreciated.

Oh yeah, thanks madshi for your response regarding the buffer and flush settings; I missed it for some reason. I still want to know about the overlay bug you said you would look into. To add to that, overlay also seems to change my video brightness when it's enabled (making it somewhat brighter), which doesn't seem normal, not to mention it disables some effects in Windows 8 (like the little shadow around windows). I tried to make some samples but I can't capture them with overlay enabled. Remember that I'm using an Intel GMA HD4500 (is overlay even compatible with it?)

If anyone can help (or teach me like last time), I'd appreciate it as well.
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack
Niyawa is offline   Reply With Quote
Old 2nd January 2013, 01:25   #16685  |  Link
shimaflarex
Registered User
 
Join Date: Oct 2011
Posts: 41
Quote:
Originally Posted by Niyawa View Post
CPU clock, VRAM, RAM, RAM speed have little (if any) influence on which preset should be chosen.

Things like the GPU architecture, the number of processing units on the GPU, GPU clock, GPU shader clock, GPU memory speed, GPU memory bus width, etc. are the parameters that matter the most, but how much each of those things affects performance depends on the GPU manufacturer/architecture...

There's no way to create a table like the one you are trying to... The only viable approach is doing some sort of specific GPU benchmark.

Edit: Quoted the wrong table :P

Last edited by shimaflarex; 2nd January 2013 at 01:38.
shimaflarex is offline   Reply With Quote
Old 2nd January 2013, 01:45   #16686  |  Link
Niyawa
Registered User
 
Niyawa's Avatar
 
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
Quote:
Originally Posted by shimaflarex View Post
CPU clock, VRAM, RAM, RAM speed have little (if any) influence on which preset should be chosen.
Not exactly. RAM indeed doesn't make much difference, but the CPU clock does. That's noticeable on a laptop, for example, though maybe not so much on a desktop. I know someone who had a 2.0 GHz dual-core that couldn't play 10-bit 1080p properly. All he did was upgrade to a quad-core i5 (Ivy Bridge) and it plays flawlessly now. He used to sit at 100% usage, but now it's only around 20-35%. That's with him using madVR.

Quote:
Originally Posted by shimaflarex View Post
Things like the GPU architecture, the number of processing units on the GPU, GPU clock, GPU shader clock, GPU memory speed, GPU memory bus width, etc. are the parameters that matter the most, but how much each of those things affects performance depends on the GPU manufacturer/architecture...
That's what makes it more difficult. For example, I can make a rough range of what can or can't play something, but it's hard to make a good one. I know my Intel can play "Lowest" without problems, but from "Low" onward I have to tune a little. A GTX 260 can play "Highest" without problems, so I guess that's a starting point for what to look at.

Quote:
Originally Posted by shimaflarex View Post
There's no way to create a chart like the one you are trying to... The only viable approach is doing some sort of specific GPU benchmark.
I don't want to make a perfect or "super" accurate chart, just something we can use for reference, something someone can look at and say "Oh, I guess I can play that". I've already done benchmarks from "Middle" down to "Lowest", so the only thing I really want to know is the lowest-end hardware that can play "Highest" and "High".

Again, that's only possible not just with benchmarking, but with benchmarks run on old or simply not very powerful GPUs.
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack
Niyawa is offline   Reply With Quote
Old 2nd January 2013, 02:04   #16687  |  Link
shimaflarex
Registered User
 
Join Date: Oct 2011
Posts: 41
Quote:
Originally Posted by Niyawa View Post
Not exactly. RAM indeed doesn't make much difference, but the CPU clock does. That's noticeable on a laptop, for example, though maybe not so much on a desktop. I know someone who had a 2.0 GHz dual-core that couldn't play 10-bit 1080p properly. All he did was upgrade to a quad-core i5 (Ivy Bridge) and it plays flawlessly now. He used to sit at 100% usage, but now it's only around 20-35%. That's with him using madVR.
He couldn't play Hi10p 1080p content because decoding it is very CPU intensive. It wouldn't matter if he was using the "Lowest" or the "Highest" profile; changing the scaling algorithm doesn't affect CPU usage at all, as the scalers are implemented in GPU shader code. And no, RAM clock barely affects anything at all, and even if it did, there is the question of which is better: higher clock or lower latency. It can affect CPU decoding efficiency, but again, it is irrelevant to choosing the preset.

FYI, I have a 2.4 GHz 4-core CPU and 4GB of RAM with a 512MB video card, and it is unable to use your "Low" preset.

Last edited by shimaflarex; 2nd January 2013 at 02:15.
shimaflarex is offline   Reply With Quote
Old 2nd January 2013, 02:07   #16688  |  Link
ralle_h
Registered User
 
Join Date: Oct 2012
Posts: 18
Hey madshi,

first of all I wanted to thank you for your great and continuous work on the best renderer out there. I really appreciate your efforts!

However, I'm having issues with madVR on one of my setups (a lot of other desktops and HTPCs I've built have no problems, so I doubt it's the configuration): an i3-2120 with the HD 2000 IGP and 8GB of RAM on an H77 Panther Point mainboard, running Windows Server 2008 R2. Furthermore, I use the latest versions of MPC-BE and LAV Splitter + Decoders for playback.

I think the OS could be the problem here, but I'm not sure and would appreciate any hint or tip from your side (or from any other user who has had similar issues or has ideas about this).

1. Even though only one instance is running, the madVR menu shows two instances of the player (in my case MPC-BE):



Quote:
MPC-BE<$8dc>
MPC-BE<$8dc>
2. At times I get random crashes when opening files. If I open the file again it usually works, but sometimes it doesn't; it's totally random.

Here is a part of the error log:

Quote:
date/time : 2013-01-01, 23:15:04, 74ms
computer name : BOSS-SERVER
user name : Administrator <admin>
registered owner : Windows-Benutzer
operating system : Windows 2008 R2 x64 Service Pack 1 build 7601
system language : German
system up time : 40 seconds
program up time : 763 milliseconds
processors : 4x Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz
physical memory : 6670/7883 MB (free/total)
free disk space : (C:) 61,37 GB
display mode : 1920x1080, 32 bit
process id : $e54
allocated memory : 356,52 MB
command line : "C:\Program Files (x86)\MPC-BE\mpc-be.exe" "D:\files\test.mkv"
executable : mpc-be.exe
current module : MADHCNET.DLL
module date/time : 2012-09-23 10:14
version : 1.0.11.0
compiled with : Delphi XE
madExcept version : 4.0.5
callstack crc : $73622432, $d398d782, $861e5b99
exception number : 1
exception class : Exception
exception message : Access violation at address $73622432 in module 'd3d9.dll'. Read of address $0.

CFrameQueue::RenderThread ($f90): <priority:2>
73622432 +0000 d3d9.dll
4a462469 +0049 madVR.ax rendering.cpp 150 +4 CRendering.CreatePixelShader
4a46b234 +3fe4 madVR.ax rendering.cpp 2477 +777 CRendering.Config
4a42d958 +2398 madVR.ax framequeue.cpp 8800 +520 CFrameQueue.RenderThread
4a40fdd6 +0006 madVR.ax framequeue.cpp 165 +0 Queue_RenderThread
75bc33a8 +0010 kernel32.dll BaseThreadInitThunk

main thread ($e58):
758a7908 +26 USER32.dll GetMessageW
75bc33a8 +10 kernel32.dll BaseThreadInitThunk
I would gladly email/post the whole log to you (via PM), if you want me to.

If you have any questions about my system please feel free to ask!

Kind regards,

Last edited by ralle_h; 2nd January 2013 at 03:17.
ralle_h is offline   Reply With Quote
Old 2nd January 2013, 02:17   #16689  |  Link
baii
Registered User
 
Join Date: Dec 2011
Posts: 180
It's highly dependent on the content type; maybe you should make separate tables.

The CPU indeed doesn't matter if you don't use DXVA modes, but it's kind of tricky if you do.
Also, putting CPUs into tiers is even harder than GPUs because of architecture changes.

Last edited by baii; 2nd January 2013 at 02:19.
baii is offline   Reply With Quote
Old 2nd January 2013, 03:15   #16690  |  Link
Mangix
Audiophile
 
Join Date: Oct 2006
Posts: 353
Not sure if it's pushing it, but I would bump chroma up to Jinc on "High". I currently run Jinc3/Lanczos3/Catmull-Rom on a GTS 450. Jinc3 works for luma but I get frame drops.
Mangix is offline   Reply With Quote
Old 2nd January 2013, 03:45   #16691  |  Link
SamuriHL
Registered User
 
SamuriHL's Avatar
 
Join Date: May 2004
Posts: 5,351
I run Jinc3 AR/Jinc3 AR/Lanczos4 AR LL on my 450. No frame drops. That's with DXVA CB in LAV filters and MC18 on an i5-2400s.
__________________
HTPC: Windows 11, AMD 5900X, RTX 3080, Pioneer Elite VSX-LX303, LG G2 77" OLED
SamuriHL is online now   Reply With Quote
Old 2nd January 2013, 04:40   #16692  |  Link
jmone
Registered User
 
Join Date: Dec 2007
Posts: 652
Quote:
Originally Posted by SamuriHL View Post
I run Jinc3 AR/Jinc3 AR/Lanczos4 AR LL on my 450. No frame drops. That's with DXVA CB in LAV filters and MC18 on an i5-2400s.
And then it gets more complicated, as this setup is fine... until you get 1080/60i material, and with HW deinterlacing that pushed it over the edge. I had to move up to a GTX 660 to get all the formats I have playing with Jinc3 AR (it was the AR that pushed it too hard on the 60i stuff).
jmone is offline   Reply With Quote
Old 2nd January 2013, 04:41   #16693  |  Link
SamuriHL
Registered User
 
SamuriHL's Avatar
 
Join Date: May 2004
Posts: 5,351
Except that I've done 1080/60i on that setup. No frame drops. Perfectly smooth. Obviously deinterlacing with madVR.
__________________
HTPC: Windows 11, AMD 5900X, RTX 3080, Pioneer Elite VSX-LX303, LG G2 77" OLED
SamuriHL is online now   Reply With Quote
Old 2nd January 2013, 05:36   #16694  |  Link
dansrfe
Registered User
 
Join Date: Jan 2009
Posts: 1,210
So, while looking at GTX 680s and GTX 690s this weekend, I decided to reformat my other desktop build, which has an old GeForce 9500 GT, installed the latest builds of MPC-HC, LAV Filters, and madVR, and set chroma upscaling to Bicubic 75 with AR and image upscaling to Jinc 3 without AR. Previously I was restricted to having AR disabled and using Lanczos 3 for image upscaling in order not to drop frames.

Three questions:

1) What is the benefit of buying a high-end GTX 6xx card now vs. a much cheaper GTX 4xx or 5xx series card, since those would certainly be able to play videos at max settings if my 9500 GT can?

2) Why do some users report having frame drops with a GTX 4xx or GTX 5xx with max settings?

3) What should I be looking for in a GPU with respect to madVR's processing/scaling/etc. needs? I've seen many benchmarks and read gaming forums in which some people say a variety of graphics-card attributes are equally important, while others say some matter more than others.
dansrfe is offline   Reply With Quote
Old 2nd January 2013, 06:12   #16695  |  Link
baii
Registered User
 
Join Date: Dec 2011
Posts: 180
Newer cards support newer DirectX features for games, and they also tend to be more power efficient.

4xx or 5xx doesn't mean much by itself; it's the second digit of the model number that determines whether a card is high end. Content and settings also have a lot to do with frame drops, and everyone is playing different content on different setups.
baii is offline   Reply With Quote
Old 2nd January 2013, 06:16   #16696  |  Link
Niyawa
Registered User
 
Niyawa's Avatar
 
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
Quote:
Originally Posted by shimaflarex View Post
He couldn't play Hi10p 1080p content because decoding it is very CPU intensive. It wouldn't matter if he was using the "Lowest" or the "Highest" profile; changing the scaling algorithm doesn't affect CPU usage at all, as the scalers are implemented in GPU shader code. And no, RAM clock barely affects anything at all, and even if it did, there is the question of which is better: higher clock or lower latency. It can affect CPU decoding efficiency, but again, it is irrelevant to choosing the preset.

FYI, I have a 2.4 GHz 4-core CPU and 4GB of RAM with a 512MB video card, and it is unable to use your "Low" preset.
That makes much more sense, but then is the CPU table even usable at all? I should remove the RAM column, I guess. Should we just rely on the GPU then?

Quote:
Originally Posted by baii View Post
It's highly dependent on the content type; maybe you should make separate tables.

The CPU indeed doesn't matter if you don't use DXVA modes, but it's kind of tricky if you do.
Also, putting CPUs into tiers is even harder than GPUs because of architecture changes.
I know it really depends a lot. There's hardware out there that, even though it's 4-6 years old, can still play back "Highest" at its finest, while mine, which is only 2-3 years old, can't even do "Middle" decently. Of course, I'm aware of the different architectures.

Quote:
Originally Posted by Mangix View Post
Not sure if it's pushing it, but I would bump chroma up to Jinc on "High". I currently run Jinc3/Lanczos3/Catmull-Rom on a GTS 450. Jinc3 works for luma but I get frame drops.
My memory fails me a little, but I remember someone (I guess it was madshi) saying that Jinc is much more GPU-intensive than the others, so putting it on "High" would be asking too much. Again, it depends a lot on the hardware, but as a general reference it's better to have it on "Highest" only.

Quote:
Originally Posted by SamuriHL View Post
I run Jinc3 AR/Jinc3 AR/Lanczos4 AR LL on my 450. No frame drops. That's with DXVA CB in LAV filters and MC18 on an i5-2400s.
I want to believe you. But...

Quote:
Originally Posted by jmone View Post
And then it gets more complicated, as this setup is fine... until you get 1080/60i material, and with HW deinterlacing that pushed it over the edge. I had to move up to a GTX 660 to get all the formats I have playing with Jinc3 AR (it was the AR that pushed it too hard on the 60i stuff).
This happens. And-

Quote:
Originally Posted by SamuriHL View Post
Except that I've done 1080/60i on that setup. No frame drops. Perfectly smooth. Obviously deinterlacing with madVR.
We get to this again.

When people told me that making a chart would be difficult, I believed them, but I'm starting to think even a reference one might be impossible.

Quote:
Originally Posted by dansrfe View Post
1) What is the benefit of buying a high-end GTX 6xx card now vs. a much cheaper GTX 4xx or 5xx series card, since those would certainly be able to play videos at max settings if my 9500 GT can?
This is just my take, but the improved performance and lower power consumption are a good combination.

Quote:
Originally Posted by dansrfe View Post
2) Why do some users report having frame drops with a GTX 4xx or GTX 5xx with max settings?
We can't really answer that; it depends a lot on the user's hardware.

Quote:
Originally Posted by dansrfe View Post
3) What should I be looking for in a GPU with respect to madVR's processing/scaling/etc. needs? I've seen many benchmarks and read gaming forums in which some people say a variety of graphics-card attributes are equally important, while others say some matter more than others.
It also depends. Most graphics cards out there are marketed on game performance, so we use FPS benchmarks to see which one gives better results... if we were to use a benchmark for madVR instead, I guess the rendering time (ms) would be the main factor? There seems to be a limit I have to stay under to avoid dropped frames, but that's just my impression based on my own eyes, not real numbers... which is bad in this situation.
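
For what it's worth, madVR's debug OSD reports the rendering time per frame, and as far as I understand frames start dropping once that time exceeds the frame interval. A tiny sketch of that budget check (the rendering times below are made up, just to illustrate):

Code:
# Rough frame-budget check: per-frame rendering time (as shown in madVR's OSD)
# has to stay below the frame interval, otherwise frames get dropped.
# The render_ms values are invented purely for illustration.

def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

for fps, render_ms in [(23.976, 35.0), (59.94, 35.0)]:
    budget = frame_budget_ms(fps)
    verdict = "OK" if render_ms < budget else "will drop frames"
    print(f"{fps:6.3f} fps: budget {budget:4.1f} ms, rendering {render_ms:4.1f} ms -> {verdict}")

# 23.976 fps: budget 41.7 ms, rendering 35.0 ms -> OK
# 59.940 fps: budget 16.7 ms, rendering 35.0 ms -> will drop frames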
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack
Niyawa is offline   Reply With Quote
Old 2nd January 2013, 06:36   #16697  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
@Niyawa

Minimum VRAM should be based on the user's desktop resolution, and will always be the same for all the presets except lowest (Bilinear). If you want a basic guideline, it should be something like the following for optimal performance:

1366x768: 256MB VRAM
1600x900: 384MB VRAM
1920x1080: 512MB VRAM
2560x1440: 786MB VRAM

As for VRAM bandwidth, 15GB/s is around the bare minimum needed for playback, but you probably want 20GB/s+ to give a bit of performance headroom.
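
A minimal sketch of how a settings tool could apply those guideline numbers (the helper names, the fallback above 1440p, and the verdict wording are just my own illustration):

Code:
# Apply the guideline figures above: minimum VRAM by desktop resolution,
# plus a sanity check of memory bandwidth against the ~15 GB/s floor and
# the ~20 GB/s headroom mark. Only the listed thresholds come from this
# post; everything else is illustrative.

VRAM_GUIDELINE_MB = [
    (1366 * 768, 256),
    (1600 * 900, 384),
    (1920 * 1080, 512),
    (2560 * 1440, 786),
]

def min_vram_mb(width: int, height: int) -> int:
    pixels = width * height
    for max_pixels, vram in VRAM_GUIDELINE_MB:
        if pixels <= max_pixels:
            return vram
    return 1024  # assumption: above 1440p, require at least 1GB

def bandwidth_verdict(gb_per_s: float) -> str:
    if gb_per_s < 15:
        return "below the bare minimum"
    return "workable, little headroom" if gb_per_s < 20 else "comfortable"

print(min_vram_mb(1920, 1080))      # 512
print(bandwidth_verdict(57.7))      # comfortable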

That said, if you want to automatically pick madVR settings presets for a user, it can't be done with a simple chart like the one you have. You would either need to build a GPU database, and/or calculate theoretical pixel fillrate, texel fillrate, and memory bandwidth values on the fly. Shader performance determines 99% of madVR performance, yet you've left it out of your chart entirely. Alternatively, you could wait for madshi to release an updated madVR benchmark and create a DB that way. Also keep in mind that pushing the limits of what a GPU is capable of with 24fps video is a bad idea, since it will result in completely unwatchable 60fps with constant dropped frames. For example, there is still a minority of anime Blu-rays which are released as 1080i, resulting in 24fps / 60fps VFR.
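
Those theoretical values are easy enough to derive from spec-sheet numbers; a quick sketch (the example figures are roughly a GTS 450, quoted from memory, so treat them as illustrative):

Code:
# Theoretical GPU metrics computed on the fly from spec-sheet numbers.
# Example figures roughly match a GTS 450 (from memory, illustrative only).

def pixel_fillrate_gpixels(rops: int, core_mhz: float) -> float:
    # Pixel fillrate in gigapixels/s = ROPs * core clock
    return rops * core_mhz / 1000.0

def texel_fillrate_gtexels(tmus: int, core_mhz: float) -> float:
    # Texel fillrate in gigatexels/s = TMUs * core clock
    return tmus * core_mhz / 1000.0

def mem_bandwidth_gb_s(bus_width_bits: int, effective_mhz: float) -> float:
    # Memory bandwidth in GB/s = (bus width / 8 bytes) * effective data rate
    return bus_width_bits / 8.0 * effective_mhz / 1000.0

print(pixel_fillrate_gpixels(16, 783))    # ~12.5 GP/s
print(texel_fillrate_gtexels(32, 783))    # ~25.1 GT/s
print(mem_bandwidth_gb_s(128, 3608))      # ~57.7 GB/s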

Without doing the above, a codec pack should only be deciding if a user needs to use Bilinear-only, instead of the madVR defaults, to help ensure reliable playback on install. For other more intensive & high-quality madVR presets, it would be best to have the user manually toggle them in the codec pack's settings app after install. At that point, if they don't work well, it would be the user's fault.
cyberbeing is offline   Reply With Quote
Old 2nd January 2013, 08:02   #16698  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Quote:
Originally Posted by SamuriHL View Post
Except that I've done 1080/60i on that setup. No frame drops. Perfectly smooth. Obviously deinterlacing with madVR.
1440x1080i60 --> 1920x1080p60 ?

If it still doesn't cause any frame drops with Jinc3+AR, it means the Fermi core's efficiency is much better than the Tesla core's, or that different per-pixel adaptive deinterlacing algorithms are used on these two GPUs.

Your GTS450 : 783 MHz 192 core + 128-bit DDR5-3600

My GTX260+ : 680 MHz 216 core + 384-bit DDR3-2000
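
Plugging the listed figures into bus width / 8 bytes * effective data rate gives a rough memory-bandwidth comparison (the listed specs themselves may be slightly off):

Code:
# Memory bandwidth from the specs as listed above (bus_bits / 8 * effective GT/s).
# The listed figures may differ slightly from the official spec sheets.
print(128 / 8 * 3.6)   # GTS 450 : ~57.6 GB/s
print(384 / 8 * 2.0)   # GTX 260+: ~96.0 GB/s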

Last edited by pie1394; 2nd January 2013 at 08:06.
pie1394 is offline   Reply With Quote
Old 2nd January 2013, 08:12   #16699  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Madshi, can overlay's transition to fullscreen windowed mode be improved? The windowed image tends to stick on screen for a bit, with the enlarged image sometimes shifting back and forth for a few frames before playing normally, while the windowed image is still visible in the top-left corner. FSE mode, when enabled, provides a much smoother transition without any issues. This is on HD3000/4000, Windows 7 x64.
ryrynz is offline   Reply With Quote
Old 2nd January 2013, 10:09   #16700  |  Link
Niyawa
Registered User
 
Niyawa's Avatar
 
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
Quote:
Originally Posted by cyberbeing View Post
Minimum VRAM should be based on the user's desktop resolution, and will always be the same for all the presets except lowest (Bilinear). If you want a basic guideline, it should be something like the following for optimal performance:

1366x768: 256MB VRAM
1600x900: 384MB VRAM
1920x1080: 512MB VRAM
2560x1440: 786MB VRAM
Thanks, I'll be using that info.

Quote:
Originally Posted by cyberbeing View Post
As for VRAM bandwidth, 15GB/s is around the bare minimum needed for playback, but you probably want 20GB/s+ to give a bit of performance headroom.
Thanks for your thoughts on this as well.

Quote:
Originally Posted by cyberbeing View Post
That said, if you want to automatically pick madVR settings presets for a user, it can't be done with a simple chart like the one you have. You would either need to build a GPU database, and/or calculate theoretical pixel fillrate, texel fillrate, and memory bandwidth values on the fly. Shader performance determines 99% of madVR performance, yet you've left it out of your chart entirely. Alternatively, you could wait for madshi to release an updated madVR benchmark and create a DB that way. Also keep in mind that pushing the limits of what a GPU is capable of with 24fps video is a bad idea, since it will result in completely unwatchable 60fps with constant dropped frames. For example, there is still a minority of anime Blu-rays which are released as 1080i, resulting in 24fps / 60fps VFR.
Actually, I didn't "leave it out". I was going to do more research on GPU clocks and other matters, including this one; you just gave me the heads-up earlier than I expected, haha. As for the 24fps point, I have some 60fps samples here, but I admit I never got around to testing them. Now, a database, huh... well. With some filtering and more research I could make a list of supported GPUs and their respective presets; the problem will be how we get that working and what algorithm we use, since we still don't know what the lowest-end GPU is that can play the "Highest" preset. We could work around that later, though.

Quote:
Originally Posted by cyberbeing View Post
Without doing the above, a codec pack should only be deciding if a user needs to use Bilinear-only, instead of the madVR defaults, to help ensure reliable playback on install. For other more intensive & high-quality madVR presets, it would be best to have the user manually toggle them in the codec pack's settings app after install. At that point, if they don't work well, it would be the user's fault.
That's a pretty good point, but we made those presets (the algorithm ones) to offer a convenient yet good quality vs. performance range in one click. For now that's what we've been doing: letting the user select what he wants, with "Middle" as the default since most modern machines can play it (though I'm starting to doubt that). We're already working on a tool that resets the presets without needing to reinstall, so soon enough it won't be a problem if someone selects something his hardware isn't up to.

Hell, I have to agree with basically everything you said. For now I'll put the chart on hold, then. We'll get back to it when madVR is closer to 1.x and the pack is more user-friendly. But thanks, cyber, and everyone else; you guys have been a big help. Don't mind me if I come back with more things to discuss; you all are everything I've got.
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack
Niyawa is offline   Reply With Quote