Old 24th May 2022, 03:50   #63121  |  Link
tyner
Registered User
 
Join Date: Aug 2011
Posts: 5
Best of these Mobile Graphics for DVD Scaling & Efficiency via madVR?

Among the laptops I'm looking to buy, some will have these graphics options:

https://www.notebookcheck.net/GeFor.....247598.0.html

I think the 1650 Ti may be the oldest of these mobile parts, but a user of that card said he could use a number of the higher-quality settings in madVR to scale DVDs to 1080p. That's what I want to do.

But of the above three, how do they compare at running madVR for quality 1080p scaling of commercially issued (Warners, CBS Paramount, Universal, Fox) movies and ~ 60 minute TV show DVDs?

And with least power consumption, heat and fan noise?

That is, which one offers the best balance of performance and efficiency for this task?

From what I can make of the Cinebench and power consumption scores at the above link, the P2000 seems like the best choice. Yes?

Last edited by tyner; 24th May 2022 at 19:43.
tyner is offline   Reply With Quote
Old 24th May 2022, 15:50   #63122  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 609
Quote:
Originally Posted by huhn View Post
madVR doesn't have a deinterlacer; it just uses the GPU-provided deinterlacer. If that's bad you get bad results; if it's good you get good results.
Are you definitely sure about this?

I'm asking because on that Cowboy Bebop clip with [deint=Film], the Ctrl+J screen is actually reporting the cadence changes in real time as I'm playing the intro sequence, as if to imply madVR is detecting the cadence and is somehow involved in deinterlacing. Otherwise there would have to be some Nvidia API it uses to query the cadence from the GPU driver?
flossy_cake is offline   Reply With Quote
Old 24th May 2022, 17:59   #63123  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,920
Yes, I am, because that is inverse telecine, not deinterlacing, and it does not work on interlaced sources, only telecined ones.

Yes, madVR has its own field-matching algorithm for IVTC.

A modern deinterlacer in a TV does both at the same time, BTW.
But interlaced output is not really possible with madVR, and is in general kind of special to do on a PC. Interlaced-mode chroma scaling (there is a more official name for that, I don't know it) and image scaling are not used anywhere as far as I know.
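
To illustrate the field-matching idea, a rough sketch only (this is not madVR's actual code; the comb metric and the names here are simplified inventions):

Code:
import numpy as np

def weave(top, bottom):
    # Interleave a top and a bottom field back into one progressive frame.
    h, w = top.shape
    frame = np.empty((2 * h, w), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

def comb_metric(frame):
    # Large line-to-line differences suggest the two fields came from
    # different film frames (combing); small values suggest a clean match.
    return np.abs(np.diff(frame.astype(np.float32), axis=0)).mean()

def match_fields(cur_top, cur_bottom, prev_bottom):
    # Try the current bottom field and the previous one; keep whichever
    # weaves with less combing. A real IVTC also tracks the 3:2 cadence
    # over time and drops the duplicate frame this produces.
    c = weave(cur_top, cur_bottom)
    p = weave(cur_top, prev_bottom)
    return c if comb_metric(c) <= comb_metric(p) else p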
huhn is offline   Reply With Quote
Old 24th May 2022, 19:41   #63124  |  Link
tyner
Registered User
 
Join Date: Aug 2011
Posts: 5
Would someone kindly reply to my post? As this is the madVR forum I'd expect it to be the best place for answers to my related hardware questions.

Last edited by tyner; 24th May 2022 at 19:51.
tyner is offline   Reply With Quote
Old 24th May 2022, 20:34   #63125  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,920
Your link is dead.
Laptop hardware is not easy to compare, because the GPU name says nothing about how it is configured.

You care about noise and power consumption, but madVR is not power efficient, and it's hard if not impossible to judge a laptop's cooling system; they are not built to be silent under load.
huhn is offline   Reply With Quote
Old 24th May 2022, 20:50   #63126  |  Link
Sunspark
Registered User
 
Join Date: Nov 2015
Posts: 471
tyner, this is a hobbyist forum, not official support. This product isn't publicly developed anymore, and the developer hasn't posted here in years; he is working on another project now.

For the purpose you indicated, scaling interlaced 480i DVD to 1080p, any device, even a little ARM device, can do it. Even your monitor can do it. For the higher-quality settings, sure, the CPU and GPU play a role, but your source is only 480i and you'll be able to do a lot with any machine these days, so focus on the other factors that matter for the device's other uses: screen, keyboard, etc.
Sunspark is offline   Reply With Quote
Old 24th May 2022, 21:35   #63127  |  Link
tyner
Registered User
 
Join Date: Aug 2011
Posts: 5
Quote:
Originally Posted by huhn View Post
Your link is dead.
Laptop hardware is not easy to compare, because the GPU name says nothing about how it is configured.

You care about noise and power consumption, but madVR is not power efficient, and it's hard if not impossible to judge a laptop's cooling system; they are not built to be silent under load.
FWIW, does this work? https://www.notebookcheck.net/GeForc....247598.0.html
tyner is offline   Reply With Quote
Old 24th May 2022, 21:53   #63128  |  Link
tyner
Registered User
 
Join Date: Aug 2011
Posts: 5
Quote:
Originally Posted by huhn View Post
Laptop hardware is not easy to compare, because the GPU name says nothing about how it is configured.

You care about noise and power consumption, but madVR is not power efficient, and it's hard if not impossible to judge a laptop's cooling system; they are not built to be silent under load.
To quote the user:

"A very humble PC can play back DVD to 1080p. Coffee Lake iGPU, from what I read, should max out at about 4K UHD @ 24fps. I played a DVD on a laptop that has an NVidia GTX 1650, at 1080p over HDMI, using MadVR. I couldn't quickly find a setting that was too demanding. It was able to use Jinc to upscale chroma and image, with anti-ringing filter. A similar card would be around $220, and is possibly overkill at that.

This may really be a question of tradeoffs among size, noise, performance, and maintainability (e.g. the ability to open the case). Even with all that in mind, the Celeron in the little industrial PC is good enough that I don't bother to play back DVDs on the much-more-powerful laptop. Depends how far you want to take it, in which direction.
"

Considering the rest of his apparent hardware besides the 1650 Ti, it seems that he was able to use at least some of the higher-end madVR algorithms and yet didn't mention issues with fan noise and heat. My laptop will likely be an HP 15" ZBook with an SSD and a 35-watt Comet Lake Xeon CPU. So I thought, why not experiment between the iGPU and the best of those three dedicated graphics options? If the results balance well enough between scaling quality and heat/noise with either the iGPU or the discrete GPU, then fine. Alternatively, maybe the upgraded JRVR scaler in the latest JRiver version?
tyner is offline   Reply With Quote
Old 24th May 2022, 22:04   #63129  |  Link
tyner
Registered User
 
Join Date: Aug 2011
Posts: 5
Quote:
Originally Posted by Sunspark View Post
tyner, this is a hobbyist forum, not official support. This product isn't publicly developed anymore, and the developer hasn't posted here in years; he is working on another project now.

For the purpose you indicated, scaling interlaced 480i DVD to 1080p, any device, even a little ARM device, can do it. Even your monitor can do it. For the higher-quality settings, sure, the CPU and GPU play a role, but your source is only 480i and you'll be able to do a lot with any machine these days, so focus on the other factors that matter for the device's other uses: screen, keyboard, etc.
Agreed, but like with audio, everything counts. Thus, IF the processor in this year's Sony 48" to 55" OLED TV I may go with leaves my best DVDs looking too soft (which would not be surprising, given it has to interpolate loads of missing data from a 480-line image to intelligibly fill a 4K screen), might feeding it a signal that's first scaled to 1080p yield an overall better-looking image?

I'm not expecting perfection, of course, but since I'm looking to buy a fairly muscular laptop for the long term (see my last post), why not go for the discrete graphics device that strikes the best balance between scaling performance and power draw?
tyner is offline   Reply With Quote
Old 24th May 2022, 22:09   #63130  |  Link
lvqcl
Registered User
 
Join Date: Aug 2015
Posts: 294
Quote:
Originally Posted by tyner View Post
Agreed, but like with audio, everything counts.
What do you mean?
lvqcl is offline   Reply With Quote
Old 25th May 2022, 19:29   #63131  |  Link
el Filou
Registered User
 
el Filou's Avatar
 
Join Date: Oct 2016
Posts: 896
From those three, the 1650 Ti is better on paper, because with more shader cores you can run the same workload at lower clocks for better efficiency and lower heat/noise, or run a heavier workload. But as huhn said, with laptops you need to see how the GPU is configured.
You could have one laptop with a 1024-core part but badly designed cooling, and it could get noisier than another with 768 cores running at higher clocks. Some laptop manufacturers also limit the TDP and therefore the performance.

I have a 1050 Ti that is passively cooled in my HTPC, and I can use NGU High to upscale DVDs and even the older HDR tone mapping to watch UHD BDs, but then its cooler is massive, so I have no idea how that would translate to a laptop's performance (something lighter like Jinc or Lanczos would be easy, I think).
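
Purely as a back-of-envelope illustration of the cores-versus-clocks point (the core counts, clocks and voltages below are invented, not measured for any of these GPUs), dynamic power scales roughly with cores × V² × f:

Code:
def rel_power(cores, freq_ghz, volt):
    # Relative dynamic power, assuming power ~ cores * V^2 * f.
    return cores * volt**2 * freq_ghz

# Same rough throughput (cores * clock), but the wider, slower part can
# run at a lower voltage and ends up drawing noticeably less power.
wide_and_slow   = rel_power(cores=1024, freq_ghz=1.2, volt=0.80)  # ~786
narrow_and_fast = rel_power(cores=768,  freq_ghz=1.6, volt=0.95)  # ~1109
print(wide_and_slow, narrow_and_fast)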
__________________
HTPC: Windows 10 22H2, MediaPortal 1, LAV Filters/ReClock/madVR. DVB-C TV, Panasonic GT60, Denon 2310, Core 2 Duo E7400 oc'd, GeForce 1050 Ti 536.40

Last edited by el Filou; 25th May 2022 at 19:39.
el Filou is offline   Reply With Quote
Old 27th May 2022, 10:01   #63132  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 609
Quote:
Originally Posted by tyner View Post
Agreed, but like with audio, everything counts. Thus, IF the processor in this year's Sony 48" to 55" OLED TV I may go with leaves my best DVDs looking too soft (which would not be surprising, given it has to interpolate loads of missing data from a 480-line image to intelligibly fill a 4K screen), might feeding it a signal that's first scaled to 1080p yield an overall better-looking image?
IMO the low resolution of DVD seems to benefit a lot from madVR's sharpening options (processing > image enhancements). I mentioned some sharpening presets I was working on here -- AdaptiveSharpen medium (strength = 0.4) seems to be my current preference if the source is a bit soft.

But my TV's sharpening algorithm is interacting with it as well, since I don't run 0 sharpness on my TV, so what I'm seeing isn't necessarily what you'll see.

Another factor is whether your PC is outputting 480p or 1080p/2160p. If the former, then your TV will probably apply pre-resize sharpening to the 480p, and those sharpening artefacts will get enlarged when the TV upscales, which tends to make the resulting image look much sharper, and you might not feel the need to add any sharpening in madVR.

Deinterlacing options will be important as well -- probably best to tick "automatically activate when needed" and untick both "disable automatic source type detection" and "only look at pixels in frame center". This is based on my assumption that madVR's cadence detection is competent, but I haven't really checked it with enough sources to confirm that yet. DVDs and old content can sometimes have all kinds of janky cadence changes which can trip it up. I only have a few interlaced sources in my collection and I know in advance whether they are film or video based, so I'm adding the [deint=Film/Video] tag to the file/folder to force the best deinterlacing mode. By the sounds of it you have a large DVD collection and it could be too labour intensive to manually tag them all, hence my recommendation to use madVR's auto detection.

Last edited by flossy_cake; 27th May 2022 at 10:07.
flossy_cake is offline   Reply With Quote
Old 27th May 2022, 19:41   #63133  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,920
Auto detection doesn't work. If you have doubts, you should use the deinterlacer.
huhn is offline   Reply With Quote
Old 28th May 2022, 08:59   #63134  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 609
Quote:
Originally Posted by huhn View Post
auto detection doesn't work.
I just checked it again now and you're right -- it doesn't work. It seems to just use video-mode deinterlacing the whole time, regardless of the source cadence.

Not sure why I didn't detect this earlier. I must have been looking for interlacing mouse teeth, not seeing them, and thinking that meant it was working, when in reality it was just video-mode deinterlacing.

So manually tagging parent folders or individual files with [deint=film/video] will be necessary to get optimal deinterlacing. Although this is labour intensive, it's still much better than having to remember to manually change the setting in the GUI for every file, and it only needs to be done once per file. The fact that it works for parent folders is also a huge time saver. Most content tends to be film-based, so tagging the top-level folder with [deint=film] should suffice. However, for those exceptions where the content is video-based, it is somewhat annoying that tagging the file itself does not override the parent folder's tag, though this can be worked around by placing all video-based files in a separately tagged folder.
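
If you do go the folder-tagging route, here is a minimal sketch for bulk-renaming show folders (assuming the name tag behaves as described above; the library path is just a placeholder):

Code:
import os

LIBRARY = r"D:\DVD Rips"   # placeholder library root, change to yours
TAG = " [deint=film]"

for name in os.listdir(LIBRARY):
    src = os.path.join(LIBRARY, name)
    # Only tag folders that don't already carry a deint tag.
    if os.path.isdir(src) and "[deint=" not in name:
        os.rename(src, src + TAG)
        print("tagged:", name)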

Last edited by flossy_cake; 28th May 2022 at 09:13.
flossy_cake is offline   Reply With Quote
Old 31st May 2022, 15:14   #63135  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 609
The answer to this is most likely "no", but is it possible to have madVR forcibly downscale the video resolution?

I've got some old TV shows which are HD 1080p remasters, but the shows were made before HD was a thing, and I'm not sure the artistic intent is quite right in HD. It feels like I'm seeing too much. I suppose an analogy might be running PS1 games in HD: technically better, but also different. I'd like to try downscaling these shows to around 480p or 576p internally, if possible.

flossy_cake is offline   Reply With Quote
Old 31st May 2022, 17:10   #63136  |  Link
ashlar42
Registered User
 
Join Date: Jun 2007
Posts: 655
I am sure this has already been discussed but... is there a way to have madVR deinterlacing working when outputting 10 bit?

How LAV's D3D11 Native decoding and D3D11 Copy-Back interact with Direct3D 11 presentation in madVR is not 100% clear to me.

I understand that with native decoding madVR does not deinterlace, because it uses a CPU algorithm for IVTC, but should D3D11 Copy-Back allow both 10 bit output and deinterlacing?

I ask before testing because, if it's possible, I might need to debug; if it's known to be impossible, well... no point in testing.
__________________
LG 77C1 - Denon AVC-X3800H - Windows 10 Pro 22H2 - Kodi DSPlayer (LAV Filters, xySubFilter, madVR, Sanear) - RTX 4070 - Ryzen 5 3600 - 16GB RAM
ashlar42 is offline   Reply With Quote
Old 31st May 2022, 17:21   #63137  |  Link
lvqcl
Registered User
 
Join Date: Aug 2015
Posts: 294
Copy-back is the same as software decoding (w.r.t. madVR)
lvqcl is offline   Reply With Quote
Old 31st May 2022, 17:55   #63138  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 609
Well, huhn has written that madVR does not do any deinterlacing and the GPU does it. I compared my AMD vs. Nvidia GPU on those test files and they deinterlace slightly differently despite running the same version and settings of madVR and MPC-HC. I suspect madVR performs cadence detection only, which could be used to tell the GPU which method of deinterlacing to use, but that feature is broken anyway.

As for the comment that "copy-back doesn't do decoding w.r.t. madVR", I can't make sense of that statement, as madVR doesn't do any decoding as far as I can tell. I am using "DXVA2 copy-back" in the LAV decoder and am definitely getting GPU-accelerated decoding.
flossy_cake is offline   Reply With Quote
Old 31st May 2022, 21:23   #63139  |  Link
ashlar42
Registered User
 
Join Date: Jun 2007
Posts: 655
Quote:
Originally Posted by flossy_cake View Post
Well, huhn has written that madVR does not do any deinterlacing and the GPU does it. I compared my AMD vs. Nvidia GPU on those test files and they deinterlace slightly differently despite running the same version and settings of madVR and MPC-HC. I suspect madVR performs cadence detection only, which could be used to tell the GPU which method of deinterlacing to use, but that feature is broken anyway.
Asmodian's guide to madVR settings states that native D3D11 decoding stops IVTC from working, because that happens at the CPU level and, with native decoding, the data never leaves the GPU (decoding -> madVR processing -> output, or so I understood it):

madVR's IVTC is not functional when using DXVA2 or D3D11 native GPU decoding because madVR's IVTC algorithm runs on the CPU and the decoded video data is never copied to system memory.

Quote:
As for the comment that "copy-back doesn't do decoding w.r.t MadVR" I can't make sense of that statement as MadVR doesn't do any decoding as far as I can tell. I am using "DXVA2 copy-back" in LAV decoder and am definitely getting GPU accelerated decoding.
Yeah, that's just wrong. There used to be a time when DXVA2 copy-back was the suggested setting and it most definitely allowed for hardware decoding.
__________________
LG 77C1 - Denon AVC-X3800H - Windows 10 Pro 22H2 - Kodi DSPlayer (LAV Filters, xySubFilter, madVR, Sanear) - RTX 4070 - Ryzen 5 3600 - 16GB RAM
ashlar42 is offline   Reply With Quote
Old 31st May 2022, 21:25   #63140  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,920
Quote:
Originally Posted by ashlar42 View Post
I am sure this has already been discussed but... is there a way to have madVR deinterlacing working when outputting 10 bit?

How LAV's D3D11 Native decoding and D3D11 Copy-Back interact with Direct3D 11 presentation in madVR is not 100% clear to me.

I understand that with native decoding madVR does not deinterlace, because it uses a CPU algorithm for IVTC, but should D3D11 Copy-Back allow both 10 bit output and deinterlacing?

I ask before testing because, if it's possible, I might need to debug; if it's known to be impossible, well... no point in testing.
Some short facts.

madVR can't IVTC with native decoding: if it is DXVA2 or D3D11 native, it will not IVTC, because the data is not on the CPU where the algorithm needs it. If copy-back is used, madVR can do everything, because it doesn't care (or may not even know) whether it is software decoded or not.

10 bit can be deinterlaced (depends on your GPU driver, as usual). Just software decode 10 bit or force P010 output, play the file, and press Ctrl+Shift+Alt+T or D (or both). If you get video-mode deinterlacing, it will do it.

DXVA2 native can deinterlace; it uses the same API, so it pretty much also asks it to deinterlace. D3D11 native can't deinterlace at all; madshi never implemented it, and that's why it should not be recommended at all. This is just a madVR limitation: D3D11 has a modern API for deinterlacing, and as you can see in many other video renderers it works just the same as or better than DXVA2.

The IVTC is not only doing cadence detection; it is also doing field matching and repeat-field dropping. It can even be applied to 60p with 24p in it and restore it, just the things an IVTC (inverse telecine) does.
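
A toy sketch of just the "restore 24p from 60p" part (assuming decoded frames as numpy arrays; a real IVTC is far more robust than this naive difference threshold):

Code:
import numpy as np

def drop_pulldown_duplicates(frames, threshold=1.0):
    # With 3:2 pulldown to 60p, each film frame is repeated two or three
    # times, so dropping near-duplicates restores the ~24 fps cadence.
    unique = [frames[0]]
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float32) -
                      unique[-1].astype(np.float32)).mean()
        if diff > threshold:       # keep only frames that actually changed
            unique.append(frame)
    return unique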
huhn is offline   Reply With Quote