Old 26th March 2014, 20:52   #25341  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
One thing to note is that to see a single dropped frame (my current issue) you need reasonably accurate video playback in the first place (i.e. smooth motion or a matching refresh rate). If your playback is not smooth to begin with (e.g. "raw" 24p@60Hz) then it is unlikely you will notice a frame drop, because it will be buried in the overall judder.
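A rough sketch of why (illustrative Python of my own, not anything madVR actually does):

Code:
# At "raw" 24p@60Hz each source frame already sits on screen for an
# uneven 3 or 2 vsyncs (the 3:2 cadence), so frame durations alternate
# ~50ms/~33ms even with zero drops.
fps_src, hz = 24, 60
frame_at_vsync = [int(v * fps_src / hz) for v in range(40)]
cadence = [frame_at_vsync.count(f) for f in range(16)]
print(cadence)  # [3, 2, 3, 2, ...] -> a drop is just one more uneven step
# With smooth motion or a matched refresh rate every frame lasts equally
# long, so a single hiccup stands out immediately.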
e-t172 is offline   Reply With Quote
Old 26th March 2014, 21:12   #25342  |  Link
seiyafan
Registered User
 
Join Date: Feb 2014
Posts: 162
Apart from nearest neighbor, which is the lowest-quality setting, I can't tell any difference between the other settings, from bilinear all the way to NNEDI3. To make things worse, I can't tell nearest neighbor from NNEDI3 in chroma upscaling, lol! How...? Do you really need a projector to see the difference?
seiyafan is offline   Reply With Quote
Old 26th March 2014, 21:29   #25343  |  Link
QBhd
QB the Slayer
 
QBhd's Avatar
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
One thing I have noticed about trying to tell the differences between algorithms... watching stills/frozen frames/same scene over and over doesn't seem to help. But I DO notice the differences if I watch for extended periods of time. It's more about NOT noticing things.

QB
__________________
QBhd is offline   Reply With Quote
Old 26th March 2014, 22:35   #25344  |  Link
GREG1292
Registered User
 
Join Date: Aug 2007
Location: Fort Wayne, Indiana
Posts: 52
Had a Darbee hooked up for a short time before NNEDI3, and it was OK with a Panny 7000. For me it is not needed with my current DLP projector and the latest madVR.
I tried to make a smaller image (the colors aren't right compared to the projector) just to post something showing sharpness with madVR, but removed it because it wasn't posting correctly.

Last edited by GREG1292; 27th March 2014 at 01:38.
GREG1292 is offline   Reply With Quote
Old 27th March 2014, 00:48   #25345  |  Link
QBhd
QB the Slayer
 
QBhd's Avatar
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
Image is far, far too large.

And why is Gandalf's skin so pink!

QB
__________________
QBhd is offline   Reply With Quote
Old 27th March 2014, 03:26   #25346  |  Link
renethx
Registered User
 
Join Date: Feb 2008
Posts: 45
Quote:
Originally Posted by Niyawa View Post
Interesting. I guess no one here has any sort of idea where we can set a deviation to that change...
In case you haven't looked at

http://www.avsforum.com/t/1477339/450#post_24514253

Here is a summary:

- PCI Express version (1.1 vs 2.0 vs 3.0) is the greatest factor affecting OpenCL <-> D3D9 interoperation on AMD GPUs. madshi speculates that AMD internally does the interop using some sort of copyback between CPU RAM and GPU RAM.
- Haswell PCIe 3.0 is the fastest in the benchmark, then Ivy Bridge PCIe 3.0 (PCIe 3.0 is supported only by Core i5 and higher), then AMD (Kaveri). However, the differences between them are smaller in real-world video playback (measured by rendering times).
- The lower the PCIe link speed, the higher the GPU utilization (measured with HWiNFO64). I am not sure how to interpret this.
- If you insert the card in the second or third PCIe 3.0 x16 slot of a Z87 chipset motherboard, it works only at a PCIe 3.0 x8 or even x4 link, which has the same bandwidth as PCIe 2.0 x16 or PCIe 1.1 x16 respectively (see the quick check below). So be careful. H87 / B85 chipset motherboards don't have this problem, because the second PCIe x16 slot (if it exists) is connected to the chipset (and works at 2.0 x4).
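For reference, those bandwidth equivalences are just arithmetic on the nominal per-lane link rates (an illustrative Python check, not a measurement):

Code:
# Nominal payload bandwidth per PCIe lane: raw transfer rate times
# encoding efficiency (8b/10b for 1.1/2.0, 128b/130b for 3.0), in MB/s.
def lane_MBps(gtps, enc_num, enc_den):
    return gtps * 1e9 * enc_num / enc_den / 8 / 1e6

rates = {"1.1": lane_MBps(2.5, 8, 10),     # ~250 MB/s per lane
         "2.0": lane_MBps(5.0, 8, 10),     # ~500 MB/s per lane
         "3.0": lane_MBps(8.0, 128, 130)}  # ~985 MB/s per lane

for gen, lanes in [("3.0", 8), ("2.0", 16), ("3.0", 4), ("1.1", 16)]:
    print("PCIe %s x%d: %.1f GB/s" % (gen, lanes, rates[gen] * lanes / 1000))
# 3.0 x8 ~ 7.9 GB/s ~ 2.0 x16 (8.0); 3.0 x4 ~ 3.9 GB/s ~ 1.1 x16 (4.0)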



BTW, using Bilinear with NNEDI3 fills the gap between Jinc3 and N16 x BC75AR, and the gap between N16 x BC75AR and N32 x BC75AR. Sometimes the GPU may not be powerful enough for N16 x BC75AR but is powerful enough for N16 x Bilinear, and PQ is still better than Jinc3 in most cases.

Lanczos is very close to Bicubic in quality and I don't think it's worth its own level; maybe level 2a = Bicubic, level 2b = Lanczos. There is a huge gap in both quality and speed between levels 1, 2, 3, 4 and 5, while the difference in quality between "a" and "b" is very small (that's why I chose "a" and "b"; "c" could be Jinc, but the difference between b and c is very small and NNEDI3 x c is too slow for most cases).
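Written out as a ladder, cheapest to best (the list itself is just an illustration; level 1 and the exact labels aren't spelled out here):

Code:
ladder = [
    "Bicubic75 AR",          # level 2a
    "Lanczos",               # level 2b: very close to Bicubic
    "Jinc3 AR",              # level 3
    "NNEDI3 16 x Bilinear",  # fills the Jinc3 <-> N16 x BC75AR gap
    "NNEDI3 16 x BC75AR",
    "NNEDI3 32 x Bilinear",  # fills the N16 <-> N32 x BC75AR gap
    "NNEDI3 32 x BC75AR",
]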

Last edited by renethx; 27th March 2014 at 13:04.
renethx is offline   Reply With Quote
Old 27th March 2014, 12:12   #25347  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,812
Now I've got a GTX 780 Ti @ 1.25 GHz and I still can't make use of error diffusion: with 4K the GPU load sits at 100% and frames drop...
aufkrawall is offline   Reply With Quote
Old 27th March 2014, 13:15   #25348  |  Link
Kalanoch
Registered User
 
Join Date: Jan 2014
Posts: 10
Quote:
Originally Posted by qduaty View Post
I have NNEDI3 working on a GTX 660/GK106 with Nvidia 335.23 drivers (and it's 20% faster). I blocked the deletion of OpenCL/D3D interop images between frames (madVR gets existing instances mapped to IDirect3DTexture9*) and it seems to fix the problem entirely.
This is huge, if it's legit and doesn't negatively affect any operations or image quality. (Hopefully it is, because I'd like to use the latest Nvidia drivers!)
Kalanoch is offline   Reply With Quote
Old 27th March 2014, 13:20   #25349  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Edit: Hah! It seems Kalanoch had the same idea - I guess I should have refreshed.

Quote:
Originally Posted by qduaty View Post
I have NNEDI3 working on a GTX 660/GK106 with Nvidia 335.23 drivers (and it's 20% faster). I blocked the deletion of OpenCL/D3D interop images between frames (madVR gets existing instances mapped to IDirect3DTexture9*) and it seems to fix the problem entirely.
I'm curious what madshi will think of this. He's usually pretty good at responding to everything, but just so it doesn't get lost in the noise...
Ver Greeneyes is offline   Reply With Quote
Old 27th March 2014, 15:20   #25350  |  Link
nand chan
( ≖‿≖)
 
Join Date: Jul 2011
Location: BW, Germany
Posts: 380
BT.2020 support

How does madVR's support for BT.2020 work? Does madVR currently support constant-luminance encoding at all? How and when will it do so in the future?

How does color management work in the presence of BT.2020? How is BT.2020 content displayed on BT.709 monitors and vice versa? Do you intend to rely on the .3dlut being created against BT.2020? How will you transform into this space? Does madVR detect and respect the --primaries tag? What does it do in the absence of a .3dlut?

Do you do so with constant luminance? Which gamma transfer function do you use to de/encode BT.2020 and BT.709? How will it work moving forward to a constant-luminance environment? How do you plan on dealing with rounding/clipping artifacts on BT.709- and BT.2020-sourced 3dluts, especially for wide-gamut profiles?

When will madVR support .ICC profiles, and when will it support changing .ICC profiles at runtime (e.g. if you move between monitors)?
__________________
Forget about my old .3dlut stuff, just use mpv if you want accurate color management
nand chan is offline   Reply With Quote
Old 27th March 2014, 15:37   #25351  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
So I'm trying to put my GPU fan speed evil plan into action using BarelyClocked.exe, but it's not working at the moment. Any ideas?



The 30% setting is only there so the fan speed increase is easy to notice; ideally I want madVR to set the fan speed to 20% while a movie is running and put it back to 18% (inaudible) once my media player is closed. The command lines work fine on their own, and the profile appears to be loaded properly.

Quote:
Originally Posted by Asmodian View Post
a single new variable instead, pixels-per-second.
This would help a lot with lower-resolution interlaced material. I want to set higher neuron counts for 480p30 than for 480p60, but I also don't want very complex or numerous if statements. I could use pixels-per-second instead of width and height.
Very good point. I can't do 64-neuron NNEDI3 for any 720p29.97 or <2.35:1 720p25 content, so a "pixels-per-second" argument would hit the spot.
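Something like this is what it would enable (illustrative Python only; the thresholds are invented and madVR's actual profile-rule syntax is different):

Code:
# One "pixels-per-second" number replaces per-resolution if-chains.
def pixels_per_second(width, height, fps):
    return width * height * fps

def neurons(width, height, fps):
    pps = pixels_per_second(width, height, fps)
    if pps <= 720 * 480 * 30:        # 480p30 and below: plenty of headroom
        return 64
    elif pps <= 1280 * 720 * 30:     # 480p60, 720p25/29.97
        return 32
    else:                            # 720p60, 1080p and up
        return 16

print(neurons(720, 480, 30))   # 64
print(neurons(720, 480, 60))   # 32 -- same resolution, double the rate
print(neurons(1280, 720, 60))  # 16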

I would also appreciate a way to disable madVR's crash reporter, as PotPlayer appears to make it crash randomly when using its seamless playback feature.

Last edited by leeperry; 27th March 2014 at 18:31.
leeperry is offline   Reply With Quote
Old 27th March 2014, 15:43   #25352  |  Link
seiyafan
Registered User
 
Join Date: Feb 2014
Posts: 162
Speaking of color management and monitor calibration, there was a very active discussion on gamma a week ago, then it suddenly died. What was the conclusion? Is 2.2 still the best gamma to use to calibrate monitors, rather than sRGB or L*?
seiyafan is offline   Reply With Quote
Old 27th March 2014, 15:44   #25353  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,926
Quote:
Originally Posted by nand chan View Post
How does color management work in the presence of BT.2020? How is BT.2020 content displayed on BT.709 monitors and vice versa? Do you intend to rely on the .3dlut being created against BT.2020? How will you transform into this space? Does madVR detect and respect the --primaries tag? What does it do in the absence of a .3dlut?

Do you do so with constant luminance? Which gamma transfer function do you use to de/encode BT.2020 and BT.709? How will it work moving forward to a constant-luminance environment? How do you plan on dealing with rounding/clipping artifacts on BT.709- and BT.2020-sourced 3dluts, especially for wide-gamut profiles?

When will madVR support .ICC profiles, and when will it support changing .ICC profiles at runtime (e.g. if you move between monitors)?
madVR supports BT.2020. Encoding tags are read and used. madVR treats your display as BT.709, so everything that's not BT.709 is transformed to it. You can change this in the calibration settings, where BT.2020 is available too, even DCI-P3.

So you don't need a 3DLUT; a 3DLUT is just more accurate than a normal calibration with an ICC file.
huhn is offline   Reply With Quote
Old 27th March 2014, 15:48   #25354  |  Link
nand chan
( ≖‿≖)
 
Join Date: Jul 2011
Location: BW, Germany
Posts: 380
Quote:
Originally Posted by huhn View Post
So you don't need a 3DLUT; a 3DLUT is just more accurate than a normal calibration with an ICC file.
Okay, good to know that the monitor gamut is customizable. I don't understand this line though: what do you mean by "normal calibration with an ICC file"? I thought madVR *only* supports calibration via 3DLUT, or has that changed? Won't it generate a 3DLUT from the .icc either way? Doing the full ICC calculation at runtime on the GPU seems like an odd thing to do; or does it only extract the primaries/transfer characteristics and apply those, ignoring detailed stuff like the LUT tables in the profile?
__________________
Forget about my old .3dlut stuff, just use mpv if you want accurate color management
nand chan is offline   Reply With Quote
Old 27th March 2014, 16:05   #25355  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by seiyafan View Post
Speaking of color management and monitor calibration, there was a very active discussion on gamma a week ago, then it suddenly died. What was the conclusion? Is 2.2 still the best gamma to use to calibrate monitors, rather than sRGB or L*?
IIRC the discussion was about the gamma to use in madVR's internal dithering/error diffusion algorithms, which has nothing to do with monitor calibration. Regarding monitor gamma, the agreed-upon standard is ITU-R BT.1886, and I believe there is now wide consensus on that (at least I haven't seen any compelling arguments against it).

For comparison, BT.1886 is equivalent to a pure 2.4 power law on a perfect screen with infinitely deep black levels, and is roughly similar to sRGB gamma on typical IPS screens that have a 1000:1 contrast ratio.
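Concretely, this is the BT.1886 EOTF from Annex 1 of the spec (illustrative Python; the Lw/Lb values are assumptions for a typical display):

Code:
def bt1886(V, Lw=100.0, Lb=0.1, g=2.4):
    # a scales the curve to the display's white, b lifts it off black
    a = (Lw ** (1 / g) - Lb ** (1 / g)) ** g
    b = Lb ** (1 / g) / (Lw ** (1 / g) - Lb ** (1 / g))
    return a * max(V + b, 0.0) ** g

# Lb = 0 gives b = 0 and a = Lw, i.e. a pure 2.4 power law (perfect screen):
print(bt1886(0.5, Lb=0.0))  # 100 * 0.5**2.4 ~ 18.9 cd/m2
print(bt1886(0.5, Lb=0.1))  # ~21.6 cd/m2 on a 1000:1 display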

Besides, what the hell is "L* gamma"?

Quote:
Originally Posted by nand chan View Post
Okay, good to know that the monitor gamut is customizable. I don't understand this line though, what do you mean “normal calibration with a icc file”? I thought madVR *only* supports calibration via 3dlut, or has that changed? Won't it generate a 3dlut from the .icc either way? Doing the full ICC calculation at runtime on the GPU seems like an odd thing to do, or does it only extract the primaries/transfer characteristics and implement those, ignoring detailed stuff like LUT tables in the profile?
I believe that when huhn wrote "normal calibration with an ICC file" he meant the vcgt (gamma ramps) often found in ICC profiles, which has nothing to do with gamut mapping.

madVR does not support ICC, and as far as I know madshi has no plans to implement it. You're stuck with 3DLUTs, which are much less convenient to use. I have no idea how madVR does gamut mapping when you feed it BT.2020 content and ask it to convert to BT.709. That being said, by generating a 3DLUT using a CMS such as ArgyllCMS you get complete control over the gamut mapping, since it becomes the 3DLUT's responsibility, not madVR's.
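For what it's worth, the naive conversion (with no real gamut mapping) is just a 3x3 matrix in linear light plus a hard clip. A sketch, using the commonly derived BT.2020 -> BT.709 matrix; this is not madVR's actual code:

Code:
# Linear-light BT.2020 -> BT.709 primaries matrix (the inverse of the
# BT.2087 matrix), followed by a hard clip of out-of-gamut values.
M = [[ 1.6605, -0.5876, -0.0728],
     [-0.1246,  1.1329, -0.0083],
     [-0.0182, -0.1006,  1.1187]]

def bt2020_to_bt709_linear(rgb):
    out = [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    return [min(max(c, 0.0), 1.0) for c in out]  # clip instead of mapping

print(bt2020_to_bt709_linear([1.0, 0.0, 0.0]))  # pure BT.2020 red just clips
# A good 3DLUT replaces that clip with perceptually sensible gamut mapping.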

Last edited by e-t172; 27th March 2014 at 17:14.
e-t172 is offline   Reply With Quote
Old 27th March 2014, 16:47   #25356  |  Link
michkrol
Registered User
 
Join Date: Nov 2012
Posts: 167
Quote:
Originally Posted by leeperry View Post
So I'm trying to put my GPU fan speed evil plan into action using BarelyClocked.exe, but it's not working at the moment. Any ideas?
(cut)
The command lines work fine on their own, and the profile appears to be loaded properly.
I remember madshi mentioning that command-line execution for profiles is not implemented yet. I can't find the exact post right now.
michkrol is offline   Reply With Quote
Old 27th March 2014, 17:17   #25357  |  Link
seiyafan
Registered User
 
Join Date: Feb 2014
Posts: 162
Quote:
Originally Posted by e-t172 View Post
IIRC the discussion was about the gamma to use in madVR's internal dithering/error diffusion algorithms, which has nothing to do with monitor calibration. Regarding monitor gamma, the agreed-upon standard is ITU-R BT.1886, and I believe there is now wide consensus on that (at least I haven't seen any compelling arguments against it).

For comparison, BT.1886 is equivalent to a pure 2.4 power law on a perfect screen with infinitely deep black levels, and is roughly similar to sRGB gamma on typical IPS screens that have a 1000:1 contrast ratio.

Besides, what the hell is "L* gamma"?
Here are the options in my calibration software:
seiyafan is offline   Reply With Quote
Old 27th March 2014, 17:26   #25358  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Okay, so from a quick Google search it seems L* is an odd gamma spec that has nothing to do with video (it seems intended more for printing or other editing work). Don't use it.

Your software doesn't seem to have an option for BT.1886, which is not surprising (maybe that's what "HDTV (ITU-R)" refers to, but that's not clear from your screenshot). In that case a good approximation is to use sRGB, assuming you're using a typical IPS screen (1000:1 contrast ratio).
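A quick numeric check of that approximation (illustrative Python; Lw = 100, Lb = 0.1 are assumed values for a 1000:1 IPS panel):

Code:
def srgb_eotf(V):  # the piecewise sRGB decoding function
    return V / 12.92 if V <= 0.04045 else ((V + 0.055) / 1.055) ** 2.4

def bt1886(V, Lw=100.0, Lb=0.1, g=2.4):
    a = (Lw ** (1 / g) - Lb ** (1 / g)) ** g
    b = Lb ** (1 / g) / (Lw ** (1 / g) - Lb ** (1 / g))
    return a * max(V + b, 0.0) ** g

for V in (0.25, 0.5, 0.75):
    print(V, round(srgb_eotf(V) * 100, 1), round(bt1886(V), 1))
# 0.25: 5.1 vs 5.2 | 0.5: 21.4 vs 21.6 | 0.75: 52.2 vs 52.4 (cd/m2)
# -> close enough in the midtones for a usable approximation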

If you want complete control over what you're doing, I would recommend using ArgyllCMS to calibrate your screen and generate a 3DLUT, but that's more technically involved.
e-t172 is offline   Reply With Quote
Old 27th March 2014, 18:14   #25359  |  Link
Niyawa
Registered User
 
Niyawa's Avatar
 
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
Quote:
Originally Posted by neuron2 View Post
I respectfully disagree. Your own experience tells you about only...your own experience. And what is "eye training" anyway? 6233638 says he can easily see single frame drops, and I can too. I think it would be the exceptional case to find someone that cannot. Why do you think people complain about telecine judder? It's only a single field jerk compared to a full frame drop, and yet it is easily detected. Human vision is finely tuned for tracking motion; it's not surprising that we can detect gross discontinuities like frame drops.
Let me clarify. I once couldn't tell when a frame was dropped at all. I know many people close to me who can't either. Those same people can't tell the difference between a TN and an IPS screen; hell, some of my other friends can't even tell when a 4:3 movie has been stretched to 16:9. In my case, I've watched many samples and situations where a frame was being dropped or not dropped with the same content, and after some time it became easy for me to distinguish the stutter and other issues that dropped frames bring (thus the 'eye training'; maybe 'my eye got used to it' would also be appropriate). There's not much I know about video in general, at a technical level at least, but from both observation and my own experience I can say with clarity that not everyone perceives the same thing in the same way. So no, you shouldn't be surprised that some people can't tell that a frame is being dropped.

Quote:
Originally Posted by renethx View Post
BTW, using Bilinear with NNEDI3 fills the gap between Jinc3 and N16 x BC75AR, and the gap between N16 x BC75AR and N32 x BC75AR. Sometimes the GPU may not be powerful enough for N16 x BC75AR but is powerful enough for N16 x Bilinear, and PQ is still better than Jinc3 in most cases.

Lanczos is very close to Bicubic in quality and I don't think it's worth its own level; maybe level 2a = Bicubic, level 2b = Lanczos. There is a huge gap in both quality and speed between levels 1, 2, 3, 4 and 5, while the difference in quality between "a" and "b" is very small (that's why I chose "a" and "b"; "c" could be Jinc, but the difference between b and c is very small and NNEDI3 x c is too slow for most cases).
Thanks for that link and the summaries.

The only time bilinear made any outstanding difference for me, quality-wise, was as the image upscaling algorithm; for chroma it shouldn't be much of a trade-off. madshi told me that the GPU cost of bilinear vs any other algorithm is like night and day, so it makes sense for someone looking to try NNEDI3. Unfortunately I don't currently have the hardware to try all of those levels, so I appreciate your thoughts on them.

Those graphs are also based on a 1080p -> 1440p upscale, if I'm not wrong. If I could, I'd scale the performance data, but that's probably not the best way to go about it. I'd like to know what results 720p -> 1080p would give.
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack
Niyawa is offline   Reply With Quote
Old 27th March 2014, 18:39   #25360  |  Link
daglax
Registered User
 
Join Date: Mar 2014
Posts: 30
So I did a lot of testing today on whether or not NNEDI3 is really worth the trouble, and I came to the conclusion that it's not. I don't know what you guys are seeing, or believe you're seeing, but I sometimes think I'm in a homeopathy forum.

Here's a test for you. I took three screenshots: one with NNEDI3 32 neurons (bicubic upscaling and CR downscaling), one with JAR 4 taps, and one with BC75AR upscaling. Which one is NNEDI3, and which are the others?

Pic A
Pic B
Pic C

All pics are from an HQ 720p source.

Last edited by daglax; 27th March 2014 at 18:41.
daglax is offline   Reply With Quote