Old 15th May 2015, 09:59   #30021  |  Link
madshi
Quote:
Originally Posted by XMonarchY View Post
It's difficult to explain, but I know it for a fact. When I made the Rec.709 3DLUT I had my TV's Flesh Tone setting at 0, but I accidentally left Flesh Tone set to -15 when I created the Rec.601 3DLUT. That creates an obvious difference when I switch between the Rec.709 and Rec.601 3DLUTs using identical TV settings. The Rec.709 line is used for SMPTE C content for sure, at least in 88.5.
Ok, let's do some more tests:

1) Leave all your settings just as they are, but rename one of the 3dlut files to something else (e.g. add "dummy" to the file name). madVR should then complain about the missing file. If you do this with a Rec.601 video file, which of the two files do you have to rename to make madVR complain?

2) If you toggle through the primaries (Ctrl+Alt+Shift+P), does that make madVR load the correct 3dlut?

Quote:
Originally Posted by aufkrawall View Post
It's not very time consuming, and if one wants better quality it's not really a sacrifice.
Quote:
Originally Posted by omarank View Post
No, you may rest assured that we won’t be bored with the testing. The subsequent rounds of testing for the same feature actually help it evolve to become perfect.
Good to hear - thanks! So Shiandow, bring on the next version of your Deband script!

Quote:
Originally Posted by aufkrawall View Post
However, have we seen yet example images that show a real advantage for Shiandow's current implementation?
I for myself haven't found such an example yet. Higher threshold values can be nice with bad sources because of the deblocking effect, but they remove too much detail from good sources.
Well, to be honest, I'm kinda glad that it appears to be hard for Shiandow to create an algorithm which beats mine - it shows that my algorithm isn't half bad. But as long as he has ideas on how to improve his algorithm, let him keep trying. Maybe we'll end up with a superior solution sooner or later.

FWIW, maybe I could improve my "high" preset, too. I have the "angle" feature for "low" and "med", which is currently disabled for "high". I spent a lot more time developing the low/med algos. So maybe there's some room left in my algorithm to improve for "high". Sadly, I don't have a lot of time left for madVR at the moment. So it'll have to wait...

Quote:
Originally Posted by XMonarchY View Post
88.7 works wonderfully for me! I get no frame drops, even when I use the Chroma Upscaling SuperRes filter and also enable SuperRes in Image Refinement (High setting). Hell, I even enabled Octuple settings and everything is peachy! Shiandow's debanding with the new settings (0.2 for threshold and 1 for detail level) is also better. I used to get obvious big shimmering squares (NOT dithering) on darker grays in Mid/Low-HQ content that were not present when only the original debanding was used. With 88.7 and Shiandow's new settings, those shimmering squares are now a lot smaller. Sometimes I think the original debanding is still better, not as noisy.
Ok, thanks for your feedback!

Quote:
Originally Posted by tickled_pink View Post
Stupid(?) question - When testing deband algos, we should check one or the other, correct? Leaving both checked is effectively using both algos simultaneously, or am I missing something?
Correct.

Quote:
Originally Posted by cyberbeing View Post
This version appears to have broken D3D11 entirely on my Win7 GTX770 system, or rather it never activates and madVR always uses D3D9. It works when I reset madVR to defaults though, so it must be something related to my settings/profiles, but so far I've not been able to figure out which one causes it.
Quote:
Originally Posted by daert View Post
In the latest build (0.88.7) D3D11 is not enabled at all even if I check the option. The OSD always shows "(new path)", both in windowed mode and FSE.
Quote:
Originally Posted by Moony349 View Post
I don't think DX11 is working for me after updating to 88.7.

FSE mode says "fullscreen exclusive mode new path" instead of FSE 10 bit.

Going back to 88.5 with same settings fixed issue.
There are two possible reasons why the D3D11 path might not be activated:

1) You have desktop composition disabled (only possible on Windows 7).
2) You have the number of video frames which shall be presented in advance set to 16. I only found out after releasing v0.88.7 that 15 is the max I can do. Using 16 means the D3D11 device creation fails. So lower this option to 14 (15 isn't currently supported), and D3D11 should start working again.

Quote:
Originally Posted by x7007 View Post
Should I use native display bitrate 8bit or 10bit ? I'm using the D3D11.
Nobody can tell you; it depends on your display. Zoom the "smallramp.ytp" test pattern (see the madTestPatternSource filter download in the first post of this thread) to fullscreen, then compare 8bit vs 10bit output and use whichever looks better to your eyes.

Quote:
Originally Posted by x7007 View Post
What should I choose for debanding: Debanding Strength medium / fade in/out high? Or should I enable Shiandow's deband with threshold 0.5 and detail level 2? I don't remember if that's the default.
That's still up in the air. If you're not sure, use the "old" algorithm (low, medium or high) instead of Shiandow's deband algorithm for now.

Quote:
Originally Posted by x7007 View Post
Should I use quadruple luma resolution ?
If your GPU has the power for that, why not?

Quote:
Originally Posted by x7007 View Post
How come madVR keeps my settings even without the settings.bin file in the folder? And after it recreates the file, how does it remember all my settings without settings.bin?
The settings are also stored in the registry. You can double click on "restore default settings.bat" to clear out all old settings.

Quote:
Originally Posted by daert View Post
In 0.88.6 the refresh rate fix doesn't work. With D3D9 FSE both 24p and 60p work, but with D3D11 FSE only 60p works; 24p doesn't work yet.
Let me explain what happens: my monitor has a 23p mode with a blurred image and a timing of 23.970, so I created a 24p custom resolution with the exact timing of 23.976 and a clear image. When MPC enters FSE, the resolution is switched back to the blurred 23p even if I set my 24p in the Nvidia control panel.
I'm using MPC-HC 1.7.8.162 x64, LAV 0.65-2 and Windows 8.1 x64. I have a GTX 660 with 350.12 and an Intel Q9550.
So your blurred mode is known to Windows as 23p and your "good" mode is known to Windows as 24p? Which modes did you add to the madVR display mode changing list?

Quote:
Originally Posted by Asmodian View Post
It probably changes the error upscale step from bilinear to something else, maybe Jinc3 based on the rendering times?
High quality is Jinc3. Medium quality is SoftCubic50. This is for upscaling the error texture which is part of the SuperRes algorithm.

Quote:
Originally Posted by BetA13 View Post
1. The artefacts are back when using NNEDI3, both in chroma upscaling and image doubling.
It was fine with v0.87.21. (This was on a Kepler GTX 670, Win 7 64bit and 32bit...)
Maybe you need to reapply the fix suggested by cyberbeing, which IIRC was clearing the madVR OpenCL registry storage key and then recreating the kernel with the 64bit madVR build.

Quote:
Originally Posted by BetA13 View Post
2. When using D3D11 mode in FSE/windowed my monitor resolution resets and stuff is cut off... It also changes my desktop resolution and changes the Hz from 60 to 25...
With the normal D3D9 (new path) mode it works fine though, all 60 Hz...
Which modes do you have listed in the madVR display mode changer? And which exact mode does either D3D9 or D3D11 switch to? (Using the same video file)

Quote:
Originally Posted by XRyche View Post
It seems Direct3D11 FSE stopped showing an image during playback for me in this release. Direct3D11 FS Windowed works as well as all modes of Direct 3D9. Direct3D11 FSE was working fine in the previous release.
You mean you just get a black screen? Does the debug OSD work (Ctrl+J)? Do you have 10bit output enabled? Try with 8bit output.

Quote:
Originally Posted by Balthazar2k4 View Post
Thanks Asmodian. I am happy with the settings I am using, but knowing I can do more with a more powerful card is killing me. I am using a Sony VW600ES 4K projector to a 110" screen so every bit of improvement is noticeable. I will most likely go with the Titan X, but wanted to see if there was anything else out there to consider. Guess not.
Next generation GPUs are just around the corner, I'd wait for those, as already suggested by the other users.

Quote:
Originally Posted by James Freeman View Post
0.88.7 introduced it for me.
I had to run "restore default settings", and everything works fine now.
Is it a standard procedure to reset settings with every new build? If so, my bad, sorry.
It's not a standard procedure. Not sure which setting was the problem. Anyway, glad to hear it's working fine with default settings now.

Quote:
Originally Posted by James Freeman View Post
Alright, quality wise it is almost impossible to differentiate between the two when they are properly set.
Any one of them will be adequate.
Ok, thanks!

Quote:
Originally Posted by Dogway View Post
Artifacts would be rather low... unless you use something as aggressive as NNEDI3. As you said in the quote of my last post, "image" doubling is done with NNEDI3 for luma and Catmull-Rom for chroma
Only if you configure madVR that way!! In the latest build you can use the same algorithm for both luma and chroma, if you want. So problem solved.

Quote:
Originally Posted by Dogway View Post
Using high bitdepth precision is not going to save you from strong chroma plane degradation. Every conversion behaves like a lowpass filter in the chroma plane
Ok, let's do a simple check. Do this math for me, please:

34 * 0.222 = x
x / 0.222 = y

Do x and y differ? If not, then there's your proof that not *every* conversion behaves like a lowpass filter. Converting between RGB and YCbCr is pretty similar to what I wrote above: it's just a series of multiplications and additions, and there's *exact* inversion math to go back to the old colorspace. If I used float32 textures, the colorspace conversions would probably be perfectly lossless. Since I'm only using 16bit integer textures, a very small quality loss is to be expected. But it's nothing like a lowpass filter. A lowpass filter requires that each pixel is influenced by its neighbor pixels, and that's simply not the case when doing a color space conversion. The color space conversion might lose something like 0.00001% of precision, but that precision loss is far smaller than any instrument, let alone our eyes, could ever measure.

I'm not sure where you got your ideas from (lowpass filter and all). Those ideas are clearly technically incorrect.
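To illustrate the point, here's a quick sketch in Python (my own illustration of the principle, not actual madVR shader code - the coefficients are the standard BT.601 luma weights). Each pixel is converted completely independently; no neighbor pixels are involved anywhere, so there's nothing "lowpass" about it. In float math the round trip is exact; quantizing the intermediate values just adds a tiny per-pixel rounding error:

```python
# BT.601 luma coefficients (standard values).
KR, KG, KB = 0.299, 0.587, 0.114

def rgb_to_ycbcr(r, g, b):
    # Plain multiplications and additions - one pixel in, one pixel out.
    y  = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Exact algebraic inversion of the conversion above.
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

rgb = (180.0, 64.0, 201.0)

# Float round trip: exact (up to float64 epsilon).
roundtrip = ycbcr_to_rgb(*rgb_to_ycbcr(*rgb))

# Quantize the intermediate YCbCr values to integers: a small
# per-pixel rounding error appears - precision loss, not filtering.
y, cb, cr = rgb_to_ycbcr(*rgb)
lossy = ycbcr_to_rgb(round(y), round(cb), round(cr))
```

Run it and you'll see `roundtrip` matches the input essentially perfectly, while `lossy` is off by well under one 8bit step per channel - and neighboring pixels never enter the math.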

Quote:
Originally Posted by MysteryX View Post
SuperRes is doing a GREAT job on 768p laptop display without too much cost on performance.

On 1080p TV, however, the performance hit is WAY too high!! It goes from 14ms rendering to 39ms. Even with basic upscaling algorithm, it won't work with SVP at 60fps.

Is it normal that there is such a big difference of performance hit with SuperRes between 768p and 1080p?
The key factor will be how many upscaling steps there are. What does the debug OSD (Ctrl+J) say about those 2 situations? What is the exact upscaling chain (e.g. "Nnedi32 > Jinc3 AR" or something like that) in either case?

Quote:
Originally Posted by Dogway View Post
every conversion behaves like a lowpass filter
No, it does not. Maybe there's degradation when using AviSynth, I don't know - is that where you drew your conclusions from? AviSynth is limited to 8bit, IIRC. AviSynth tests don't tell you how madVR behaves; madVR has a dramatically higher calculation bitdepth than AviSynth.

E.g. 34 * 0.222 = 7.548 -> rounded to 8bit = 8
8 / 0.222 = 36.04 -> rounded to 8bit = 36

So by using 8bit math, a value of 34 after one color conversion step suddenly becomes 36. That's terrible!!! But now let's check what madVR does:

34 * 0.222 = 7.548 -> rounded to 16bit = 494666 / 0x10000
494666 / 0x10000 / 0.222 = 34.0000187 -> rounded to 8bit = 34

You see the difference? And 16bit integer is just the storage format madVR uses to store data between different processing steps. The actual calculations are done in 32bit floating point, which has even higher precision than 16bit integer.
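You can reproduce that arithmetic yourself with a few lines of Python (again just my own sketch of the principle, not madVR code - the 0x10000 scaling treats the 16bit texture value as fixed point, matching the "494666 / 0x10000" notation above):

```python
value = 34
factor = 0.222

# 8bit intermediate storage: round after the first multiplication.
x8 = round(value * factor)        # 7.548 -> 8
back8 = round(x8 / factor)        # 36.04 -> 36  (34 became 36!)

# 16bit fixed-point intermediate storage: 0x10000 steps of precision.
x16 = round(value * factor * 0x10000)     # -> 494666
back16 = round(x16 / 0x10000 / factor)    # 34.0000187 -> 34  (lossless)
```

Same conversion, same rounding at the end - the only difference is the precision of the intermediate storage, and that alone decides whether the round trip survives.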