17th October 2018, 10:57   #53321
Manni
Registered User
Join Date: Jul 2014
Posts: 942
Quote:
Originally Posted by huhn
That's more than odd. There is nothing that should be able to push your CPU to full load when hardware decoding is used.
Thanks, you were absolutely right, something was wrong when I took my power readings: I didn't notice that TeamViewer was running in the background. That's why the CPU load was abnormally high. Since it didn't impact my rendering times, I didn't think of it.

The GPU clock was maxed though, and the Task Manager performance monitor was correct, by the way (checked against GPU-Z and CPU-Z). It's usually fairly reliable here.

I did more tests without TeamViewer running in the background (!), using Pacific Rim as a worst-case scenario (it's a 16:9 UHD HDR movie), and the readings are normal (I think). I also used a Kill A Watt to measure the actual power draw in each mode (rough average). Idle draw is 93W.

DXVA2 NT: 18ms CPU 12% GPU 50% 230W
DXVA2 CB: 21ms CPU 30% GPU 75% 320W

D3D11 NT: 18ms CPU 15% GPU 55% 270W
D3D11 CB: 21ms CPU 30% GPU 75% 320W

I wish it were possible to use DXVA2 native, as it's clearly the most power-efficient mode, but if I remember correctly there are banding issues with it. Both CB modes use roughly the same amount of power, so unless there is a very strong reason to use DXVA2 CB, it looks like I'm going to use D3D11 native with manual picture shift, as I'll lose black bars detection. I can't really justify 50W just for that convenience. I'll just have to select the lens memory on my iPad for each film; that's not too bad.
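For what it's worth, here's a quick sketch of the arithmetic behind that 50W call, using the Kill A Watt readings above. The viewing hours and price per kWh are placeholder assumptions, not anything I measured:

Code:
# Power over idle for each mode, plus what the D3D11 CB vs NT
# delta costs per year. Wattages are the readings above; the
# hours per day and price per kWh are made-up placeholders.
IDLE_W = 93
readings_w = {
    "DXVA2 NT": 230,
    "DXVA2 CB": 320,
    "D3D11 NT": 270,
    "D3D11 CB": 320,
}

for mode, total_w in readings_w.items():
    print(f"{mode}: {total_w - IDLE_W} W over idle")

delta_w = readings_w["D3D11 CB"] - readings_w["D3D11 NT"]  # 50 W
hours_per_year = 2 * 365       # assumption: ~2h of playback per day
price_per_kwh = 0.20           # assumption: adjust for your tariff
kwh = delta_w / 1000 * hours_per_year
print(f"{delta_w} W for {hours_per_year} h/yr = {kwh:.1f} kWh = {kwh * price_per_kwh:.2f}/yr")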

I have everything enabled (pixel shader with peak brightness measurements, restore highlights, no compromise), NGU High chroma upscaling, plus a 3D LUT and some enhancements, so it's a maxed-out scenario. Still, Asmodian's times with a 1050 Ti seem significantly lower, so I might try disabling some enhancements later to see whether I get similar results.

Thanks again for flagging this. It doesn't make any difference in real use, but my readings were wrong, as you pointed out.

Quote:
Originally Posted by ryrynz
He's saying clock speed is dynamic, and ideally you need to log GPU speed and load for a good minute or so and calculate the average from that. Simpler tasks can show higher render times since the GPU can clock a lot lower.

Madshi, any chance we could get an average rendering statistic in the OSD? Ideally being able to see clock speeds on the OSD would be useful too.

D3D11 is quite a bit more efficient on my 1060.
Thanks for the translation/explanation. I'm aware that clocks are dynamic, and I had checked that; it wasn't the explanation.

I agree that an average rendering stat in the OSD would be great.
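In the meantime, here's a rough sketch of that logging idea done outside madVR with nvidia-smi (NVIDIA only; it assumes nvidia-smi is on the PATH and only reads the first GPU):

Code:
# Polls GPU core clock and load once a second for about a minute,
# then prints the averages. clocks.sm and utilization.gpu are
# standard nvidia-smi query fields.
import subprocess
import time

samples = []
for _ in range(60):
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=clocks.sm,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    first_gpu = out.strip().splitlines()[0]   # first GPU only
    clock_mhz, load_pct = (int(v) for v in first_gpu.split(","))
    samples.append((clock_mhz, load_pct))
    time.sleep(1)

avg_clock = sum(c for c, _ in samples) / len(samples)
avg_load = sum(l for _, l in samples) / len(samples)
print(f"average clock: {avg_clock:.0f} MHz, average load: {avg_load:.0f}%")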

Not sure why your D3D11 mode is more efficient: there is little to no difference here with CB, and DXVA2 is significantly more efficient in native.
__________________
Win11 Pro x64 b23H2
Ryzen 5950X@4.5GHz 32GB@3600 Zotac 3090 24GB 551.33
madVR/LAV/jRiver/MyMovies/CMC
Denon X8500HA>HD Fury VRRoom>TCL 55C805K

Last edited by Manni; 17th October 2018 at 11:07.