Go Back   Doom9's Forum > Video Encoding > High Efficiency Video Coding (HEVC)

Old 21st June 2014, 03:56   #1001  |  Link
nandaku2
Registered User
 
Join Date: Jan 2014
Posts: 45
Quote:
Originally Posted by a5180007 View Post
Hi Tom, I sure will when you say psy-rd is ready for testing. But the source is so grainy that I'm not convinced psy-rd will bring any benefit at such compression. At least in x264 it does not: it removes a lot of detail to put back grain noise. And before testing psy-rd, I wanted to make sure standard SAD RDO was as good as x264's in grain and detail retention.

Are the algorithms for placing I-frames so different in x264 and x265? Why would x265 place one third fewer I-frames with both scenecuts at 40%? This biases the comparison.

EDIT: and it makes single-frame comparison with x264 even more pointless.
EDIT 2: Got it. With --bframes 3 --b-adapt 1, x264 and x265 return the same number of I-frames. The placement and numbers of P- and B-frames are still totally different, though.
The lookahead cost function in x265 is different from x264's: x265 evaluates more intra modes and makes some changes in bidir modes (compared to x264). This makes the slice-type decisions different. You could use --b-adapt 0 if you need all slice-type decisions to be the same across x265 and x264.
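For illustration only, here is a toy sketch of how a lookahead might turn its cost estimates into a scenecut decision. This is not actual x264/x265 code, and all numbers are invented; it only shows why two encoders with different cost functions diverge under the same threshold:

```python
# Toy model of a lookahead scenecut test (illustrative only, NOT x264/x265
# source). An I-frame is placed when inter prediction saves less than
# `scenecut` percent over coding the frame as intra.

def is_scenecut(intra_cost: float, inter_cost: float, scenecut: float = 40.0) -> bool:
    threshold = intra_cost * (1.0 - scenecut / 100.0)
    return inter_cost >= threshold

# A hard cut: inter prediction barely helps, so an I-frame is chosen.
print(is_scenecut(intra_cost=1000, inter_cost=950))   # True
# A well-predicted frame: no I-frame.
print(is_scenecut(intra_cost=1000, inter_cost=200))   # False
```

Under this kind of model, any change to how the intra or inter costs are estimated shifts which frames cross the threshold, so identical --scenecut values need not yield identical I-frame counts.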
Old 23rd June 2014, 08:21   #1002  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
The latest patch is supposed to reduce RAM utilization. It may be interesting to check v1.1+193 against an older build regarding whether 4K video can be encoded even with a 32-bit build.
__

Stable at 1.6 GB with preset "slow", looks promising.
__

Preset "slower" requires about 1.9 GB RAM (Private Bytes: 2,040,092 KB / Working Set: 1,768,604 KB – at frame 30); the patch must have made memory use more dependent on content complexity instead of always assuming a maximum.

I believe there may still be headroom with more effort, but a further reduction will probably be harder to achieve (not without some deep reorganization and allocation "smartness"). For now, even on an almost "obsolete" 32-bit OS, 4K encoding will be possible with preset "slow", maybe even "slower".
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid

Last edited by LigH; 23rd June 2014 at 12:38.
Old 24th June 2014, 02:04   #1003  |  Link
foxyshadis
ангел смерти (angel of death)
 
foxyshadis's Avatar
 
Join Date: Nov 2004
Location: Lost
Posts: 9,558
Quote:
Originally Posted by upyzl View Post
@benwaggoner & @foxyshadis

Well, to be specific:

1) At the same medium bitrate (e.g. x264 crf22), x264 10-bit performs much better than x264 8-bit at preventing banding (especially in dark flat areas, because of gamma compression) when the source has no banding. That is of course a benefit of the high internal bit depth, which also helps against other artifacts. By now, x264 10-bit is optimized well enough that it still achieves better quality than x264 8-bit even at the same encoding time and bitrate.

2) x265 8bpp also uses 8-bit internals; I should use x265 16bpp for a high internal bit depth. x265 works like x264 in this regard.

3) Until now, H.264/AVC 10-bit has had poor compatibility: we cannot use hardware acceleration for 10-bit video, and playing 10-bit video on mobile devices or hardware players like the PS3 (unlike a PC, which can use an x86 CPU for generic software decoding and mostly ignore decode performance and power consumption) is difficult and unfriendly. The video editing field seems to be the same (e.g. Adobe Premiere does not support 10-bit H.264). I'm very worried that the HEVC/H.265 era will be the same...

4) And... for 8-bit input and a high internal bit depth: if 8-bit output is used rather than 10-bit output, should the file be smaller at the same quality? (I'm not an expert on this.)

----
So I'm interested in 8-bit input/output with a high internal bit depth, especially for encoding.
Seeking the lowest bitrate for the same high quality is the eternal topic of video compression, and that's my aim too, but I also care about a degree of compatibility (and encoding time)...
It doesn't matter what you put in or take out; the compatibility revolves entirely around the internal bit depth. Whether the future brings wider 10-bit compatibility is entirely unknown; we just have to hope that since it's included in the base spec, some hardware makers will take advantage of that. So far the major GPU makers (Intel, AMD, nVidia, PowerVR) are barely incorporating support for 8-bit HEVC.

Using 10-bit internal with 8-bit input doesn't seem to have the same advantage over plain 8-bit in x265 as with x264. (And even that is fairly small.) I'm not sure if that's the encoder, or the standard, but we'll have to see how it evolves. Maybe HEVC just doesn't cause as much banding as AVC in general at 8-bit?

With respect to size, it doesn't matter what you output; it's still the same file (unless you're re-encoding), and internally every calculation is done at the internal bit depth until the final output, when it can be left alone or downsampled. Even with 8-bit input, Main 10 with 16-bit output instead of 8-bit dithered might look better simply due to not rounding as early. (No decoder currently produces float output, although they could if they wanted.) I don't know if anyone's really tested that, and you'd need a decent monitor to tell the difference, and right now I don't have one. It's an interesting area to investigate.
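A toy numpy sketch (my own example, not taken from any real decoder) of the "not rounding as early" point: a shallow 10-bit gradient spanning only a few 8-bit steps collapses into visible bands under plain rounding, while a 1-LSB dither at the final step preserves the average level:

```python
import numpy as np

rng = np.random.default_rng(0)

# A shallow 10-bit gradient covering only ~2 eight-bit steps:
grad10 = np.linspace(512, 520, 10000)                       # 10-bit code values

rounded = np.round(grad10 / 4)                              # plain 10->8 bit rounding
dithered = np.floor(grad10 / 4 + rng.random(grad10.size))   # 1 LSB of uniform dither

print(len(np.unique(rounded)))                              # 3 banded steps: 128, 129, 130
print(abs(dithered.mean() - grad10.mean() / 4) < 0.05)      # True: average level preserved
```

The dithered output uses the same three 8-bit values, but mixes them so the mean tracks the original gradient instead of snapping to steps.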

It definitely will help if you're doing any shader processing on the output; MadVR will accept up to 16-bit and won't ever drop down until it outputs to the screen.

Last edited by foxyshadis; 24th June 2014 at 02:09.
Old 24th June 2014, 07:15   #1004  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Impressive results with the command line
Code:
--crf 30 --preset slower --aq-mode 2 --aq-strength 1.5 --psy-rd 0.3
Even the lawn in the background, which used to lose a lot of detail, is now satisfyingly persistent.
Old 24th June 2014, 09:19   #1005  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Just thought of asking: how is psy-rd coming along?

I know it had issues and such a while back; is that still the case, or have you been making progress?
Old 24th June 2014, 09:59   #1006  |  Link
Procrastinating
Registered User
 
Procrastinating's Avatar
 
Join Date: Aug 2013
Posts: 71
They've been making plenty of progress, but as far as I know it's not yet complete to the extent that they want. It's good enough now that it's better than not having it in a number of cases, though.
Old 24th June 2014, 10:01   #1007  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Ah, that's nice to hear.
Has it improved the "blur" that x265 tends to apply to details?
Old 24th June 2014, 10:32   #1008  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Carefully used, together with adaptive quantization, Psy-RDO has the potential to preserve more detail than previous builds did, even though it is not yet completely correct.

They will surely tell us when it's done...

In the meantime, enjoy another 4K encode demonstrating the efficiency (same options as above); x264 used to fail especially in the sky at similar bitrates (noticeably worse behaviour at up to 3 times the bitrate of the x265 sample in an earlier 1080p test).

Last edited by LigH; 24th June 2014 at 10:38.
Old 24th June 2014, 12:52   #1009  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 7,806
Quote:
Originally Posted by LigH View Post
Carefully used, together with adaptive quantization, Psy-RDO has the potential to preserve more detail than previous builds did, even though it is not yet completely correct.

They will surely tell us when it's done...

In the meantime, enjoy another 4K encode demonstrating the efficiency (same options as above); x264 used to fail especially in the sky at similar bitrates (noticeably worse behaviour at up to 3 times the bitrate of the x265 sample in an earlier 1080p test).
May I ask what CPU you have? My Xeon 8c/16t 2.9 GHz has trouble maintaining a smooth frame rate during playback in MPC-HC 1.7.5 (EVR mode). The video chokes at the very beginning and at the very end while playing in loop mode. CPU usage is at ~66%. With madVR enabled it is even worse. Probably my R4850 512MB does not have enough memory for 4K.


Last edited by Atak_Snajpera; 24th June 2014 at 13:02.
Old 24th June 2014, 13:03   #1010  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
My equipment here is way below yours: an AMD Phenom II X4 is too slow to play this 4K video in realtime, and madVR is not an option anyway with a GeForce 9600.

At home I have a Phenom II X6 and a GTS 450 available; I believe those won't be fast enough either.
Old 24th June 2014, 13:20   #1011  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Okay, thanks for the fast update info.

As for the video LigH posted, I can tell you that I have no way of playing it in realtime.

And I have quite a good PC (i5 760 @ 4 GHz); it goes to 100% and playback is nowhere near the original speed.
Old 24th June 2014, 13:40   #1012  |  Link
EncodedMango
Registered User
 
Join Date: Jun 2013
Posts: 65
And I thought it didn't work because I tried it on a laptop.

EDIT: Just to clarify, it's the present HEVC decoding speed/cost that is causing this, right?

Last edited by EncodedMango; 24th June 2014 at 13:51.
Old 24th June 2014, 14:09   #1013  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
These clips are encoded with rather high complexity, and they are meant to be played at 50 fps. I am not surprised that decoding them is too demanding for realtime playback.

Realtime playback of less complex 25 fps 4K video would be possible with current hardware and decoders.
Old 24th June 2014, 14:35   #1014  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
Note that when testing H.265 playback you should most definitely use 64-bit versions of the player and decoder, as they are up to 50-100% faster, especially on 4K content (at least for anything FFmpeg-based, like LAV/MPC-HC/etc.).
A lot of the decoder assembly is not compatible with 32-bit due to its complexity (and because the developers didn't want to spend time making it even more complex by adding 32-bit support).
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 24th June 2014 at 14:40.
Old 24th June 2014, 15:00   #1015  |  Link
fumoffu
Registered User
 
Join Date: May 2013
Posts: 90
64-bit doesn't help much in this case.
Tested on a 4-core i5 @ 4 GHz and 1 GB of video memory: nowhere near smooth playback. MPC-HC nightly 1.7.5.146 was using less than 300 MB of GPU memory and MPC-BE 1.4.2 almost 700 MB (I have 1680x1050 monitors). I wonder whether the number of CPU threads has any effect on the video memory required? It shouldn't, right? Maybe I'll test it later. Also, if you use MPC you can save about 50 MB by changing the number of EVR buffers from the default 5 to 4.

Last edited by fumoffu; 24th June 2014 at 15:04.
Old 24th June 2014, 16:24   #1016  |  Link
x265_Project
Guest
 
Posts: n/a
Quote:
Originally Posted by zerowalker View Post
Just thought of asking: how is psy-rd coming along?

I know it had issues and such a while back; is that still the case, or have you been making progress?
Work continues. Expect more updates this week.
Old 24th June 2014, 21:20   #1017  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by foxyshadis View Post
It doesn't matter what you put in or take out; the compatibility revolves entirely around the internal bit depth. Whether the future brings wider 10-bit compatibility is entirely unknown; we just have to hope that since it's included in the base spec, some hardware makers will take advantage of that. So far the major GPU makers (Intel, AMD, nVidia, PowerVR) are barely incorporating support for 8-bit HEVC.
We are seeing some TV players support internal 10-bit decode, like the latest Samsung UHD TVs. They can play back HEVC up to 2160p60 10-bit. But not H.264 High 10.

Quote:
Using 10-bit internal with 8-bit input doesn't seem to have the same advantage over plain 8-bit in x265 as with x264. (And even that is fairly small.) I'm not sure if that's the encoder, or the standard, but we'll have to see how it evolves. Maybe HEVC just doesn't cause as much banding as AVC in general at 8-bit?
It's by spec; HEVC does 8-bit better than H.264 did, so there's no real reason to encode 8-bit sources in Main 10.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 25th June 2014, 11:47   #1018  |  Link
upyzl
zj262144
 
upyzl's Avatar
 
Join Date: Sep 2010
Posts: 105
Quote:
Originally Posted by foxyshadis View Post
It doesn't matter what you put in or take out; the compatibility revolves entirely around the internal bit depth. Whether the future brings wider 10-bit compatibility is entirely unknown; we just have to hope that since it's included in the base spec, some hardware makers will take advantage of that. So far the major GPU makers (Intel, AMD, nVidia, PowerVR) are barely incorporating support for 8-bit HEVC.
Maybe it's a little too early to talk about hardware HEVC support now...
Quote:
Originally Posted by benwaggoner View Post
We are seeing some TV players support internal 10-bit decode, like the latest Samsung UHD TVs. They can play back HEVC up to 2160p60 10-bit. But not H.264 High 10.
Good to hear.
I hope HEVC (8- and 10-bit) hardware support can reach the level AVC 8-bit enjoys today within 2-3 years.
Quote:
Originally Posted by foxyshadis View Post
Using 10-bit internal with 8-bit input doesn't seem to have the same advantage over plain 8-bit in x265 as with x264. (And even that is fairly small.) I'm not sure if that's the encoder, or the standard, but we'll have to see how it evolves. Maybe HEVC just doesn't cause as much banding as AVC in general at 8-bit?
Quote:
Originally Posted by benwaggoner View Post
It's by spec; HEVC does 8-bit better than H.264 did, so there's no real reason to encode 8-bit sources in Main 10.
Really, I'm not familiar with the HEVC spec; maybe I'll run tests to verify... but I definitely think now is not the proper time to test whether x265 8-bit can handle things as well as x264 10-bit (mainly at middle-to-high bitrates for quite high quality encoding; I know that at low bitrates x265 wins completely). I may test once x265 is ready for that.
Quote:
Originally Posted by foxyshadis View Post
With respect to size, it doesn't matter if what you output, it's still the same file (unless you're re-encoding) and internally every calculation is done at the internal bit-depth until the final output, when it can be left alone or downsampled. Even with 8-bit input, Main 10 with 16-bit output instead of 8-bit dithered might look better simply due to not rounding as early. (No decoder currently produces float output, although they could if they wanted.) I don't know if anyone's really tested that, and you'd need a decent monitor to tell the difference, and right now I don't have one. It's an interesting area to investigate.
Maybe I should just ignore it... even if it really could reduce file size, few people could tell the difference (and of course I have no decent monitor)... I hope somebody can settle that in the future.
Quote:
Originally Posted by foxyshadis View Post
It definitely will help if you're doing any shader processing on the output; MadVR will accept up to 16-bit and won't ever drop down until it outputs to the screen.
Yes, in fact that is exactly what I do.

Lastly, thank you all for the patient replies.
__________________
MPC-HC 1.7.8 / LAV Filters 0.64+ (tMod) / XySubFilter 3.1.0.705 / madVR 0.87.14

Direct264 Mod (src & win32 builds): code.google.com/p/direct264umod (maybe outdated)

Last edited by upyzl; 25th June 2014 at 11:55.
Old 25th June 2014, 19:55   #1019  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by benwaggoner View Post
We are seeing some TV players support internal 10-bit decode, like the latest Samsung UHD TVs. They can play back HEVC up to 2160p60 10-bit. But not H.264 High 10.


It's by spec; HEVC does 8-bit better than H.264 did, so there's no real reason to encode 8-bit sources in Main 10.
New Sony 4K TVs also support 10bit HEVC.
Old 25th June 2014, 21:54   #1020  |  Link
Motenai Yoda
Registered User
 
Motenai Yoda's Avatar
 
Join Date: Jan 2010
Posts: 709
Quote:
Originally Posted by foxyshadis View Post
No decoder currently produces float output, although they could if they wanted.
If the mantissa is 10 bits (16-bit float) or fewer, then I think it would be the same as or worse than 10-bit integer, because from above 511 up to 1023 the exponent should be 0*; maybe it will help a bit for low levels.
*That was wrong; the exponent should be 9+15.
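For what it's worth, the binary16 arithmetic can be checked directly in numpy (my own sketch, not tied to any decoder): half-float has a 10-bit mantissa, i.e. 11 significant bits with the implicit leading 1, so every integer up through 2048 is exact and the whole 0-1023 10-bit code range round-trips; only above 2048 does the spacing widen to 2.

```python
import numpy as np

# Every 10-bit code value survives a float16 round trip:
assert all(float(np.float16(v)) == v for v in range(1024))

# The first integer that does NOT round-trip is just above 2048:
print(float(np.float16(2049)))   # 2048.0 (spacing is 2 in [2048, 4096))
```

So a 16-bit float output would not lose anything for 10-bit integer levels; precision only becomes a question for intermediate values with fractional parts.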

Quote:
Originally Posted by benwaggoner View Post
It's by spec; HEVC does 8-bit better than H.264 did, so there's no real reason to encode 8-bit sources in Main 10.
According to my tests, Main10 gives slightly better results than Main even with 8-bit sources.

Just a question: with --input-depth 16, how will the input be reduced to 10 bits? Truncated? Rounded? Dithered?
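For reference, here is what each of the three candidate policies would mean, sketched in numpy. This is illustrative only and is not x265's actual code path; it just shows the behavioural difference between them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 16-bit samples standing in for a 16-bit input frame:
x16 = rng.integers(0, 65536, 100_000, dtype=np.uint16)

truncated = x16 >> 6                                             # drop the low 6 bits
rounded   = np.minimum((x16.astype(np.uint32) + 32) >> 6, 1023)  # round to nearest
noise     = rng.integers(0, 64, x16.size)                        # 1 LSB of uniform dither
dithered  = np.minimum((x16 + noise) >> 6, 1023)                 # dither, then truncate

# Truncation is biased low by about half a 10-bit step; rounding is not:
print(round(float((x16 / 64 - truncated).mean()), 2))   # ~0.49
print(abs(float((x16 / 64 - rounded).mean())) < 0.02)   # True
```

The practical difference: truncation darkens everything by ~half an LSB, rounding is unbiased but can band on smooth gradients, and dithering trades that banding for a small amount of noise.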
__________________
powered by Google Translator

Last edited by Motenai Yoda; 27th June 2014 at 20:00.