
Go Back   Doom9's Forum > Video Encoding > VP9 and AV1

Old 7th November 2016, 19:49   #101  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by Jamaika View Post
Clare's test of the AV1 codec vs x265 undervalues AV1, because the AOM preset used was 'good' while BPG uses x265's veryslow.
The presets for x265 are:
0. ultrafast
1. superfast
2. veryfast
3. faster
4. fast
5. medium (default)
6. slow
7. slower
8. veryslow
9. placebo
The presets for VPX/AOM are:
0. rt + 37% quality
5. good (default) + 50% quality
9. best + max. 63% quality
I mistakenly conflated this with the VPX 'cq-level' parameter. So for veryslow-equivalent quality you could set VPX to best + max. 63%.

Probably nobody knows what effect the 'min-q' quality value has. With the VPX and AV1 codecs the first frame always carries a large error.

It is also interesting that each VPX/AOM encoder front-end exposes a different quality parameter.
FFmpeg has CRF.
What does the Adobe plugin 1.003 use? You have to enter the 'cq-level' value. Changing the quality value changes the bitrate and the number of P-frames: 37% gives 2 frames, 50% gives 4 frames, 63% gives 6 frames. It is a pity libvpx has no such function. The characteristic frame sizes approach those of x265. Changing 'good' to 'best' does not change the bitrate, only the number of reference vectors per frame.
Edit: The two-pass function is a sham: there are no dynamic B-frames. The Elecard video analyzer does not show that they have been implemented.
Huh ?

Quote:
Image compression

All images are compressed losslessly and over a range of qualities for each codec:

BPG:
lossless: bpgenc -m 8 -f 420 -lossless -o [output] [input(PNG)]
between q=3 and q=45: bpgenc -m 8 -f 420 -q $q -o [output] [input(PNG)]

AV1:
lossless: aomenc --passes=2 --lossless=1 -o [output] [input(Y4M)]
between q=5 and q=63: aomenc --passes=2 --end-usage=q --cq-level=$q -o [output] [input(Y4M)]

Daala:
lossless: encoder_example -v 0 -o [output] [input(Y4M)]
between q=5 and q=85: encoder_example -v $q -o [output] [input(Y4M)]

FLIF:
lossless: flif -Q 100 [input(PNG)] [output]
between q=-329 and q=79, with a step of 12: flif -Q $q [input(PNG)] [output]

JPEG2000:
lossless: kdu_compress -no_info Creversible=yes -slope 0 -o [output] -i [input(PPM)]
between q=38912 and q=45056, with a step of 64: kdu_compress -no_info -slope $q -o [output] -i [input(PPM)]

JPEG XR:
lossless: JxrEncApp -d 1 -q 1 -o [output] -i [input(PPM)]
between q=5 and q=85: JxrEncApp -d 1 -q $q -o [output] -i [input(PPM)]

MozJPEG:
lossless: cjpeg -rgb -quality 100 [input(PNG)] > [output]
between q=5 and q=95: cjpeg -quality $q [input(PNG)] > [output]

WebP:
lossless: cwebp -mt -z 9 -lossless -o [output] [input(PNG)]
between q=5 and q=95: cwebp -mt -q $q -o [output] [input(PNG)]
Indeed, if the default is 'good', that is not the highest preset, whereas BPG uses its slowest. I wonder why Clare decided it this way.
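As a side note, the per-codec quality sweep quoted above is easy to script. A minimal sketch in Python (the helper name and output pattern are my own; it only builds the `aomenc` command lines from the quoted methodology, and actually runs them only if you pass `dry_run=False` with `aomenc` on `PATH`):

```python
import subprocess

def aomenc_sweep(input_y4m, out_pattern, q_values, dry_run=True):
    """Build (and optionally run) one aomenc command per cq-level,
    mirroring the quoted methodology: --passes=2 --end-usage=q --cq-level=$q."""
    cmds = []
    for q in q_values:
        cmd = ["aomenc", "--passes=2", "--end-usage=q",
               f"--cq-level={q}", "-o", out_pattern % q, input_y4m]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # requires aomenc installed
    return cmds

# Sweep q=5..63 as in the comparison above (dry run: just builds the commands):
commands = aomenc_sweep("input.y4m", "out_q%02d.webm", range(5, 64))
```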
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 7th November 2016 at 20:12.
CruNcher is offline   Reply With Quote
Old 13th November 2016, 19:43   #102  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 696
Quote:
Originally Posted by jq963152 View Post
They said that in order to achieve the 50% increase in efficiency over VP9, they are willing to accept a 40% increase in decoding complexity and a 5 to 10 times higher encoding complexity than VP9, see:
You can see the efficiency gain over the VP9 codec. It is certainly an interesting alternative to the HEVC codecs. At low bitrates AV1 shows less pixel rippling and renders grays better. It can be developed further. Meanwhile, they have time to improve the LAV 10-bit decoder, which does not recognize AV1 files. And most important: adding new implementations to improve encoding speed.
https://www.sendspace.com/filegroup/...FiztpTDNDeyMc9

Last edited by Jamaika; 13th November 2016 at 19:46.
Jamaika is offline   Reply With Quote
Old 16th November 2016, 16:22   #103  |  Link
dapperdan
Registered User
 
Join Date: Aug 2009
Posts: 201
Quote:
Originally Posted by jq963152 View Post
The Alliance for Open Media was also there and gave a presentation on AV1, where they said they are aiming for a 50% increase in efficiency over VP9/H.265 with AV1. They also said that AV1 in it's current state already beats VP9 by 25-30% (with not yet released Google internal tools). They said that in order to achieve the 50% increase in efficiency over VP9, they are willing to accept a 40% increase in decoding complexity and a 5 to 10 times higher encoding complexity than VP9, see:

https://youtu.be/thvSyJN1vsA

They also target a release in the first half of 2017 for AV1.

The results on low-bitrate VP9 from Netflix were very interesting; they show the power of automated quality testing. It will be interesting to see what comes of that work, and it is good that they're offering to do the same with AV1 as part of the development process.
dapperdan is offline   Reply With Quote
Old 20th November 2016, 16:17   #104  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,336
A codec is not the place to lobby for your politics in format choices. If someone can't compress the image they have, then they are going to use another codec.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 20th November 2016, 16:28   #105  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,336
Quote:
Originally Posted by jq963152 View Post
Where else would that be then?
If they want to use such formats, codecs exist that can handle all of it. So convince the content providers; it's a political debate, not a technical one, and you can't force it on a technical level.

If there is enough demand, hardware implementations of AV1 will also adopt support for higher chroma formats. Everything is a matter of demand. If content exists (or is at least widely planned to roll out), hardware will come.

Quote:
Originally Posted by jq963152 View Post
If you have a 4:2:0 limited range image, you can still encode it in 4:4:4 full range, so what exactly is your point?
That just wastes space and/or degrades quality. An image should be compressed as close as possible to the raw material one has — and if that's 4:2:0 or 4:2:2, as a lot of content is, then don't artificially upscale chroma just because someone is on a crusade. Cheap upscaling is almost as bad as the downscaling was in the first place.
nevcairiel is offline   Reply With Quote
Old 20th November 2016, 17:07   #106  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,889
Quote:
Originally Posted by jq963152 View Post
3. Drop support for limited range (16-235), i.e. please support full range (0-255) only
RGB -> full-range YCbCr conversion will result in overshoot, so it is no real option.
huhn is offline   Reply With Quote
Old 20th November 2016, 17:43   #107  |  Link
mzso
Registered User
 
Join Date: Oct 2009
Posts: 930
Quote:
Originally Posted by huhn View Post
RGB -> full-range YCbCr conversion will result in overshoot, so it is no real option.
What do you mean?
mzso is offline   Reply With Quote
Old 20th November 2016, 17:56   #108  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,889
You can get values that are too big, which is not possible with limited range.

edit:
Quote:
When performing YCbCr to R′G′B′ conversion, the resulting R′G′B′ values have a nominal range of 16–235, with possible occasional excursions into the 0–15 and 236–255 values. This is due to Y and CbCr occasionally going outside the 16–235 and 16–240 ranges, respectively, due to video processing and noise.
http://www.compression.ru/download/a...space/ch03.pdf

full range YCgCo should be fine https://en.wikipedia.org/wiki/YCgCo
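To illustrate why full-range YCgCo avoids the round-trip problem, here is a sketch of the lifting-based reversible variant (YCgCo-R, as described on the linked Wikipedia page) in Python. Unlike a rounded floating-point YCbCr round trip, it reconstructs integer RGB exactly, at the cost of one extra bit of range on the chroma channels. The function names are my own:

```python
def rgb_to_ycgco_r(r, g, b):
    """Forward lifting transform: exactly invertible on integers.
    Co and Cg need one extra bit of range (e.g. [-255, 255] for 8-bit RGB)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    """Inverse lifting transform: undoes rgb_to_ycgco_r exactly."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Full-range 8-bit values round-trip losslessly, no overshoot to clip:
assert ycgco_r_to_rgb(*rgb_to_ycgco_r(255, 0, 128)) == (255, 0, 128)
```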

Last edited by huhn; 20th November 2016 at 18:01.
huhn is offline   Reply With Quote
Old 20th November 2016, 19:00   #109  |  Link
mzso
Registered User
 
Join Date: Oct 2009
Posts: 930
Quote:
Originally Posted by huhn View Post
You can get values that are too big, which is not possible with limited range.

edit:

http://www.compression.ru/download/a...space/ch03.pdf

full range YCgCo should be fine https://en.wikipedia.org/wiki/YCgCo
So basically the gist of it is that flawed algorithms applied to flawed sources produce inferior results. Those out-of-range values are technically invalid, so I don't see why they'd be relevant.

Last edited by mzso; 20th November 2016 at 19:05.
mzso is offline   Reply With Quote
Old 21st November 2016, 01:46   #110  |  Link
Nintendo Maniac 64
Registered User
 
Join Date: Nov 2009
Location: Northeast Ohio
Posts: 447
Is native YUV support really all that beneficial when using 10bit?
Nintendo Maniac 64 is offline   Reply With Quote
Old 21st November 2016, 08:23   #111  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,889
You need to compress the chroma channels more than luma for proper picture quality; bit depth has nothing to do with that.
huhn is offline   Reply With Quote
Old 21st November 2016, 10:38   #112  |  Link
GTPVHD
Registered User
 
Join Date: Mar 2008
Posts: 175
http://aomedia.org/about-us/

http://www.bbc.co.uk/rd/blog/2016/10...eo-compression

More and more companies join the Alliance for Open Media.
GTPVHD is offline   Reply With Quote
Old 21st November 2016, 11:52   #113  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,336
Quote:
Originally Posted by Nintendo Maniac 64 View Post
Is native YUV support really all that beneficial when using 10bit?
You'll always need to transform RGB into another scheme for efficient encoding, because RGB carries a lot of redundant information. Splitting it into luma + chroma makes encoding much more efficient.
On top of that, it allows you to compress chroma more than luma, which plays into the nature of our eyes. With pure RGB you couldn't do that; the best you could do is compress one color channel more than the others, which is not even close to as efficient.

YCbCr (or YUV, in other terms) is what we have for that. Other approaches have been brought forward, like YCgCo, but they have not been widely adopted, because many existing processing pipelines only know how to work with YCbCr and the advantages of the suggested alternatives were not that great.

If someone can define a groundbreaking new scheme that splits luma and chroma significantly more efficiently, removing even more redundant information while still being able to (visually) losslessly re-create the original image, I'm sure there would eventually be industry interest. But so far, every need we had could be met by modifying YCbCr with different transfer matrices to extend the colorspace.
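To make the luma/chroma decorrelation concrete, here is a toy full-range conversion using the BT.709 luma coefficients (floating point, no quantization; the variable and function names are mine, not from any particular API):

```python
# BT.709 luma coefficients (Kr, Kg, Kb sum to 1.0)
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr709(r, g, b):
    """Full-range float conversion: Y carries brightness, while Cb/Cr carry
    only color differences, centered on zero."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))  # standard scaling keeps Cb in +/- half range
    cr = (r - y) / (2.0 * (1.0 - KR))
    return y, cb, cr

# A neutral gray has zero chroma -- the entire signal lands in luma,
# which is why chroma can be subsampled or quantized more aggressively:
y, cb, cr = rgb_to_ycbcr709(100, 100, 100)
# y ~= 100.0, cb ~= 0.0, cr ~= 0.0 (up to float rounding)
```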
nevcairiel is offline   Reply With Quote
Old 21st November 2016, 13:01   #114  |  Link
mzso
Registered User
 
Join Date: Oct 2009
Posts: 930
Quote:
Originally Posted by GTPVHD View Post
http://aomedia.org/about-us/

http://www.bbc.co.uk/rd/blog/2016/10...eo-compression

More and more companies join the Alliance for Open Media.
The BBC is a good addition. They did research and released papers on what framerate (and "shutter time") requirements are necessary for "perfect" motion representation, so their input on HFR might be really useful.
mzso is offline   Reply With Quote
Old 21st November 2016, 14:27   #115  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Not only that; think about Dirac/VC-2 for open broadcast.

And plenty more are still standing on the doorstep, waiting to get in.

Last edited by CruNcher; 21st November 2016 at 14:53.
CruNcher is offline   Reply With Quote
Old 21st November 2016, 21:25   #116  |  Link
Motenai Yoda
Registered User
 
Join Date: Jan 2010
Posts: 709
Quote:
Originally Posted by mzso View Post
The BBC is a good addition. They did research and released papers on what framerate (and "shutter time") requirements are necessary for "perfect" motion representation, so their input on HFR might be really useful.
NHK did too, finding 120 fps (over 100) with a 1/240 s shutter time to be the right values:
http://informationdisplay.org/IDArch...asNextGen.aspx
__________________
powered by Google Translator

Last edited by Motenai Yoda; 21st November 2016 at 21:33.
Motenai Yoda is offline   Reply With Quote
Old 21st November 2016, 22:05   #117  |  Link
mzso
Registered User
 
Join Date: Oct 2009
Posts: 930
Quote:
Originally Posted by Motenai Yoda View Post
NHK did too, finding 120 fps (over 100) with a 1/240 s shutter time to be the right values:
http://informationdisplay.org/IDArch...asNextGen.aspx
I remember 250-300 fps. (DVB Scene #44, Richard Salmon)

According to "The Application of Sampling Theory to Television Frame Rate Requirements" the hard limit is at ~700fps. But that might not take into account BFI trickery.

I'd be interested to know whether anyone has done research that includes interpolation algorithms. It might be that something like 100 fps interpolated to 300 plus BFI would totally fool human perception and appear completely realistic.
mzso is offline   Reply With Quote
Old 22nd November 2016, 18:59   #118  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Interesting. You would need to look into the state of the art in frame-rate conversion (FRC).

This part is really interesting for HVS tuning and for reducing bandwidth requirements efficiently:

Quote:
There are, however, two forms of un-trackable motion which the brain still does not interpret as smooth, resulting in either the perception of judder, or of multiple imaging.
The eye is unable to track rotating motion, such as the juggling clubs seen in Figure 2, nor can it track multiple motions at the same time.
Thus if in a football match the eye is following the ball, the background behind it may be seen to judder.
But I guess none of this is really new, especially since every one of us has already experienced these effects (above all the background judder when following an object in high motion; in subjective frame-time analysis it always drives me crazy, making me think the latency is too high and something is failing). Still, it could help correct the misconception that motion blur helps everywhere.

We have some really interesting things going on, like Asynchronous Space Warp for VR.

And I think VR is where this research will be improved significantly:

https://www.youtube.com/watch?v=xpQQmu7vquE

Quote:
I can now max out setting and super sample 1.5. I was sensitive to VR sickness and this has 100% removed it even in things like 360 videos that don't have the ability to move your head in all directions properly.
And it has a bigger impact than ever before: latency becomes such a critical issue that it will surely also transform hardware architectures, making them much more efficient in order to avoid nausea.

And surely we also have to rethink efficiency costs and energy consumption.

But when I think about HFR overall, the discussion in certain areas like cinema gives me a big headache, especially with a Hollywood crowd trained on decades of LFR.

And I'm pretty sure there isn't yet a Hollywood HFR production that adheres to scientific grounds and rethinks how to do it right, from the ground up, across the whole production chain; it will take years before that works out at all.

Last edited by CruNcher; 22nd November 2016 at 20:41.
CruNcher is offline   Reply With Quote
Old 26th November 2016, 20:27   #119  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 696
No. x264 and BPG support YCgCo, but BPG is an old codec. VPX/AOM only support:
--yv12 Input file is YV12
--i420 Input file is I420 (default)
--i422 Input file is I422
--i444 Input file is I444
--i440 Input file is I440
Jamaika is offline   Reply With Quote
Old 9th December 2016, 13:13   #120  |  Link
Phanton_13
Registered User
 
Join Date: May 2002
Posts: 95
I found this comparison between a beta of AV1, HEVC, and AVC by the Fraunhofer Heinrich Hertz Institute:

http://iphome.hhi.de/marpe/download/...VC-PCS2016.pdf

The result is somewhat surprising, though not so much considering who did it and AV1's development status. After reading it, I also find it of poor quality, with various possible sources of error, and potentially in conflict with other studies.

Last edited by Phanton_13; 9th December 2016 at 13:16.
Phanton_13 is offline   Reply With Quote