Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
30th May 2019, 11:32 | #1681
Artem S. Tashkinov
Join Date: Dec 2006
Posts: 345
Quote:
AV1 is not a codec for masses. It's a very special codec for content delivery. That's it. And that makes it and its discussion kinda worthless. And VVC is already miles better/faster/more effective than AV1.
30th May 2019, 12:25 | #1683
Registered User
Join Date: Aug 2009
Posts: 201
|
I'm not sure it's fair to say VP9 and AV1 were designed purely for those use cases; it's just that content delivery is one of the easiest niches for a next-gen codec to win, since encoding time can be traded for smaller size.
That lets them use it profitably from day one while expanding further into other niches. It probably affects how much effort goes into multithreading and other features this use case doesn't need. libvpx seems to equal x265 subjectively, objectively, and in encoding time in the recent MSU study (and both are near the head of the pack). The argument now seems to be that it's the rate control that makes it unsuitable for many users despite a good showing in test scenarios. But libvpx having bad rate control is a rather different claim than the VP9 format being 10x slower than it should be and therefore a disaster.
30th May 2019, 12:35 | #1684
Registered User
Join Date: Dec 2002
Posts: 5,565
|
The Doom9 crowd can't exactly live on the potential of a spec. It needs (free/cheap) access to a well-rounded encoder implementation that hits a sweet spot of bitrate distribution/AQ/speed. x264 and x265 meet those demands. libvpx? Not so much.
30th May 2019, 13:41 | #1686
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
|
Quote:
Dream on. An MPEG reference encoder has never won any prize in any of those categories.

It really all comes down to the inherent complexity of newer codecs. As it gets more and more impractical to encode them due to the speed, fewer people and smaller companies are going to use them, and the entire ecosystem shifts over to only the bigger players that have the volume to host huge encoding farms. Even if there were a perfect free AV1 encoder, it would still be slow. There is no going fast without sacrificing quality or compression, at which point you eventually cross into the domain of already established codecs, and you lose your reason to even use the newer codec in the first place. Hence, development is no longer targeting individuals or small companies.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Last edited by nevcairiel; 30th May 2019 at 13:54.
30th May 2019, 14:01 | #1687
Registered User
Join Date: Dec 2002
Posts: 5,565
|
I'm not judging, just saying how it is. People see the advertisement for the shiny new codec and can't wait to profit from "50% less bitrate". They need to lower their expectations. As you say, AV1 today isn't for them and maybe never will be. Of course, I wouldn't mind being proved wrong in the future.
30th May 2019, 22:20 | #1688
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
But high-quality encoding doesn't work with chunks of a few seconds. Encoding longer sequences allows for IDRs at shot changes and more aggressive VBV use. YouTube can have quite a bit of keyframe strobing with difficult content for these reasons, and YouTube quality wouldn't be acceptable for lots of premium content.

There has never been a VP9 encoder that offers sufficient quality OR performance for premium content, and there isn't one for AV1 yet either. I don't think this is because the VP9 bitstream wasn't capable of it; it's just that no one wrote an encoder with good psychovisual tuning, intra-frame parallelism, and other such features.

Encoders are a real chicken-and-egg problem. There need to be enough companies willing to pay for better encoders to create a competitive market, so that companies work hard to make better encoders than each other. That market never emerged for VP9, so libvpx never saw the kind of quality and performance improvement seen from, say, the H.264 or HEVC reference encoders to the best available commercial encoders. There is clearly more interest in AV1 than there ever was in VP9, and more quality innovation already than VP9 has had to date, which is very promising.

Quote:
VP9's niche is user-generated non-DRM social media content. And in practice, H.264 would have offered at least equivalent quality at equal encoding time due to faster and more psychovisually tuned encoders. x264 running at veryslow is going to be about the same speed as a quite low-complexity VP9 encode, especially on high-core-count systems. Quality comparisons for high-volume use are done at quality @ bitrate @ time.
30th May 2019, 22:28 | #1689
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
AV1 was never promised to be more than 20% better, and even that was in mean PSNR, not psychovisually. AV1 encoders haven't demonstrated any advantage over HEVC in subjective quality, even with the reference encoders. The VPx code base started VERY heavily PSNR-tuned, which may be why it does well on that metric today. I don't have any reason to think that is a limitation of the AV1 bitstream versus just the libaom encoder. But the general claim that "AV1 can deliver the same subjective quality at meaningfully lower bitrates than HEVC" has yet to be demonstrated.
30th May 2019, 23:19 | #1690
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
|
Netflix uses it for mobile devices, with the EVE encoder I believe, which they found to be equal to x265 in quality at the time, if not slightly better in some cases (their last blog on that was from December).
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Last edited by nevcairiel; 30th May 2019 at 23:25.
31st May 2019, 02:10 | #1692
Registered User
Join Date: Apr 2004
Posts: 1,315
|
Quote:
Beamr HEVC at 2 Mbps does visually better than AV1 at 3 Mbps. Both VP9 and AV1 are heavily optimized for specific metrics.

Beamr 2 Mbps
AV1 3 Mbps
31st May 2019, 09:55 | #1693
Registered User
Join Date: Jan 2019
Posts: 9
|
Quote:
Here's a quick & dirty test, using an admittedly peculiar content source: Lena_std.tif from lenna.org, RGB24 converted to 8-bit 4:2:0, both downscaled and upscaled two times each (i.e. 256x256, 1024x1024) into 250-frame videos. All encoders are the latest versions available today on Wolfberry's public GDrive. All encoding times are reported by Win10's PowerShell: Measure-Command {start-process process -argumentlist "args" -Wait}, and expressed in milliseconds.

Code:
x264 common settings: --crf 15 --preset veryslow --tune stillimage
405 --threads 1: 03130.76 || 03142.22 || 03119.85 (03130.94)
405 --threads 2: 02101.52 || 02105.54 || 02105.00 (02104.02)
420 --threads 1: 26487.85 || 27508.10 || 26479.85 (26825.27)
420 --threads 2: 16336.86 || 15308.29 || 15327.17 (15657.44)

x265 common settings: --crf 15 --preset veryslow --frame-threads 1 --lookahead-slices 1
505 --no-wpp: 05160.82 || 05161.18 || 05154.55 (05158.85)
505 --wpp:    04131.71 || 04142.67 || 04131.78 (04135.39)
520 --no-wpp: 68123.55 || 68134.36 || 67103.71 (67787.21)
520 --wpp:    36649.91 || 37674.62 || 34628.27 (36317.60)

x265 common settings: --crf 15 --preset placebo --cu-lossless --rd-refine --tskip --qg-size 64 --ref 6 --bframes 16 --me sea --subme 7 --frame-threads 1 --lookahead-slices 1
505 --no-wpp: 063053.81 || 061024.00 || 062028.34 (062035.38)
505 --wpp:    039684.32 || 038664.65 || 038684.41 (039011.13)
520 --no-wpp: 796345.10 || 796345.10 || 796345.10 (796345.10)
520 --wpp:    235701.00 || 235701.00 || 235701.00 (235701.00)

libvpx common settings: --lag-in-frames=25 --passes=2 --end-usage=q --cq-level=20 --good --cpu-used=0 --kf-max-dist=250 --auto-alt-ref=6 --tile-rows=0 --enable-tpl=1 --frame-parallel=0 --ivf
905 --tile-columns=0 --row-mt=0 --threads=1: 05167.92 || 05164.62 || 05167.18 (05166.57)
905 --tile-columns=5 --row-mt=1 --threads=2: 04133.40 || 04143.00 || 04147.51 (04141.44)
920 --tile-columns=0 --row-mt=0 --threads=1: 50871.23 || 49853.74 || 49845.72 (50190.23)
920 --tile-columns=5 --row-mt=1 --threads=2: 31568.60 || 31571.60 || 28518.96 (30553.05)

XAB .. output type
X  .. 4 for x264, 5 for x265, 9 for libvpx
AB .. 05 for 256x256, 20 for 1024x1024

Only one x265 beyondplacebo encode was run at each resolution due to appalling performance. The average values are reported in brackets for each output type and encoding settings.

1. x264 is at least twice as fast as either x265 or libvpx at comparable encoding complexity.
2. x265 parallelizes better than libvpx as encoding complexity increases.
3. x265 veryslow is at best comparable with the slowest libvpx settings, lagging behind as the resolution increases.
4. x265 beyondplacebo is significantly slower than the slowest libvpx settings.

As mentioned above, this is just one possible comparison. Enthusiasts should definitely do their own, and only then report whichever encoder is the speed demon. It's very hard to conceive that chunk-based libvpx (the obvious caveats aside: logistical hassle and quality issues) is ever slower than x265, regardless of the hardware footprint, each at its highest encoding complexity. But instead of testing for themselves (with whatever source they please, at whatever resolution and bit depth, on whatever encoding machine), people prefer to repeat ad absurdum that libvpx is always slower than x265. And it was, indeed, years ago.

Last edited by Asilurr; 31st May 2019 at 10:02.
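The bracketed averages above are easy to double-check: a minimal Python sketch (the helper name is mine; the sample numbers are the three x264 "405 --threads 1" runs from the table):

```python
# Recompute the bracketed per-configuration averages from the timing table.
# All times are in milliseconds, as reported by PowerShell's Measure-Command.

def mean_ms(runs):
    """Arithmetic mean of a list of timing runs, rounded to 2 decimals."""
    return round(sum(runs) / len(runs), 2)

# The three "405 --threads 1" x264 runs reported above:
print(mean_ms([3130.76, 3142.22, 3119.85]))  # 3130.94, matching (03130.94)
```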
31st May 2019, 13:30 | #1694
Registered User
Join Date: Jan 2007
Posts: 729
|
Quote:
Quote:
Quote:
Last edited by mandarinka; 31st May 2019 at 13:44.
31st May 2019, 13:50 | #1695
Registered User
Join Date: Dec 2002
Posts: 5,565
|
Quote:
You could do a fast first pass for scenechange detection and VBV estimation, and send the chunks along with that info.
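As a rough sketch of that idea: suppose the fast first pass emits a list of scenechange frame numbers; chunk boundaries can then be chosen so that every chunk starts at a scene change at least some minimum distance after the previous one. The function name and frame numbers below are hypothetical; a real pipeline would take the scenecut list from an actual first-pass run.

```python
def chunk_starts(scenecuts, total_frames, min_chunk_len):
    """Pick chunk start frames from first-pass scenechange detections.

    Each new chunk begins at the first detected scene change that is at
    least `min_chunk_len` frames after the previous chunk start, so chunk
    boundaries coincide with natural IDR points instead of forcing extra
    keyframes mid-scene.
    """
    starts = [0]
    for f in sorted(scenecuts):
        if 0 < f < total_frames and f - starts[-1] >= min_chunk_len:
            starts.append(f)
    return starts

# Hypothetical first-pass scenecuts for a 1000-frame clip, ~250-frame chunks:
print(chunk_starts([120, 310, 335, 600, 910], 1000, 250))  # [0, 310, 600, 910]
```

Each chunk could then be dispatched to a worker along with the first-pass stats for its frame range, addressing the VBV/keyframe objections raised earlier in the thread.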
31st May 2019, 16:37 | #1696
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
|
Quote:
It's just how it goes. And for UHD Blu-ray discs, they can just throw massive bitrates at the problem to solve any such issues. As said above, it all comes back to the same thing: if you want a codec for a use case that no one else focuses on, then do the work instead of complaining.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
31st May 2019, 19:26 | #1697
Registered User
Join Date: Jan 2007
Posts: 729
|
Quote:
Also, you can't really expect users to jump into programming, so that shouldn't really be thought of as a solution. Even if, under open source principles, you are entitled to do so, it's not exactly easy. Users want to do the using, not switch roles to open source programmers. It's probably easier to just not use such software (and use x264, x265, whatever is better).

Though with this "open formats" movement/fandom/advocacy, there is a curious anomaly: people subscribe to the concept so strongly that they want to use it even if it "is not for them" at all... well, there are lots of weird layers to these debates. I think this super-strong mindshare that bends people's views is another reason why pointing out the technical problems should keep being done, and not be brushed aside by "it's not for you" arguments. Enthusiasts all over the internet seem to think "it" is for them, by the look of it. Maybe sometimes they also think all those winning compression tests and Netflix blogs are for them (while perhaps they also aren't?).

Last edited by mandarinka; 31st May 2019 at 19:33.
31st May 2019, 21:39 | #1698
Registered User
Join Date: May 2005
Location: Swansea, Wales, UK
Posts: 196
|
Quote:
The guy under the name xiph_mont was working on a new audio codec called Ghost; it was abandoned when Daala started up in earnest, and he later moved to AV1 development along with most of those who worked on Daala.
1st June 2019, 07:39 | #1699
Registered User
Join Date: Aug 2009
Posts: 201
|
Xiphmont was employed by Red Hat and Mozilla, both companies with a business model (and a mission) incompatible with codec licensing, and therefore a strong motivation for "me-too" codecs that they can integrate properly.
(Which reminds me, VP9 has been the default codec choice for WebRTC in Firefox and Chrome for a couple of years. I can't quickly find any stats on what kind of usage this gets. Originally the browsers agreed on a compromise of supporting both VP8 and H.264 baseline, and Firefox shipped that via a licensing workaround where Cisco provides the binary blob; it doesn't cost them anything in licence fees because they were already at the annual cap.)
2nd June 2019, 13:56 | #1700
Registered User
Join Date: Mar 2002
Posts: 863
|
What's the difference between deltaq and AQ in aomenc? Does deltaq change the quantizer of whole frames, while AQ changes the quantizers of the blocks within each frame, or am I completely wrong?
Also, what is tpl-model, and what does it do?
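For what it's worth, the distinction the question draws can be mocked up in a few lines. This is purely a conceptual toy under the question's own hypothesis, not aomenc's actual logic; all function names and numbers are made up:

```python
def apply_deltaq(base_qp, delta_q):
    """deltaq-style adjustment: one offset applied to an entire frame."""
    return base_qp + delta_q

def apply_aq(frame_qp, block_activities, strength=1.0):
    """AQ-style adjustment: modulate each block's QP around the frame QP
    using a (hypothetical) per-block activity measure, so flatter blocks
    get a lower QP and busier blocks a higher one."""
    mean_act = sum(block_activities) / len(block_activities)
    return [round(frame_qp + strength * (a - mean_act)) for a in block_activities]

qp = apply_deltaq(32, -4)           # whole frame shifted from QP 32 to 28
print(apply_aq(qp, [2, 6, 10]))     # [24, 28, 32]: per-block variation on top
```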