Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
19th February 2018, 18:30 | #441 | Link |
Registered User
Join Date: Dec 2002
Posts: 5,565
|
Like the error message said: Some formats require you to set -strict -1 for y4m output. (But of course aomenc still needs to support the format you choose.)
ffmpeg -i "input" -strict -1 -f yuv4mpegpipe - | aomenc --passes=1 -o "output.webm" - |
19th February 2018, 18:36 | #442 | Link | |
Registered User
Join Date: Oct 2009
Posts: 930
|
Quote:
So RGB can't be piped? |
|
19th February 2018, 18:56 | #444 | Link | |
Registered User
Join Date: Oct 2009
Posts: 930
|
Quote:
(Out of curiosity, what would I need to pass RGB if the encoder supported it? The original command line failed on ffmpeg's side. PS: there's a yuvj pixel format, what does that mean? I know I need to use this when encoding with libx264 to keep it full range.) Last edited by mzso; 19th February 2018 at 19:02. |
|
19th February 2018, 19:01 | #445 | Link |
Registered User
Join Date: Dec 2002
Posts: 5,565
|
The formats with j mark full-range content. RGB cannot be passed as y4m. You would have to pass it as raw video (headerless) and specify resolution, fps, colorspace and bit depth manually, if aomenc supported it (x264 does).
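To illustrate the full-range ("j") vs. limited-range distinction, here is a minimal Python sketch (my own illustration, not code from ffmpeg or x264) of how 8-bit luma maps between full range (0-255) and limited/studio range (16-235):

```python
def full_to_limited(y_full):
    """Map full-range luma (0-255) to limited/studio range (16-235)."""
    return round(16 + y_full * 219 / 255)

def limited_to_full(y_limited):
    """Map limited-range luma (16-235) back to full range (0-255)."""
    return round((y_limited - 16) * 255 / 219)

# Full-range black and white map to 16 and 235 in limited range.
print(full_to_limited(0))    # 16
print(full_to_limited(255))  # 235
print(limited_to_full(235))  # 255
```

This is why flagging the range matters: the same sample value 16 means black in limited range but a dark grey in full range.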
|
19th February 2018, 19:05 | #446 | Link |
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
|
As the name (YUV for MPEG) already suggests, it was designed for the YUV color space (luminance Y + chrominance differences U/V) with different chroma subsampling configurations, plus luminance-only (greyscale) formats. Most efficient modern video formats rely on this color space because it matches the human retina's preference for luminance (an order of magnitude more brightness-sensitive rods than color-sensitive cones), a prerequisite for using chroma subsampling to reduce the data rate with hardly any obvious loss of resolution.
Basic command-line encoders still at the test-and-development stage, like aomenc, may not even contain code to convert the generally unsupported RGB color space into a supported YUV color space on their own; they rely on the video source delivering a supported format. |
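As a rough back-of-the-envelope sketch (my own illustration, not tied to any particular encoder) of the data-rate saving from chroma subsampling, here is the raw sample count per frame for the common schemes:

```python
def samples_per_frame(width, height, subsampling):
    """Raw sample count per frame for common YUV subsampling schemes."""
    luma = width * height
    if subsampling == "4:4:4":      # chroma planes at full resolution
        chroma = 2 * luma
    elif subsampling == "4:2:2":    # chroma halved horizontally
        chroma = 2 * (luma // 2)
    elif subsampling == "4:2:0":    # chroma halved in both dimensions
        chroma = 2 * (luma // 4)
    else:
        raise ValueError(subsampling)
    return luma + chroma

full = samples_per_frame(1920, 1080, "4:4:4")
sub = samples_per_frame(1920, 1080, "4:2:0")
print(sub / full)  # 0.5 -- 4:2:0 needs half the samples of 4:4:4
```

So before the codec even starts predicting and transforming anything, 4:2:0 has already discarded half the raw samples, with little visible loss thanks to the eye's lower chroma resolution.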
19th February 2018, 19:49 | #448 | Link | |
Registered User
Join Date: Oct 2009
Posts: 930
|
Quote:
I'm aware of what YUV is, but your assessment of the retina is wrong; YUV doesn't resemble how the retina works. The retina has RGB cones which are sensitive to their respective colors (sort of, as they have overlapping sensitivity curves). And there's no such thing as a separate brightness sensor; that's nonsensical. The rods are alternative sensors for low-light conditions, and (since there's only one kind) they are monochromatic. However, human perception is more sensitive to luminance, which is why storing it separately is advantageous. (The UV part has no relevance to human perception; it's just an artificial way to store color.) |
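For reference, the luma/chroma separation being argued about here is just a linear transform of RGB; a minimal sketch using the BT.601 coefficients (assuming full-range 0..1 inputs and ignoring gamma, purely for illustration):

```python
def rgb_to_ypbpr(r, g, b):
    """BT.601 R'G'B' -> Y'PbPr (inputs assumed in 0..1, gamma ignored).

    The luma weights reflect perceived brightness: green contributes
    most, blue least; Pb/Pr are just scaled color differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    pb = (b - y) / 1.772   # = 0.5 * (b - y) / (1 - 0.114)
    pr = (r - y) / 1.402   # = 0.5 * (r - y) / (1 - 0.299)
    return y, pb, pr

# Greyscale input has zero chroma -- approximately (0.5, 0.0, 0.0)
print(rgb_to_ypbpr(0.5, 0.5, 0.5))
```

Whether or not the weights mirror retinal anatomy, the practical point stands: the chroma difference signals can be subsampled aggressively while luma is kept at full resolution.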
|
19th February 2018, 22:50 | #450 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,752
|
Quote:
The VPx series has always had speed issues, particularly in multicore environments; most VPx encoding was done with split-and-stitch across multiple hosts, which isn't typical for premium content at feature length. A true production-efficient encoder might require a different basic architecture than what the reference encoder uses. Which is why we have bitstream specs, so people can build their own interoperable encoders and decoders. But I'd expect it'd take a couple of years before we have AV1 encoders that would match performance @ quality of today's best HEVC encoders. And that's not even getting into rate control and psychovisual optimization, which can also take several years to get to a reasonable baseline, and keep on getting refined for the usage life of the codec. We are still seeing significant improvements in MPEG-2 encoders 20+ years in. Last edited by benwaggoner; 19th February 2018 at 22:51. Reason: typo fixed |
|
20th February 2018, 00:10 | #451 | Link | |
Registered User
Join Date: Oct 2009
Posts: 930
|
Quote:
|
|
20th February 2018, 00:21 | #452 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,752
|
Quote:
That was true of HEVC, H.264, and MPEG-2. It always takes a couple of years from an essentially complete bitstream design to commercial grade encoders. Sent from my iPhone using Tapatalk |
|
20th February 2018, 19:26 | #453 | Link | |
Registered User
Join Date: Jun 2016
Posts: 55
|
Quote:
I do expect the default encoder and decoder to be pretty decent with good support, however. Opus is doing a decent job with its default tools. |
|
20th February 2018, 19:58 | #454 | Link | ||
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,752
|
Quote:
Quote:
I’m not sure how you’d define “pretty decent” - I would think it won't be before the end of the year that AV1 has solutions that clearly outperform even x264 for high-volume scenarios. But yeah, desktop file-to-file conversion will work, and will produce good-looking video. Speed is far from practical yet, but getting it to the ballpark of x265 speed at x264 quality is a reasonable 2018 goal. Also, Opus is an audio codec: only one dimension! Audio codec optimization is important, but we’re talking 1-2 orders of magnitude less effort to get to a production-grade encoder than with a video codec. And we actually have good perceptually correlated audio quality metrics, which makes automated tuning and testing much more feasible and useful. |
||
20th February 2018, 20:21 | #456 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,752
|
Quote:
DEFINITELY not mostly. Companies that really wanted to contribute to it so they could use the technology mostly joined the reasonable MPEG-LA pool. Sent from my iPhone using Tapatalk |
|
20th February 2018, 23:30 | #457 | Link | |
Registered User
Join Date: Apr 2004
Posts: 1,315
|
Quote:
Both have flaws. Final quality verification tests of absolutely every meaningful audio standard were made by testing ... on humans. Last edited by IgorC; 20th February 2018 at 23:34. |
|
23rd February 2018, 23:43 | #458 | Link |
Registered User
Join Date: Apr 2016
Posts: 61
|
Experimental AV1 encoder in Rust: https://github.com/xiph/rav1e
I'll be adding new data to my comparator with an AV1 snapshot from 20180222, next week or so. I've changed the encoder parameters for more speed, so I need to recalculate the old snapshots to have comparable data. I also added the PIK image format. I also plan to do an actual video comparison based on 30 short clips, VMAF metrics, AV1, x264, x265, and VP9. It's gonna take a long while as I haven't written a single line of code yet and the encoding itself will be long. Maybe the AV1 bitstream will be frozen by then. The latest estimate is: "AOM: Bitstream maybe March, maybe announce at NAB (early April)" (https://www.nabshow.com/ from April 7th to April 12th) Last edited by Clare; 23rd February 2018 at 23:47. |
24th February 2018, 00:33 | #459 | Link |
Registered User
Join Date: Mar 2004
Posts: 1,120
|
I hope you do comparisons at many resolutions, e.g. 360p, 480p, 576p, 720p, 1080p, 1440p, UHD. A lot of the benchmarks shown so far comparing AV1 to x264, x265 and VP9 either just show the overall difference or they show 360p, 720p and 1080p. I am more interested in the improvement at 360p-720p, as x265 doesn't have that much improvement over x264 at those resolutions.
|
24th February 2018, 01:36 | #460 | Link | |
Registered User
Join Date: Apr 2016
Posts: 61
|
Quote:
It's a mix of 360p, 720p and 1080p. I don't have any 1440p or UHD content and I doubt my computer would be able to process it in a reasonable time. But I plan to release the Python scripts I use on Github so it will be usable on any dataset, like I did for images (https://github.com/WyohKnott/image-comparison-sources). Or I need to buy a Threadripper… when I have lots of money and no taxes to pay. |
|