#21 | `Orum | 16th February 2018, 08:03
Quote:
Originally Posted by qyot27
x264 is simply set up to always output 4:2:0 unless the user overrides it (even if the input wasn't 4:2:0). Normally, bit depth wasn't a part of it, because x264 only gained multi-bit support on December 24th, 2017. Despite now having 8-bit and 10-bit in a single build, multi-bit builds still default to 8-bit rather than outputting in the same depth as the input, likely for the same reason you have to override it so it doesn't automatically convert your input to 4:2:0: hardware compatibility. H.264-capable hardware players are almost always restricted to 8-bit 4:2:0.
What does chroma subsampling have to do with bit depth?

Anyway, my understanding of the "multi-bit" builds was that they only added selectable output depth, and didn't change what x264 will accept as input. As long as I've used it, x264 built with lavf or y4m support accepts all the depths that lavf/y4m support, and you can even feed it full 16-bit video if you like. Whether or not that's a good idea depends on other factors, but I usually try to feed it the same depth I'm outputting (usually 10-bit). Theoretically 16-bit would be better if it had internal support for it, but I'm unsure without looking at the code base.

The error that appears when I try to directly encode 10-bit output from avs scripts is vexing, though: "avs [error]: not supported pixel type: YUV420P10". I can only assume this is due to its avs demuxer not handling anything higher than 8-bit, while lavf/y4m handle it just fine.
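
For now I work around it by going through y4m so the avs demuxer is never involved, along these lines (a sketch only: the avs2yuv64 switches are approximate and the CRF value is just a placeholder for your usual settings):
Code:
:: pipe the script out as 10-bit y4m, bypassing x264's avs demuxer
:: (strictly 10-bit x264 build assumed, so no output-depth override is needed)
avs2yuv64 script.avs - | x264 --demuxer y4m --crf 18 -o out.264 -
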

#22 | DJATOM | 16th February 2018, 10:54
10-bit y4m input is actually up-converted to 16-bit internally: http://git.videolan.org/?p=x264.git;...ds/master#l282
In my observations, 16->10->16 produces a lower bitrate (compared to feeding it 16 bits directly) with nearly the same quality.
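
E.g., roughly the comparison I mean (placeholder settings; script10.avs is just the 16-bit script dithered down to 10 bits in AviSynth, and the avs2yuv64 invocation is approximate):
Code:
:: direct 16-bit feed
avs2yuv64 script16.avs - | x264 --demuxer y4m --output-depth 10 --crf 18 -o direct16.264 -
:: 16->10->16: dithered to 10 bits in the script, upconverted back internally by x264
avs2yuv64 script10.avs - | x264 --demuxer y4m --output-depth 10 --crf 18 -o dither10.264 -
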

#23 | sneaker_ger | 16th February 2018, 13:57
Quote:
Originally Posted by DJATOM
10-bit y4m input is actually up-converted to 16-bit internally:
Yes, but the 10->16->10 conversion is supposed to be lossless: every 10-bit value maps to a distinct 16-bit value, so converting back down recovers the original exactly.

#24 | qyot27 | 16th February 2018, 16:41
Quote:
Originally Posted by `Orum
What does chroma subsampling have to do with bit depth?

Anyway, my understanding of the "multi-bit" builds was that they only added selectable output depth, and didn't change what x264 will accept as input. As long as I've used it, x264 built with lavf or y4m support accepts all the depths that lavf/y4m support, and you can even feed it full 16-bit video if you like. Whether or not that's a good idea depends on other factors, but I usually try to feed it the same depth I'm outputting (usually 10-bit). Theoretically 16-bit would be better if it had internal support for it, but I'm unsure without looking at the code base.

The error that appears when I try to directly encode 10-bit output from avs scripts is vexing, though: "avs [error]: not supported pixel type: YUV420P10". I can only assume this is due to its avs demuxer not handling anything higher than 8-bit, while lavf/y4m handle it just fine.
The point was that, just as x264 by default converts 4:4:4 input down to 4:2:0 unless you tell it otherwise, the same thing holds true for bit depth. Even if you have a multi-bit build (which, yes, refers to the bit depth of the output), it will output 8-bit if you don't tell it otherwise.

The avs demuxer was only updated to accept the 16-bit high bit depth pix_fmts from AviSynth+. It'll downsample to 10-bit if you tell it to output 10-bit.
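
In command form, roughly (a sketch: the script name and CRF are placeholders, and this assumes a multi-bit build fed a 16-bit 4:4:4 AviSynth+ script):
Code:
:: the defaults would give 8-bit 4:2:0; override both explicitly
x264 --output-csp i444 --output-depth 10 --crf 18 -o out.264 script16.avs
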

#25 | `Orum | 16th February 2018, 19:32
Quote:
Originally Posted by DJATOM
10-bit y4m input is actually up-converted to 16-bit internally: http://git.videolan.org/?p=x264.git;...ds/master#l282
In my observations, 16->10->16 produces a lower bitrate (compared to feeding it 16 bits directly) with nearly the same quality.
Interesting. However, it's a good example of where theory and practice can diverge: upconverting alone doesn't address problems that can crop up later in the process, such as the lack of dithering.

Quote:
Originally Posted by qyot27
The point was that, just as x264 by default converts 4:4:4 input down to 4:2:0 unless you tell it otherwise, the same thing holds true for bit depth. Even if you have a multi-bit build (which, yes, refers to the bit depth of the output), it will output 8-bit if you don't tell it otherwise.
Good to know, though I'm still using the strictly 10-bit builds so I haven't run into this issue yet. Also the problems I have now with doing 4:4:4 are all on the capture-side of things, but I'll keep that in mind if the problems there ever get fixed.

Quote:
Originally Posted by qyot27
The avs demuxer was only updated to accept the 16-bit high bit depth pix_fmts from AviSynth+. It'll downsample to 10-bit if you tell it to output 10-bit.
Ah, I didn't realize it handled 16-bit but not 10. Assuming DJATOM is correct and handing it 10-bit sources is better than 16, I don't see any harm in doing this (assuming src is a 16-bit clip):
Code:
# dither the 16-bit clip down to 10 significant bits (dither=0 -> ordered dither),
# then pad it back out to 16-bit so x264's avs demuxer will accept it
src.ConvertBits(10, dither=0).ConvertBits(16)
...which would let me avoid the awkwardness of using avs2yuv64. Of course, you could use other methods of hacking off the last 6 bits too, like f3kdb(). Now if only I could do the same for x265, though I'll have to try the patched builds first to see if any support avs directly.
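
With that round trip in the script, the whole thing collapses to a single call, something like this (a sketch; the CRF is a placeholder):
Code:
:: script.avs ends with the ConvertBits round trip above, so the avs demuxer
:: sees 16-bit input and the dithering has already happened in the script
x264 --output-depth 10 --crf 18 -o out.264 script.avs
(With a strictly 10-bit build, the depth override shouldn't be needed at all.)
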

#26 | qyot27 | 17th February 2018, 01:21
Quote:
Originally Posted by `Orum
...which would let me avoid the awkwardness of using avs2yuv64. Of course, you could use other methods of hacking off the last 6 bits too, like f3kdb(). Now if only I could do the same for x265, though I'll have to try the patched builds first to see if any support avs directly.
x265 (with the LAVF patch) acts more or less exactly as x264 did in this case too.

10-bit avs script -> not using the -D option -> 8-bit output
10-bit avs script -> -D 10 -> 10-bit output
and it's detected correctly when you look at the [lavf] info line.
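
In command form, the working case is roughly (LAVF-patched build; the file names are placeholders):
Code:
:: without -D 10, the 10-bit script comes out as 8-bit
x265 -D 10 --input script10.avs --output out.hevc
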

When you give it a 12-bit script, though, things go sideways. The script is incorrectly detected as 8-bit, and --input-depth does nothing to force it to behave (the output is bugged as well). That's going to have to be fixed in the LAVF patch, since it's completely isolated to that patch; libavformat itself is totally fine with 12-bit input from AviSynth+, and will pass it to libx265 correctly as 12-bit when it happens inside FFmpeg.
Tags: avs2yuv, avs2yuv64, x265
