23rd March 2009, 14:30   #7
nm
Registered User
 
Join Date: Mar 2005
Location: Finland
Posts: 2,641
Quote:
Originally Posted by qyot27
The command-line used:
Code:
x264 --crf 18.0 --ref 16 --mixed-refs --no-fast-pskip --bframes 16 --b-adapt 2 --b-pyramid --weightb --direct auto --deblock 1:1 --subme 9 --trellis 2 --partitions all --8x8dct --scenecut 100
--threads auto --thread-input --sar 1:1 --aud --progress --no-dct-decimate --no-psnr --no-ssim --output "output.mp4" "input.i420" 1280x720 --fps 59.94
[...]

I noticed that the responsiveness got worse as the %CPU value dropped. It would start out high, and as encoding progressed, it would drop, and eventually plateau around the 3-6% range.
You are running completely out of memory with that command line. With 16 reference frames, --b-adapt 2, 16 b-frames, and a 720p source, x264 needs over 420 MB of resident memory. On a system with 256 MB of RAM this leads to heavy swapping, and you may run out of swap space as well, in which case the kernel terminates the encoding process. I tried both a 32-bit Windows build and a 64-bit Linux build of x264, and both had approximately the same memory usage.
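
As a rough back-of-envelope sketch (my own figures, not measured from x264 internals): the raw 4:2:0 frame data alone for that many simultaneously held frames already runs to tens of megabytes, and x264 additionally keeps half-pel interpolated planes, lowres lookahead copies and per-macroblock analysis data for each frame, which multiplies that several times over:

Code:
# 1280x720 4:2:0 frame = 1280*720*1.5 bytes; 16 refs + 16 b-frames + the current frame
echo "$(( 1280 * 720 * 3 / 2 * (16 + 16 + 1) / 1024 / 1024 )) MiB of raw frame data alone"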

How other running programs behave in this kind of extreme situation is determined by the processes' CPU and I/O priorities and by how heavily those programs hit the disk and use RAM. I don't see how Windows would work any better when a process runs out of memory and the system starts swapping.
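
If you want other programs to stay responsive while encoding (once swapping itself is out of the picture), those priorities can be lowered explicitly. A minimal Linux example, assuming util-linux's ionice is available; the x264 options here are just placeholders for your real settings:

Code:
# lowest CPU priority (nice 19) and idle I/O class, so foreground programs win
nice -n 19 ionice -c 3 x264 --crf 18.0 --output "output.mp4" "input.i420" 1280x720 --fps 59.94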

To get decent performance out of x264, swapping must be avoided anyway. Using fewer reference frames and b-frames (when using --b-adapt 2) lowers memory consumption significantly (~100 MB with --ref 5 --bframes 3), makes the encode much faster, and doesn't sacrifice any significant amount of quality.
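
For example, the quoted command line with only the reference and b-frame counts reduced and everything else left as it was:

Code:
x264 --crf 18.0 --ref 5 --mixed-refs --no-fast-pskip --bframes 3 --b-adapt 2 --b-pyramid --weightb --direct auto --deblock 1:1 --subme 9 --trellis 2 --partitions all --8x8dct --scenecut 100 --threads auto --thread-input --sar 1:1 --aud --progress --no-dct-decimate --no-psnr --no-ssim --output "output.mp4" "input.i420" 1280x720 --fps 59.94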

Last edited by nm; 23rd March 2009 at 14:33.