Is there any benefit to using CoreAVC with CUDA support to decode the video for RipBot versus ffdshow's ffmpeg-mt?
I was just messing around with it last night: with CoreAVC/CUDA, the Video Engine load (as reported by GPU-Z) was around 10-20% while RipBot was running, x264 was at 98-99% CPU usage, and Avisynth was using around 1-2%.
So does offloading the decoding to the NVIDIA card via CUDA provide any benefit to encoding speed (assuming the decoding would otherwise use roughly that same 10-20%, just on the CPU instead of the GPU; I'm a complete noob and have been looking these things up as I have time)? I did notice that while using RipBot and CoreAVC together, RipBot doesn't update its progress and the window simply shows "Not Responding". I've seen a lot of posts with benchmarks of decoding speed, but not really any measuring it alongside encoding to see whether there's a speed benefit.
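One way I've been thinking about it: if CPU decoding alone is already much faster than the overall encode, offloading it to the GPU probably won't buy much. A rough decode-only check can be done with the ffmpeg command line (my own sketch, not something RipBot does; the synthetic `sample.mkv` clip is just a stand-in for a real Blu-ray source):

```shell
# Generate a short synthetic H.264 clip as a stand-in for a real source
# (assumes an ffmpeg build with libx264; substitute your own file otherwise).
ffmpeg -y -f lavfi -i testsrc2=duration=2:size=640x360:rate=24 -c:v libx264 sample.mkv

# Decode-only benchmark: decode the clip, discard the output (-f null),
# and let -benchmark report the CPU time spent purely on decoding.
ffmpeg -benchmark -i sample.mkv -f null -
```

If the decode-only fps is far above the fps x264 achieves during the full encode, decoding isn't the bottleneck and CUDA decode would mostly just free up that 10-20% of CPU.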
I'm doing an encode right now just to see (Prince of Persia Blu-ray), and when it's done I'm going to redo it with the same settings, only with CoreAVC.
I just figured I'd ask here to see if anyone had thoughts on whether it's worth it, or if the gains (if any) are negligible.