The so-called "GPU decoding" almost never actually runs on the GPU proper. Instead it uses a dedicated hardware H.264 decoder (i.e. a separate piece of silicon) that just happens to be integrated on the same chip as the GPU. This has the important consequence that you do not need to write your own GPU shader/kernel code for H.264 decoding (e.g. via CUDA or OpenCL), since the "programmable" part of the GPU is not even used. Instead, you just use the "hardwired" H.264 decoding routines that are already baked into the silicon. And you can access the hardware H.264 decoder via standard programming interfaces, such as DXVA, CUVID or VDPAU. So it's the DXVA, CUVID or VDPAU SDK that you need to look into for code samples, I suppose...
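As a quick way to see this in practice without touching the SDKs directly: FFmpeg wraps all three of these interfaces, so you can hand decoding off to the hardware decoder from the command line. Below is a small sketch (my own helper, not part of any SDK) that builds the corresponding ffmpeg invocations; it assumes an FFmpeg build with the respective hwaccels compiled in, and the `infile`/`outfile` names are placeholders:

```python
import subprocess

def hw_decode_cmd(api, infile, outfile):
    """Build an ffmpeg command line that decodes H.264 on the
    dedicated hardware decoder instead of in software.

    api: one of 'dxva2' (Windows), 'vdpau' (Linux/NVIDIA),
         or 'cuvid' (NVIDIA's NVDEC interface).
    """
    if api == "cuvid":
        # NVIDIA exposes the hardware decoder as a distinct
        # decoder named 'h264_cuvid', selected with -c:v.
        return ["ffmpeg", "-c:v", "h264_cuvid", "-i", infile, outfile]
    # DXVA2 and VDPAU are selected via the generic -hwaccel
    # option, which must appear *before* the input file.
    return ["ffmpeg", "-hwaccel", api, "-i", infile, outfile]

def hw_decode(api, infile, outfile):
    """Actually run the decode (requires ffmpeg on PATH)."""
    subprocess.run(hw_decode_cmd(api, infile, outfile), check=True)
```

Either way, the H.264 bitstream parsing and reconstruction happens in the fixed-function silicon; ffmpeg (or your own DXVA/CUVID/VDPAU client code) only feeds it compressed data and collects the decoded frames.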
Last edited by LoRd_MuldeR; 7th May 2015 at 21:19.