1st December 2019, 15:30   #4
LoRd_MuldeR
Software Developer
 
Quote:
Originally Posted by stax76
I'm working on an app (StaxRip) that uses a video library (AviSynth/VapourSynth) and GDI for video rendering.

In its current state it's inefficient because it's not multi-threaded and not hardware accelerated; in particular, 4K/UHD is painfully slow.

Before trying to render it with OpenGL or DirectX, I want to try to multi-thread it and see if it's good enough.

Playback is not required, only showing frames when a trackbar/slider is moved.

Currently, it works like so:

1. track bar is moved by user
2. native video library is queried for a frame, can be slow for large videos
3. native frame is converted to a WinForms bitmap
4. Bitmap is rendered with GDI (System.Drawing)

All this happens in the same thread and I believe it would be much faster with multi-threading.

I think my threading knowledge is not good enough for this task. Which .NET threading techniques can I try? Or should I maybe forget about GDI and try OpenGL or DirectX? I don't know much about that either, but I can learn it.
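Just so we are on the same page: as I read your list, everything currently happens as one synchronous chain on the UI thread, roughly like the sketch below. QueryFrame, ConvertToBitmap and the control names are placeholders for your actual code, not a real API.

Code:
// Rough sketch of the current single-threaded flow as I understand it.
// QueryFrame, ConvertToBitmap and the control names are placeholders, not StaxRip's actual code.
using System;
using System.Drawing;
using System.Windows.Forms;

class PreviewForm : Form
{
    private readonly TrackBar trackBar = new TrackBar { Dock = DockStyle.Bottom, Maximum = 1000 };

    public PreviewForm()
    {
        Controls.Add(trackBar);
        // 1. the track bar is moved by the user
        trackBar.Scroll += (sender, e) =>
        {
            // 2. the native video library is queried for a frame (slow for large videos)
            object nativeFrame = QueryFrame(trackBar.Value);
            // 3. the native frame is converted to a WinForms bitmap
            using (Bitmap bitmap = ConvertToBitmap(nativeFrame))
            // 4. the bitmap is rendered with GDI (System.Drawing)
            using (Graphics graphics = CreateGraphics())
            {
                graphics.DrawImage(bitmap, Point.Empty);
            }
        };
    }

    // Placeholders for the AviSynth/VapourSynth query and the frame conversion:
    private object QueryFrame(int index) => new object();
    private Bitmap ConvertToBitmap(object nativeFrame) => new Bitmap(16, 16);
}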
If this were for actual video playback, the whole process could be parallelized (pipelined) as follows:

Have one "input" thread that continuously queries the next frame from the native video library, and puts these frames into a FIFO queue (input queue) as they arrive. Have one "converter" thread, that continuously takes the next frame from the input queue (if one is available), converts it to a WinForms bitmap and puts the bitmap into another FIFO queue (output queue). Finally, have one "presenter" thread that continuously takes the next bitmap from the output queue (if one is available) and then renders it. Of course, each queue needs to have a max size limit. Also, the "presenter" thread must not render the frames at a faster rate than the intended frame rate, even if several frames are "ready" in the output queue.

But: In the specific scenario that you describe, where only a single frame is rendered after each user interaction (the track bar being moved) and we then wait for the next user interaction, I don't see much potential for parallelization! That is because each step in your list requires the previous step to be completed first, and the first step cannot begin until the next user interaction. So there probably is no way to pipeline the process here... If anything, you might use a "background" thread that, after each user interaction, queries several frames in advance from the native video library (and maybe already converts them to Bitmaps too) and puts them into some kind of "look-ahead" cache. Then, hoping that the user will only seek ahead by a few frames, you might be able to present the next frame directly from that cache. But I'm not sure whether this would just duplicate the caching that already exists inside the native video library.
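If you want to experiment with that idea, a rough sketch could look like the following; getFrameBitmap is assumed to wrap your native query plus the bitmap conversion, and none of the names correspond to a real API:

Code:
// Rough sketch of the "look-ahead" idea, not a drop-in implementation.
// getFrameBitmap is assumed to wrap the native frame query plus the bitmap conversion.
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.Threading.Tasks;

class LookAheadCache
{
    private const int LookAhead = 8;   // how many frames to prefetch after each seek
    private readonly ConcurrentDictionary<int, Bitmap> cache = new ConcurrentDictionary<int, Bitmap>();
    private readonly Func<int, Bitmap> getFrameBitmap;

    public LookAheadCache(Func<int, Bitmap> getFrameBitmap)
    {
        this.getFrameBitmap = getFrameBitmap;
    }

    // Called from the track-bar handler: returns the requested frame
    // (from the cache if the "background" task already fetched it)
    // and starts prefetching the following frames.
    public Bitmap GetFrame(int frameIndex)
    {
        Bitmap bitmap = cache.TryRemove(frameIndex, out Bitmap cached)
            ? cached
            : getFrameBitmap(frameIndex);

        Task.Run(() =>
        {
            for (int i = frameIndex + 1; i <= frameIndex + LookAhead; i++)
            {
                cache.GetOrAdd(i, getFrameBitmap);   // hope the user seeks forward by only a few frames
            }
        });

        return bitmap;
    }
}

Note that getFrameBitmap may be called concurrently from the UI (on a cache miss) and from the prefetch task, so this only makes sense if access to the native library is serialized or thread-safe; a real implementation would also have to evict and dispose old bitmaps.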