Yesterday, 06:17   #9641
Z2697

Quote:
Originally Posted by benwaggoner View Post
Yeah, if only it worked! I've never been able to get a single frame to duplicate as documented, even with some pretty extreme settings. I've not tried in a 4.x build, but don't recall any patches for that part.

Maybe it only works in 2-pass or something?
I'm not sure "which" duplication you are referring to, the first case is the encoding side:

What it does is actually removing frames and using SEI to tell decoder to duplicate existing frames, which actually should be called de-duplication I think.

Maybe you are not looking at the right place (i.e. you think it's duplicating frames while it actually deletes frames), but assuming you are not able to get frames to trigger the de-duplication during encoding:

There's PSNR thresholding, can be configured via --dup-threshold parameter.
If you use low enough threshold, eventually some frames will be de-duped, but of course this should only be used in experiments, low threshold will just destroy the video.
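
As a concrete example, such an experiment could be scripted roughly like this (a minimal sketch, assuming an x265 binary in PATH built with the frame duplication feature; the exact option names, --frame-dup in particular, should be verified against x265 --fullhelp on your build):

Code:
# Minimal sketch: encode the same clip twice with encoder-side
# de-duplication enabled, once at the default-ish threshold and once
# at a deliberately low one, then compare how many frames survive.
# Assumes a Y4M source and an x265 build that supports --frame-dup;
# verify option names with `x265 --fullhelp`.
import subprocess

SRC = "clip.y4m"  # hypothetical test clip

def encode(threshold, out):
    subprocess.run([
        "x265", "--input", SRC,
        "--frame-dup",                       # enable encoder-side de-duplication
        "--dup-threshold", str(threshold),   # PSNR threshold; lower = more aggressive
        "--output", out,
    ], check=True)

encode(70, "dup_default.hevc")  # sane threshold
encode(20, "dup_low.hevc")      # experiment only: this will wreck the video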

The second case: the decoding side...

You need a decoder that can recognize and make use of the picture timing SEI (which only tells the decoder to double or triple a frame; no actual timestamp is stored), and then I guess things will... "just work"... yeah, who knows. The most common decoder (avcodec) doesn't support it, so I can't test.
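
For what "double or triple" means in practice: the picture timing SEI carries a pic_struct field, and in the HEVC spec's pic_struct table the values 7 and 8 signal frame doubling and frame tripling. A decoder that honoured it would do something like this (hypothetical sketch, not avcodec code):

Code:
# Hypothetical decoder-side sketch: turn pic_struct from the picture
# timing SEI into a display repetition count. Per the HEVC pic_struct
# table, 7 = frame doubling and 8 = frame tripling; no timestamps are
# carried, the decoder just presents the same picture more than once.
def display_repetitions(pic_struct):
    if pic_struct == 7:
        return 2          # frame doubling
    if pic_struct == 8:
        return 3          # frame tripling
    return 1              # everything else: show once

def present(frames_with_sei, show):
    # frames_with_sei: iterable of (decoded_frame, pic_struct) pairs
    for frame, pic_struct in frames_with_sei:
        for _ in range(display_repetitions(pic_struct)):
            show(frame)   # same decoded picture, repeated at the output rate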


Quote:
Originally Posted by benwaggoner View Post
The tool is reasonably useful with the proper tools; there just aren't any good open source ones currently. The hardest part is the degrain-and-parameterize before encoding.

The rendering has some challenges with repeated patterns, due to the "drunken walk" nature of selecting a random 32x32 block out of a 64x64 block: pixels near the middle are much more heavily sampled than those near the edge. I so wish computer science degrees required some basic statistics! I work with a lot of engineers who are whizzes at linear algebra, but go blank when I ask "is that statistically significant?"
Maybe I just hate noise in general.
But I agree, it (FGS) has potential... it just still has a long way to go.
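
On the sampling point in the quote, a quick toy simulation makes the centre bias easy to see. This assumes the 32x32 window's top-left corner is picked uniformly from the valid positions inside the 64x64 block, which is just my reading of the description above, not the actual FGS algorithm:

Code:
# Toy simulation of the "drunken walk" sampling bias: pick a random
# 32x32 window inside a 64x64 block many times and count how often
# each pixel is covered. Assumes a uniformly random top-left offset,
# which is only my reading of the description quoted above.
import random

SIZE, WIN, TRIALS = 64, 32, 10_000
coverage = [[0] * SIZE for _ in range(SIZE)]

for _ in range(TRIALS):
    ox = random.randint(0, SIZE - WIN)   # offsets 0..32 inclusive
    oy = random.randint(0, SIZE - WIN)
    for y in range(oy, oy + WIN):
        for x in range(ox, ox + WIN):
            coverage[y][x] += 1

print("corner pixel hits :", coverage[0][0])
print("centre pixel hits :", coverage[SIZE // 2][SIZE // 2])
# Typical result: the centre pixel is covered in roughly 94% of trials,
# the corner pixel in well under 1%.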

Last edited by Z2697; Yesterday at 06:23.