27th June 2017, 16:31   #16
johnmeyer
I know the OP wants to use AVISynth, but it is actually a lousy tool for this particular problem. Why? Because you can't set the values simply by interacting with a GUI display of the audio on a timeline. Doing what the OP wants would take me just a few seconds in my NLE (Vegas):

1. Put the video on one track, and the audio on the track below.
2. Sync the audio and video at the beginning of the track.
3. Go to the end of the track, grab the edge of the end of the audio clip, hold the Ctrl key (to force the audio to stretch), and drag the audio until it syncs with the video.
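For what it's worth, the arithmetic the NLE is doing for you in step 3 is just a ratio of durations. Here is a minimal Python sketch of that calculation; the durations are made-up example values, and the tempo convention follows AviSynth's TimeStretch() filter, where values below 100 slow the audio down (lengthen it):

```python
# Compute the stretch factor needed to bring a drifting audio track
# back into sync with the video. Durations are hypothetical examples.

video_duration = 5400.00   # seconds of video (a 90-minute film)
audio_duration = 5394.60   # seconds of the out-of-sync audio track

# The audio must be stretched by this factor to match the video length.
stretch = video_duration / audio_duration

# TimeStretch() takes a tempo as a percentage; tempo < 100 lengthens
# the audio, tempo > 100 shortens it.
tempo_percent = 100.0 * audio_duration / video_duration

print(f"stretch factor: {stretch:.6f}")      # ~1.001001
print(f"TimeStretch tempo: {tempo_percent:.4f}")  # ~99.9000
```

The catch, of course, is that you only know the two durations precisely after you have already found the sync points at both ends, which is exactly the part the GUI makes trivial.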

If you already have another track that is synced to the video (i.e., lousy audio, but synced to the video), you can do this all by looking at the waveforms, and it is almost instantaneous.

To get a really good sync, find a close-up shot of someone talking and locate a syllable with a "p", "m", or other sound where the lips come together. Sync on those points at both the beginning and the end of the clip, following my 1-2-3 instructions above.

I have no idea how you are going to do that by plugging numbers into an AVISynth script, rendering out a result, and then checking it. It is going to take hours instead of seconds.

So, as I've said in other threads on this same subject, AVISynth is not the right tool for this project.

Last edited by johnmeyer; 27th June 2017 at 16:32. Reason: clarity