Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
17th May 2020, 05:05 | #1361 | Link |
Registered User
Join Date: Apr 2020
Posts: 11
|
hmmmmmmm
I had an earlier post that referenced the problems I was having with Minusthebear's script, but I think it went to a moderator. It was automatically setting the FPS to 30 no matter what I changed. Hopefully that post will show soon.
|
19th May 2020, 01:12 | #1362 | Link |
Registered User
Join Date: Apr 2020
Posts: 11
|
found the 30FPS
I found where the 30 FPS was hard-wired: at the very bottom of the script.

interpolated= SVSmoothFps(autolevels, super, vectors, "{rate:{abs:true, num:30, den:1}}", mt=threads).sharpen(last_sharp,mmx=false).sharpen(last_sharp,mmx=false).blur(last_blur,mmx=true)

I changed the num value to 16 and it was better, but something else is still affecting how many frames I input; it must be part of this line, and I'm still trying to figure it out. I'm also still not sure about the super clip in my previous post. I was hoping this would run faster by using the GPU for some tasks. |
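A minimal sketch of one way to avoid hard-coding the rate: build the SVSmoothFps() rate string from a variable. The names autolevels, super, vectors and threads are taken from the quoted script and assumed to be defined earlier; this is only an illustration, not the script author's method.

```
# Hypothetical: expose the target frame rate as a variable instead of
# hard-coding num:30 inside the SVSmoothFps() parameter string.
target_fps = 16
rate_str   = "{rate:{abs:true, num:" + String(target_fps) + ", den:1}}"
interpolated = SVSmoothFps(autolevels, super, vectors, rate_str, mt=threads)
```

Changing target_fps in one place then changes the whole chain, which makes experiments like the 30-to-16 change above less error-prone.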
2nd June 2020, 04:54 | #1363 | Link | |
Registered User
Join Date: Sep 2005
Posts: 178
|
Quote:
Is there another option for UnsharpMask() with 64-bit and HBD support? I'm guessing the artifacts are due to the blur filter, so maybe just replacing that would be enough.

Edit: Looks like you can just swap out the original blur with aBlur from aWarpSharp2. Here's the modified script, which doesn't have the artifacts of the original, but also will no longer work with RGB: Code:
function UnsharpMask(clip clip, int "strength", int "passes", bool "highq", int "threshold")
{
    strength  = default(strength, 64)
    passes    = default(passes, 3)
    highq     = default(highq, true)
    threshold = default(threshold, 8)
    blurclip  = aBlur(clip, passes, highq ? 1 : 0)
    e = "x y - abs " + String(threshold) + " scaleb > x y - " + String(strength/128.0) + " * x + x ?"
    Expr(clip, blurclip, e, "", "") # U and V are copied
}

Last edited by `Orum; 2nd June 2020 at 05:37. |
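A hedged usage sketch for the modified function above. The plugin and file names here are assumptions; aBlur() comes from the aWarpSharp2 plugin, Expr() requires AviSynth+, and the source must be YUV since this version no longer works with RGB.

```
# Hypothetical usage of the aBlur-based UnsharpMask() (YUV input required):
LoadPlugin("aWarpSharp2.dll")   # assumed DLL name; provides aBlur()
AviSource("film.avi")           # assumed source file
ConvertToYV12()                 # ensure a YUV clip for the aBlur version
UnsharpMask(strength=64, passes=3, highq=true, threshold=8)
```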
|
4th July 2020, 11:55 | #1364 | Link |
Registered User
Join Date: Jan 2012
Posts: 2
|
Virtualdub automation
I use Avisynth to restore 8mm films scanned frame by frame, with the classic steps: creating the film from JPEG images, deshaking, cleaning, denoising, sharpening and color adjustment.
To automate the process I created a set of batch scripts. The processing is carried out step by step with an intermediate lossless codec, for example UT Video, so the first steps do not need to be repeated if you change, say, the color adjustment script. Obviously you have to have a lot of disk space!

Each step processes the result of the previous step, for example with these scripts to process film 01:

do join images 01
do concat join 01
do deshake concat 01
do clean deshake 01
do adjust clean 01
do adjust_gammac clean 01
do compare adjust adjust_gammac 01
do render264 adjust 01

Notes:
- Avisynth and all plugins are 64-bit MT
- For the deshake, I preferred the VirtualDub Deshaker plugin
- You can change the Avisynth script for a step and use your favorite plugins, or add other steps.

You can find these scripts in my github project https://github.com/dgalland/Restore |
14th September 2020, 05:48 | #1365 | Link |
Registered User
Join Date: Jul 2011
Posts: 8
|
Separating parts of JohnMeyer's 2017 script
Hi, I have tried JohnMeyer's 2017 script, which can be found at https://forum.doom9.org/showthread.php?t=144271&page=55 about two-thirds of the way down.
This worked very well with my 8mm film digitized by a Wolverine. It was the first time I had seen sharpening work in my setup without any drawbacks, and the colour correction was very good. I want to use it a bit differently though: I don't want any change to the resolution, nor any frame-rate change. So what I intend to do is crop and stabilise with VirtualDub & Deshaker (with it filling in the black gaps created by stabilisation), then use all the other parts of JohnMeyer's scripts. I've been able to separate the autolevels and gamma sections, but separating the denoise and sharpening seems a big challenge, as they seem to interact with the stabilisation. I'm hoping someone has already done this or something like it; any ideas? I'm thinking of looking at dgalland's scripts (above), but I'm trying to get closer to a solution rather than keep opening up new options. |
14th September 2020, 16:45 | #1366 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
Quote:
I've cut out sections many times, and usually all you have to do is set that section's output variable equal to its input variable. This eliminates the section. So, for instance, if you want to remove the stabilization section because you've already done that in Deshaker, take the input variable, "cropped_source", and assign it to the output variable, "stab". That is, you delete the stabilization section and replace it with this:

stab = cropped_source

Last edited by johnmeyer; 20th September 2020 at 07:32. Reason: changed input variable to cropped_source |
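As a concrete sketch of the substitution described above (only the variable names cropped_source and stab come from the script; the commented section contents are abbreviated):

```
# Before: the stabilization section turns cropped_source into stab, e.g.
#   mdata = DePanEstimate(stab_reference, ...)
#   stab  = DePanStabilize(cropped_source, data=mdata, ...)

# After: the whole section is deleted and replaced by a single alias,
# so everything downstream still finds the "stab" variable it expects.
stab = cropped_source
```

The same pattern works for any other section of the script: autolevels, gamma, denoise, and so on, as long as you alias that section's output name to its input name.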
|
15th November 2020, 10:15 | #1367 | Link | |
Registered User
Join Date: Nov 2018
Posts: 14
|
Quote:
I'm waiting for this, thank you |
|
2nd December 2020, 18:02 | #1368 | Link |
Registered User
Join Date: Jul 2004
Posts: 98
|
I'm trying to piece together a stabilize-only script, and I'm using the one in the original post here.
However, I can't seem to get it to add black bars to the final output, and looking at the script it doesn't look like the relevant parameters are even used. Does anyone have a stabilize-only script they can post? I've searched through this thread but can't seem to find specifically what I'm looking for. Thanks |
2nd December 2020, 23:00 | #1369 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
In my version of the script, the stabilization is done with these three lines:
Code:
stab_reference = cropped_source.crop(est_left,est_top,-est_right,-est_bottom).tweak(cont=est_cont).MT_binarize(threshold=80).greyscale().invert()
mdata = DePanEstimate(stab_reference,trust=1.0,dxmax=maxstabH,dymax=maxstabV)
stab = DePanStabilize(cropped_source,data=mdata,cutoff=0.5,dxmax=maxstabH,dymax=maxstabV,method=1,mirror=15)

These are my default stabilizing parameters:

Code:
#STABILISING PARAMETERS
#----------------------------------------------------------------------------------------------------------------------------
maxstabH=10      #maximum values for the stabiliser (in pixels); 20 is a good start value
maxstabV=10
est_left=40
est_top=40
est_right=40
est_bottom=40    #crop and contrast values for special Estimate clip
est_cont=1.1     #Too large a value defeats stabilization |
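Put together, the three stabilization lines and the parameter block above can form a stand-alone stabilize-only script. This is only a sketch under assumptions: the plugin DLL names and the source line are mine, not from the post (DePanEstimate/DePanStabilize come from the DePan plugins, MT_binarize from MaskTools).

```
# Hypothetical stand-alone stabilize-only script assembled from the post above.
LoadPlugin("DePanEstimate.dll")   # DePanEstimate()
LoadPlugin("DePan.dll")           # DePanStabilize()
LoadPlugin("masktools2.dll")      # MT_binarize()

AviSource("scan.avi")             # assumed source file
cropped_source = last

maxstabH = 10    # maximum stabiliser correction (pixels)
maxstabV = 10
est_left = 40    # crop values for the special estimate clip
est_top = 40
est_right = 40
est_bottom = 40
est_cont = 1.1   # contrast for the estimate clip; too large defeats stabilization

stab_reference = cropped_source.crop(est_left, est_top, -est_right, -est_bottom) \
    .tweak(cont=est_cont).MT_binarize(threshold=80).greyscale().invert()
mdata = DePanEstimate(stab_reference, trust=1.0, dxmax=maxstabH, dymax=maxstabV)
stab = DePanStabilize(cropped_source, data=mdata, cutoff=0.5, dxmax=maxstabH, \
    dymax=maxstabV, method=1, mirror=15)
return stab
```

Note that mirror=15 fills the edges by mirroring rather than leaving black bars; setting mirror=0 should leave the borders unfilled instead.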
3rd December 2020, 03:05 | #1370 | Link |
Registered User
Join Date: Jul 2004
Posts: 98
|
Thanks John
Been playing around with it and it gives decent results, although I've noticed what almost looks like dropped frames in some cases, and not even in cases where there is a lot of camera movement. Is this normal? |
3rd December 2020, 04:40 | #1371 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
I've done a LOT of work with various motion stabilization packages (I wrote a ten-page guide to Deshaker fifteen years ago which used to be all over the video sites, but has become harder to find). I've used Deshaker a lot; I own Mercalli; I obviously use DepanStabilize; I got started using Motionperfect; and I've trialed Twixtor.
None of them drop frames, nor do they ever give the appearance of frames being dropped. You would need to post your entire script in order to figure out what is going on. In addition, it is possible that you did some incorrect decimation further upstream, before putting the video into the stabilization script.

If you put the shaky video into your NLE and then put the stabilized version below it, so you can A/B between them, are they the same length? And when you find a frame where you think a skip has happened, does the non-stabilized version show a frame that didn't make it to your final version? |
3rd December 2020, 12:09 | #1372 | Link | |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
Quote:
copy of the translation back to english, you have it somewhere [and will be on-site somewhere]. EDIT: Here tis:- https://forum.doom9.org/showthread.p...ch#post1854682 Yeh, perhaps [ maybe you get a Sticky - then easy for you to find it ]
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? Last edited by StainlessS; 3rd December 2020 at 12:27. |
|
3rd December 2020, 14:04 | #1373 | Link | |
Registered User
Join Date: Dec 2004
Location: Terneuzen, Zeeland, the Netherlands, Europe, Earth, Milky Way,Universe
Posts: 689
|
Quote:
https://forum.doom9.org/showthread.php?t=175669 Fred.
__________________
About 8mm film: http://www.super-8.be Film Transfer Tutorial and example clips: https://www.youtube.com/watch?v=W4QBsWXKuV8 More Example clips: http://www.vimeo.com/user678523/videos/sort:newest |
|
3rd December 2020, 15:51 | #1374 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
If frubsen wants to use Deshaker, here is a link to my old Deshaker guide:
John Meyer Guide to using Deshaker Last edited by johnmeyer; 3rd December 2020 at 17:08. |
24th March 2021, 09:30 | #1376 | Link |
Registered User
Join Date: Mar 2021
Posts: 3
|
Looking for latest plugins
Hi All,
I have been reading some of this thread and re-running some of the scripts. First I started with Fred's package "Film_Restoring_vs_06_2012.zip", and as certain plugins did not work, I tried the "Restore" scripts from D Galland (last post a few messages up in this thread). There too I encountered issues with plugins. Basic question: how can I make sure to have the right plugins available on my system? Any help getting an 8mm restore flow up and running is much appreciated. Thanks in advance |
30th April 2021, 21:16 | #1377 | Link |
Registered User
Join Date: Apr 2021
Posts: 127
|
LED driven Pseudo-synced transfer
Hello Users, Masters,
This post is long, I apologize. Jump to "THE IDEA" to get it brief. I hope this is the good place. My goal (and I can't wait), is to admire the magic of the Fred's/John's scripts on my images. But first, I have something to solve. THANKS I'm new to this forum and here specially to learn and use Avisynth, Fred's and/or John's scripts. The work they have done creatind, documenting, explaining and helping users is beautyfull. As well is beautyfull the work of all of those who wrote or adapted plugins and tools. The atmosphere on this thread is also very collaborative and constructive. All along il, more than 10 years, no disputes, gentleman arrangements and elegant solutions to disagreements are found. This is rare. Now, it's my turn to share, if this system appears to be efficient and good. HISTORY I spent/enjoyed hours (days, actually) of posts and discussions reading to understand the logic of the process. My brain is producing smoke actually and I am still very confused though having some programming skills. I am used to Linux and havn't used windows since WinXP. So, there is work until I can uses these so powerfull scripts and programs. At the moment, with AVS, I am able to read a video file and apply some basic filters. Installations and use achieved actually easyer on Linux than windows, for me. But I believe all filters/plugins are not (yet?) there for these scripts to be used under Linux. I began filming the image on a screen, dealing with the flikcering as I could with the analogic potentiometer of the unmodified projector. This gave me bad results with mixed frames. Reason for the modifications I'll submit here to happen. MY CONSTRAINTS If I achieve to build a good quality and workflow, I'd like to hire the service of doing 8/S8mm transfers. I need not only very good quality transfers but also a not too slow setup. 
This is why I have choosen to use a projector instead of a frame by frame system, beside of the accessibility of the system, which is easyer than having to buy a machine vision camera and using it through software (Windows), syncing the whole thing to the projector. THE IDEA As I understood, a first mandatory step to achieve, prior to use the scripts, is to have a frame accurate capture of the footage. The main idea is to use a LED signal on the side of the filmgate as mark for software pseudo-synchronisation with the projector. The LED is turned on while the projector changes from one frame to the next one. (the LED is captured on the side of the framegate) This was inspired from JohnMeyer's system where the software detects the framechange to be able to keep only one video frame for each film frame. I didn't feel confortable with the idea of relying on software to determine which image to throw away. (And was upmost too lazy, as I feared having to tweak a lot to find the right parameters. "will it work on a totally different roll or will I have to re-set all the params?" and so on. Intuitively, this solution also seems to me faster to calculate than calculating if image are mixes. Especially while pans for example. The sound idea to sync is also good but I know nothing about sound electronics... THE SETUP EDIT:Detailed setup + pictures here At the moment, I am shooting with a borrowed Sony FS7 and a Canon MP-e 65mm macro. The projector has been modified to receive a microcontroller which is driving the LED and the PWM for the AC motor speed control. I removed all blades except approx 15% width of the one passing behind the gate while the frame is changing. This rest of blade triggers the microcontroller via an optical endstop. This allows to control the speed +/- precisely and permits to program the moment when the LED has to be lit. 
MY GRALE So, as first step, I need to find out how to tell avisynth this: "Each time You see the blue light, throw the frame and keep only one of the next ones, until You see the blue light again!" I can say it in english and some other language but not yet in the way we speek in AVSland. MY QUESTIONS Do You think the way I took is good? Have You got a hint, a starting point where I could begin to search for the adapted plugin? (If You really insist, I could tolerate some lines of code fulfilling the goal) By advance thank you for having read till here. If someone is interrested, don't hesitate to ask details/pictures/code/videos/shopping list. While this post is on the forum, I'll prepare a clip so you can see how the capture looks like. Best Regards Last edited by chmars; 16th May 2021 at 22:17. Reason: Add link to setup description |
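A hedged sketch of a possible first pass in Avisynth for the LED question above. Everything here is an assumption (file names, the crop window over the LED, the use of planar RGB); AverageB() is an AviSynth+ runtime function, and the per-frame log would still need a second step (an external script, or ConditionalReader) to decide which frames to keep.

```
# Hypothetical pass 1: log, for each captured frame, how bright the blue
# LED beside the film gate is, by measuring the blue-channel average of a
# small crop window placed over the LED.
src = AviSource("capture.avi").ConvertToPlanarRGB()  # assumed capture file
led = src.Crop(0, 200, 32, 32)   # assumed LED window; adjust position/size

# Writes "<frame number> <blue average>" for every frame to led_log.txt.
WriteFile(src, "led_log.txt", "current_frame", """ " " """, "AverageB(led)")
```

Frames whose blue average exceeds a chosen threshold mark a frame change; a small post-processing step can then emit a keep-list with exactly one video frame per film frame.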
30th April 2021, 22:42 | #1378 | Link |
Registered User
Join Date: Apr 2021
Posts: 127
|
Here are some seconds of a capture made with the LED indicating the frame change:
https://nmldqjct.preview.infomaniak....mzqSR/download

There are still other points to solve:
- I am not satisfied with the encoding coming out of the camera, but the only alternative I have found so far is XAVC, which is very heavy.
- The focus seemed OK during capture but, probably due to compression, it does not look very sharp here.
- The image is kind of dancing/wandering. I have to find out why; I presume the camera is slightly vibrating, and the diagonal effect might be due to rolling shutter.

Last edited by chmars; 22nd May 2021 at 14:41. Reason: move file to durable server, update link |
30th April 2021, 23:36 | #1380 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
I developed a 16mm capture system that uses a projector from which the shutter has been removed, along with software which removes the pulldown frames. Since you have an LED to indicate pulldown, you should have an easier time developing the frame-removal software.
I used a 30 fps interlaced camera, which meant that I had to deal with fields. A better approach would be to use a 60 fps progressive camera. Your projector needs to be running at 24 fps or less; any faster and you cannot guarantee that you will get a pristine frame of video for each frame of film.

I strongly suggest that you set the shutter speed on your camera to something faster than 1/50 or 1/60 of a second. This will avoid any fuzziness if the camera captures the film just as it is coming to rest in the projector gate. I use 1/1000 of a second, but with your system 1/250 will probably be just fine. Make sure to turn off auto-focus. Also set the white balance to something fixed; I balance mine off the bare bulb, with no film in the gate.

Some bouncing of the film is normal. This is called gate weave, and even expensive projectors used in movie theaters (if there are any film projectors left) caused the film to bounce around. You would always see this on the title of the movie, if the title was static: it would bounce around. |
|
|