5th September 2021, 10:52 | #41 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
Also, did anyone manage to install this in a portable Vapoursynth environment on Windows?

Calling:
Code:
python -m pip install --upgrade vsbasicvsrpp
fails with:
Code:
ERROR: Could not find a version that satisfies the requirement vapoursynth==54 (from versions: 39, 40, 41, 42, 43, 44, 45, 46, 47, 47.1, 47.2, 48, 49, 50, 51)
ERROR: No matching distribution found for vapoursynth==54
Once that requirement is satisfied, calling:
Code:
python -m pip install --upgrade vsbasicvsrpp
fails with:
Code:
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
After setting:
Code:
set CUDA_HOME=I:/Hybrid/64bit/Vapoursynth/Lib/site-packages/torch/cuda
and then:
Code:
python -m pip install --upgrade vsbasicvsrpp
it warns:
Code:
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] Das System kann die angegebene Datei nicht finden
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
(The German message translates to "The system cannot find the file specified".)
Trying a prebuilt mmcv-full wheel instead:
Code:
python -m pip install mmcv-full==1.3.12 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.htm

Cu Selur

Last edited by Selur; 5th September 2021 at 11:05.
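For the first error (pip only finds VapourSynth wheels up to 51, while vsbasicvsrpp requires vapoursynth==54), a quick sanity check is to ask the portable environment's Python which VapourSynth core it actually loads. This is just an illustrative diagnostic snippet, not something from the original post:
Code:
# Illustrative diagnostic (not from the original post): print the VapourSynth
# core version the portable environment's python.exe loads; the pip requirement
# "vapoursynth==54" can only resolve once the portable setup ships R54 and its
# matching Python wheel.
import vapoursynth as vs

core = vs.core
print(core.version())          # full version banner of the loaded core
print(core.version_number())   # plain core number, e.g. 53 - anything below 54 explains the error above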
5th September 2021, 14:37 | #42 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,423
|
Models 3-5 are from the NTIRE 2021 "Quality enhancement of heavily compressed videos" challenge, which used HEVC-compressed videos with fixed QP and low-bitrate encodings - so those pre-trained models should factor in some compression degradation (at least of the HEVC type, not necessarily MPEG-2 or AVC). It's nice to see some other types of degradation training and models, but 3 and 5 tend to be very smooth (i.e. no detail); 4 has more detail but more artifacts. Models 3-5 don't upscale.

I haven't done enough testing to see whether using a much larger interval size helps or hinders in general. It appears a very small interval size is worse; larger sizes take more memory and are slower.
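To make the model / interval discussion easier to follow, here is a minimal usage sketch for HolyWu's vs-basicvsrpp. It is my own illustration: the BasicVSRPP() name and the exact parameter names (model, interval) are assumptions based on how the plugin is discussed here and on HolyWu's other ports, so check the repository's README for the real signature:
Code:
# Minimal sketch (interface assumed, not verified against the plugin docs):
# run BasicVSR++ on an RGBS clip and pick one of the pre-trained models.
import vapoursynth as vs
from vsbasicvsrpp import BasicVSRPP   # assumed import name

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")                          # any source filter will do
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")   # the models expect RGB input

# model 3-5 = NTIRE 2021 compressed-video models (no upscaling);
# interval = how many frames are propagated together: very small values hurt
# quality, larger ones cost VRAM and speed (see the comments above).
clip = BasicVSRPP(clip, model=3, interval=30)

clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()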
5th September 2021, 16:25 | #45 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,423
|
HolyWu added an update a few hours ago and made the install "easier" on Windows. Maybe try this new one:
https://github.com/HolyWu/vs-basicvsrpp
5th November 2021, 20:42 | #47 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
Has anyone tried https://github.com/HolyWu/vs-swinir ? (didn't want to create a new thread)
-> Man, this is too slow on my machine to be useful for normal usage on my GPU (GeForce GTX 1070 Ti).

Last edited by Selur; 5th November 2021 at 22:21.
6th November 2021, 08:11 | #48 | Link | ||
Registered User
Join Date: Aug 2002
Location: Italy
Posts: 309
|
Quote:
https://github.com/HolyWu/vs-swinir

EDIT: Out of curiosity, do you think the new Apple chips (M1 Pro / Max) could speed up operations?

Last edited by PatchWorKs; 6th November 2021 at 08:50.
6th November 2021, 13:34 | #49 | Link | ||
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
That would need:
a. PyTorch support
b. rewriting of the existing plugins
-> no
7th November 2021, 19:54 | #50 | Link |
Registered User
Join Date: Oct 2001
Posts: 454
|
Thanx for testing. How much VRAM did it use in your case? Is it really that GPU-demanding, or might the slowdown be caused by not enough VRAM? If you have something an unskilled person like me could use and test, I could throw it at a 12GB VRAM card and see what happens...
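One knob that directly trades speed against VRAM in these ports is tiling: the frame is processed in tile_x x tile_y crops (with tile_pad overlap to hide seams) instead of in one piece. A rough sketch, using the vs-swinir parameters that also appear in Selur's script further down this thread:
Code:
# Sketch only: smaller tiles -> lower peak VRAM but more passes (slower).
# Parameter names follow the SwinIR call used later in this thread.
import vapoursynth as vs
from vsswinir import SwinIR

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")   # any source filter will do
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")

clip = SwinIR(clip, task="real_sr_large", scale=4,
              tile_x=256, tile_y=256, tile_pad=16,   # shrink these if you run out of VRAM
              device_type="cuda", device_index=0)

clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
clip.set_output()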
8th November 2021, 09:49 | #51 | Link | ||
Registered User
Join Date: Aug 2002
Location: Italy
Posts: 309
|
https://github.com/pytorch/pytorch/issues/47702

Thx !
8th November 2021, 18:32 | #52 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
Here are a few examples:

used:
Code:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
# source: 'E:\clips\VTS_02_1-Sample-Beginning.demuxed.m2v'
# current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
# Loading E:\clips\VTS_02_1-Sample-Beginning.demuxed.m2v using D2VSource
clip = core.d2v.Source(input="E:/Temp/m2v_5d36292e1f7f53fd6e26be51d50bbf8c_853323747.d2v")
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", range_s="limited")
# making sure frame rate is set to 29.97
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# Deinterlacing using TIVTC
clip = core.tivtc.TFM(clip=clip)
clip = core.tivtc.TDecimate(clip=clip)  # new fps: 23.976
# make sure content is perceived as frame based
clip = core.std.SetFieldBased(clip, 0)  # DEBUG: vsTIVTC changed scanorder to: progressive
# cropping the video to 704x480
clip = core.std.CropRel(clip=clip, left=8, right=8, top=0, bottom=0)
from vsswinir import SwinIR
# adjusting color space from YUV420P8 to RGBS for VsSwinIR
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# resizing using SwinIR
clip = SwinIR(clip=clip, task="real_sr_large", scale=4, tile_x=352, tile_y=240, tile_pad=16, device_type="cuda", device_index=0)  # 2816x1920
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1474, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 23.976fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# Output
clip.set_output()

some more using RealSR_large:

Cu Selur

Last edited by Selur; 8th November 2021 at 18:50.
9th November 2021, 08:21 | #53 | Link |
Registered User
Join Date: Aug 2002
Location: Italy
Posts: 309
|
Very nice results (especially on the faces), even if - of course - not yet optimal for everything...
...btw, I hope to see a SwinIR version optimized for videos too.

Last edited by PatchWorKs; 9th November 2021 at 08:25.
10th November 2021, 08:13 | #55 | Link |
Registered User
Join Date: Aug 2002
Location: Italy
Posts: 309
|
Already asked, of course: https://github.com/JingyunLiang/SwinIR/issues/47
Note: I've also just "fed" @HolyWu with this awesome collection, let's see if other interesting "VS-ports" will come out...

Last edited by PatchWorKs; 10th November 2021 at 10:34.
11th November 2021, 11:05 | #56 | Link | |
Registered User
Join Date: Oct 2001
Posts: 454
|
For upscaling, algorithms like ESRGAN (single image) are not very suitable for real-life content - too much flickering. So unless SwinIR gets some extensions for multi-frame usage / flow detection / whatever, one will always get flickering / stutters / inconsistent movement...
19th November 2021, 20:09 | #58 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
Has anyone tested https://github.com/HolyWu/vs-hinet ?
Here are a few screen shots (not sure what to make of them and for what content this is really useful):
Mode: Deblur GoPro
Mode: Deblur REDS
Mode: denoise
Mode: derain

Last edited by Selur; 19th November 2021 at 21:32.
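For anyone who wants to reproduce these, a rough usage sketch follows. The HiNet() name and the task strings are my assumptions, by analogy with HolyWu's other ports (RGBS input, one task string per pre-trained model); verify against the vs-hinet README before using it:
Code:
# Rough sketch, interface assumed by analogy with vs-swinir and friends;
# check https://github.com/HolyWu/vs-hinet for the actual names.
import vapoursynth as vs
from vshinet import HiNet   # assumed import name

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")   # any source filter will do
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")

# one clip per pre-trained model, matching the screenshots above
deblur_gopro = HiNet(clip, task="deblur_gopro")   # task strings are assumptions
deblur_reds  = HiNet(clip, task="deblur_reds")
denoised     = HiNet(clip, task="denoise")
derained     = HiNet(clip, task="derain")

out = core.resize.Bicubic(derained, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
out.set_output()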
20th November 2021, 08:22 | #59 | Link |
Registered User
Join Date: Aug 2002
Location: Italy
Posts: 309
|
According to your tests on that frame, the highest fidelity seems to be achieved by the derain model. Btw, here are some questions:
- how fast is it?
- how does it compare to xClean?
- does it also upscale?
Last but not least (even if OT): did you try RIFE? https://github.com/HolyWu/vs-rife

Last edited by PatchWorKs; 20th November 2021 at 16:31.
20th November 2021, 10:17 | #60 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,399
|
speed: ~2-3 fps for SD content, so not that slow
xClean: no clue about xClean, haven't played around with it - too many options for my taste (+ I would need to add znedi3 and nnedi3cl support to it)
upscale: at least the current interface offers no upscaling, and the method does not upscale
rife: yes, I like it (with sceneChange added). Waiting for FrameRateConverter to properly support it
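A rough idea of what "with sceneChange added" can look like in practice: VapourSynth's built-in misc.SCDetect tags scene changes as frame props, which a scene-change-aware interpolation step can use to avoid blending frames across a cut. The RIFE() import and call below are assumptions based on the vs-rife repo name (and whether it reads those props depends on the build), so treat this as a sketch only:
Code:
# Sketch only: 2x interpolation with RIFE, with scene changes tagged beforehand.
# The RIFE() interface is assumed, not verified - see https://github.com/HolyWu/vs-rife.
import vapoursynth as vs
from vsrife import RIFE   # assumed import name

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")   # any source filter will do

# tag scene changes as _SceneChangeNext/_SceneChangePrev frame props
clip = core.misc.SCDetect(clip, threshold=0.140)

clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")   # RGB input, like the other ports
clip = RIFE(clip)   # assumed default: doubles the frame count
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")

clip.set_output()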