|
|
25th January 2022, 05:41 | #82 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
|
Yes, I'm using VSGAN 1.6.3 (with NVIDIA Game Ready driver version 511.23). After a system restart I can do a few more seeks inside the source, but after a bit VRAM stays at max until the preview crashes. :/
|
25th January 2022, 10:46 | #83 | Link | |
Registered User
Join Date: Jul 2019
Posts: 73
|
Quote:
Currently, half accuracy for EGVSR does not work correctly, even with the fixes on master atm. It's also somewhat VRAM-intensive, since it essentially stores and runs multiple frames (6 in total by default) at a time. One mistake people make is letting VS use its default multi-threading with EGVSR, when it should be disabled with `core.num_threads = 1`. Once you do this, it will only run the model on the current frame + n (interval) next frames at a time, instead of e.g. 72 frames with a num_threads of 12. The fixes I'm speaking of are in the GitHub repo, but not in a release yet; you could install straight from GitHub master if you want to give it a quick test. I'm still trying to get half accuracy properly working for EGVSR, and still working on methods to reduce VRAM, but sadly it's just not going all that well. It might simply take a lot of VRAM, considering the number of frames the network processes at once. And as for overlap: yes, it's not implemented in EGVSR at the moment, but perhaps that's something we could try one day to lower VRAM requirements. |
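[Editor's note] The VRAM arithmetic in the quote above can be sketched in plain Python. The function name and the default interval of 6 are purely illustrative; only the num_threads × interval relationship comes from the post:

```python
# Illustrative only: EGVSR keeps roughly (interval) frames resident per
# concurrently requested frame, so the peak frame count scales with the
# number of VapourSynth worker threads.
def resident_frames(num_threads: int, interval: int = 6) -> int:
    """Approximate frames held in memory at once, per the post above."""
    return num_threads * interval

print(resident_frames(12))  # 72 -- the "e.g. 72 frames with a num_threads of 12"
print(resident_frames(1))   # 6  -- with core.num_threads = 1, only one group
```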
|
25th January 2022, 15:36 | #84 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
|
Sadly, using 'core.num_threads = 1' doesn't help here.
Using: Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Limit thread count to 1
core.num_threads = 1
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi", cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex", format=vs.YUV420P8, alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
from vsgan import EGVSR
vsgan = EGVSR(clip=clip, device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_iter420000_EGVSR.pth"
# using model parameters from 4x_iter420000_EGVSR.defaults
vsgan.load(model, nb=10, degradation="BD", out_nc=3, nf=64)
vsgan.apply()  # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
Code:
CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 8.00 GiB total capacity; 6.62 GiB already allocated; 0 bytes free; 6.76 GiB reserved in total by PyTorch)
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Similar situation when using "core.num_threads = 1" with:
Code:
vsgan = ESRGAN(clip=clip, device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_BSRGAN.pth"
vsgan.load(model)
vsgan.apply(overlap=16)  # 2560x1408
The old 1.5.0 version worked fine with the same source and model. Trying to go back to 1.5.0 with
Code:
I:\Hybrid\64bit\Vapoursynth>python -m pip install --user --force VSGAN==1.5.0 Code:
Collecting VSGAN==1.5.0
  Using cached vsgan-1.5.0-py3-none-any.whl (11 kB)
Collecting numpy<2.0.0,>=1.19.5
  Using cached numpy-1.22.1-cp39-cp39-win_amd64.whl (14.7 MB)
Installing collected packages: numpy, VSGAN
ERROR: Exception:
Traceback (most recent call last):
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\cli\base_command.py", line 164, in exc_logging_wrapper
    status = run_func(*args)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper
    return func(self, options, args)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\commands\install.py", line 404, in run
    installed = install_given_reqs(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\req\__init__.py", line 73, in install_given_reqs
    requirement.install(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\req\req_install.py", line 765, in install
    scheme = get_scheme(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\__init__.py", line 208, in get_scheme
    old = _distutils.get_scheme(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\_distutils.py", line 130, in get_scheme
    scheme = distutils_scheme(dist_name, user, home, root, isolated, prefix)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\_distutils.py", line 69, in distutils_scheme
    i.finalize_options()
  File "distutils\command\install.py", line 274, in finalize_options
  File "distutils\command\install.py", line 437, in finalize_other
distutils.errors.DistutilsPlatformError: User base directory is not specified
Okay, scratch that: I made a backup of my Vapoursynth folder before updating to 1.6.3, so I went back to that. Memory usage with version 1.5.0 and:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi", cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex", format=vs.YUV420P8, alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
from vsgan import VSGAN
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
vsgan = VSGAN(clip=clip, device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_BSRGAN.pth"
vsgan.load_model(model)
vsgan.run(overlap=16)  # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
Cu Selur

Last edited by Selur; 25th January 2022 at 15:55. |
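[Editor's note] The OOM message in the post above suggests max_split_size_mb, which is set through PyTorch's PYTORCH_CUDA_ALLOC_CONF environment variable. A minimal sketch follows; the value 128 is an arbitrary example, not a recommendation from the thread:

```python
# Hedged sketch: PyTorch parses PYTORCH_CUDA_ALLOC_CONF when its CUDA caching
# allocator initializes, so set it before anything allocates on the GPU
# (e.g. at the very top of the .vpy script, before importing vsgan/torch).
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # example value
```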
25th January 2022, 20:47 | #85 | Link | |
Registered User
Join Date: Jul 2019
Posts: 73
|
Quote:
Have you tried using vspipe instead of going through vsedit, just to see what performance you actually get with as little going on as possible? (`vspipe script.vpy . -p`). Perhaps try closing all vsedit instances, check that the GPU usage has gone down (it will still have a bit loaded by PyTorch for some reason), and then reopen and try? Sometimes randomly seeking and clicking play a few times just uses too much VRAM before it clears, hence the crashes. But if you seek around without clicking play more than a few times, it shouldn't give you trouble. Clicking F5/Preview after a fair while also gives trouble, though those are vsedit issues to do with queue caches or something. |
|
25th January 2022, 22:04 | #86 | Link |
Registered User
Join Date: Jul 2019
Posts: 73
|
Update: v1.6.4 fixes a memory leak that happens for reasons I honestly couldn't pin down, but alas, if anyone has had any VRAM issues, please try v1.6.4.
Some other changes are in there as well, mainly to how the half=True param works. Spoiler: it doesn't exist any more, but don't worry, see the changelog. |
14th February 2022, 23:51 | #88 | Link |
Registered User
Join Date: Sep 2008
Posts: 365
|
I get instant crashes on 1.6.4; 1.6.0-1.6.3 work for some frames before crashing...
The only "stable" version for me seems to be 1.5.0. Testing with model RealESRGAN_x2plus.pth. Are there any debug logs I can activate to figure out the issue? Hardware: Nvidia RTX 3090 (tried both 4xx and 5xx series drivers), AMD 3900X, 32 GB RAM
__________________
(i have a tendency to drunk post) |
15th February 2022, 11:40 | #90 | Link | |
Registered User
Join Date: Sep 2008
Posts: 365
|
Quote:
Code:
chroma = video.resize.Spline36(video.width*2, video.height*2)
video = video.fmtc.resample(css="444")
video = video.fmtc.matrix(mat="709", col_fam=vs.RGB)
vsgan = VSGAN(video, device="cuda")
vsgan.load_model(r"RealESRGAN_x2plus.pth")
vsgan.run()
video = vsgan.clip
video = video.fmtc.matrix(mat="709", col_fam=vs.YUV, bits=16)
video = video.fmtc.resample(css="420")
video = video.fmtc.bitdepth(bits=8)
video = core.std.Merge(clipa=video, clipb=chroma, weight=[0, 1])
video.set_output()
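[Editor's note] For readers puzzled by the weight=[0, 1] at the end of the script above: std.Merge blends each plane as a*(1-w) + b*w, so this keeps the VSGAN result's luma and takes the chroma from the Spline36-resized original. A plain-Python sketch of that arithmetic (merge_plane is an illustrative helper, not a VapourSynth API):

```python
# Illustrative helper mirroring std.Merge's per-plane blend: out = a*(1-w) + b*w.
# weight=[0, 1] in the script above means w=0 for the luma plane (keep clipa,
# the VSGAN output) and w=1 for the chroma planes (take clipb, the resized source).
def merge_plane(a: float, b: float, w: float) -> float:
    return a * (1 - w) + b * w

print(merge_plane(120.0, 60.0, 0))  # 120.0 -- luma comes from clipa
print(merge_plane(120.0, 60.0, 1))  # 60.0  -- chroma comes from clipb
```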
__________________
(i have a tendency to drunk post) |
|
15th February 2022, 12:02 | #91 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
|
Yeah, that will not work with current VSGAN as the syntax changed.
Instead of: Code:
vsgan = VSGAN(video, device="cuda")
vsgan.load_model(r"RealESRGAN_x2plus.pth")
vsgan.run()
video = vsgan.clip
Code:
vsgan = ESRGAN(clip=video, device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth")  # load() instead of load_model()
vsgan.apply()  # <- not run()
video = vsgan.clip
Cu Selur |
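[Editor's note] If a script needs to run against both the old (1.5.x) and new (1.6.x) API described above, one option is to dispatch on whichever method name the installed version exposes. This is a hypothetical shim, not part of VSGAN itself:

```python
# Hypothetical compatibility shim (not part of VSGAN): call the newer method
# name if present, otherwise fall back to the older one.
def call_compat(obj, new_name, old_name, *args, **kwargs):
    fn = getattr(obj, new_name, None)
    if fn is None:
        fn = getattr(obj, old_name)
    return fn(*args, **kwargs)

# usage sketch, per the rename in the post above:
#   call_compat(vsgan, "load", "load_model", r"RealESRGAN_x2plus.pth")
#   call_compat(vsgan, "apply", "run")
```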
19th February 2022, 14:28 | #92 | Link | |
Registered User
Join Date: Sep 2008
Posts: 365
|
Quote:
__________________
(i have a tendency to drunk post) |
|
21st April 2022, 18:08 | #93 | Link |
Registered User
Join Date: Sep 2008
Posts: 365
|
Is it normal for Linux to be almost 2x faster than Windows when using the exact same VapourSynth script with VSGAN?
Windows: 3.98 fps, Linux: 7.67 fps. Or did I do something wrong with the environment setup for pytorch/vapoursynth/vsgan?
__________________
(i have a tendency to drunk post) |
22nd April 2022, 15:59 | #95 | Link |
Registered User
Join Date: Oct 2018
Posts: 7
|
Hi.
I'm using a 1060 6GB and only getting around 0.2 fps with "2x_VHS-upscale-and-denoise_Film_477000_G.pth" from https://upscale.wiki/wiki/Model_Database
I'm also getting this warning in cmd:
"vsgan\utilities.py:36: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_new.cpp:998.) torch.frombuffer("
Is that speed correct for a 1060 6GB? When looking at Task Manager, the GPU jumps to 20% every 2-3 seconds but is idle in between. I see that my dedicated memory is almost full. Is that the reason for it being that slow? Do I need another card with more VRAM? I have a Threadripper 1950X, but there's no way to use that instead of the GPU?
With "RealESRGAN_x2plus.pth" I'm getting 0.5 fps.

Last edited by knumag; 22nd April 2022 at 16:29. Reason: new information |
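[Editor's note] The "buffer is not writable" warning above comes from torch.frombuffer being handed a read-only frame buffer; as the message itself says, it is suppressed after the first occurrence, and it is likely noise rather than the cause of the low fps. The same mechanics can be demonstrated with NumPy (used here only as a stand-in for PyTorch):

```python
# Demonstration of the read-only-buffer situation using NumPy as a stand-in:
# frombuffer on an immutable bytes object gives a zero-copy but read-only view;
# an explicit copy (what the warning recommends) is writable.
import numpy as np

raw = bytes(range(8))                      # stand-in for a read-only frame plane
view = np.frombuffer(raw, dtype=np.uint8)  # zero-copy view of the buffer
print(view.flags.writeable)                # False -- like the non-writable tensor
safe = view.copy()                         # explicit copy, as the warning suggests
print(safe.flags.writeable)                # True
```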
22nd April 2022, 17:38 | #96 | Link | |
Registered User
Join Date: Sep 2008
Posts: 365
|
Quote:
I can see a compute mode with the CLI though; it's currently configured to Default:
nvidia-smi -q | Select-String -Pattern "compute"
Compute Mode : Default
__________________
(i have a tendency to drunk post) |
|
22nd April 2022, 17:55 | #98 | Link |
Registered User
Join Date: Oct 2018
Posts: 7
|
PAL SD from VHS after QTGMC.
But something must be wrong somewhere; I'm just getting black video as output... Code:
import vapoursynth as vs
core = vs.core
core.num_threads = 8
core.max_cache_size = 6000
video = core.lsmas.LWLibavSource(source=r"1.mp4")
from vsgan import ESRGAN
video = core.fmtc.resample(clip=video, css="444")
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.RGB)
vsgan = ESRGAN(clip=video, device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth")
vsgan.apply()
video = vsgan.clip
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.YUV, bits=16)
video = core.fmtc.resample(clip=video, css="420")
video = core.fmtc.bitdepth(clip=video, bits=8)
video = core.resize.Spline36(video, 1440, 1080)
video.set_output()
Code:
import vapoursynth as vs
core = vs.core
core.num_threads = 8
core.max_cache_size = 6000
video = core.lsmas.LWLibavSource(source=r"1.mp4")
from vsgan import ESRGAN
video = core.resize.Bicubic(clip=video, format=vs.RGB24, matrix_in_s="709", range_s="limited")
vsgan = ESRGAN(clip=video, device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth")
vsgan.apply()
video = vsgan.clip
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.YUV, bits=16)
video = core.fmtc.resample(clip=video, css="420")
video = core.fmtc.bitdepth(clip=video, bits=8)
video = core.resize.Spline36(video, 1440, 1080)
video.set_output()

Last edited by knumag; 22nd April 2022 at 18:28. |
22nd April 2022, 19:36 | #99 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
|
Script seems fine to me.
(as a side note: using https://github.com/HolyWu/vs-realesr...r/vsrealesrgan is nearly 2 times faster than vsgan with RealESRGAN here.) Do you use the same driver version on Linux and Windows? |
25th April 2022, 13:16 | #100 | Link | |
Registered User
Join Date: Oct 2018
Posts: 7
|
Quote:
Followed your guide here: https://forum.selur.net/thread-1858.html
But when installing vsdpir and vsrealesrgan, I'm getting errors. Code:
Using cached VapourSynth-58.zip (558 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
    Traceback (most recent call last):
      File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 64, in <module>
        dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
      File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 38, in query
        reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
    FileNotFoundError: [WinError 2] The system cannot find the file specified

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<string>", line 2, in <module>
      File "<pip-setuptools-caller>", line 34, in <module>
      File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 67, in <module>
        raise OSError("Couldn't detect vapoursynth installation path")
    OSError: Couldn't detect vapoursynth installation path
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details. |
|
Tags |
esrgan, gan, upscale, vapoursynth |
|
|