Old 31st January 2023, 16:34   #4781  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Quote:
Originally Posted by _Al_ View Post
Make mask a grayscale clip with only values 0 or 255 (for 8bits). You can use Levels or Expr to make it from your logo.
Like converting to YUV and then using ShufflePlanes to take the first plane?

Levels needs 0-255 inputs and outputs; how would I implement your idea?
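What _Al_ suggests with Expr boils down to a per-pixel threshold. Here is the 8-bit logic behind an expression like std.Expr(clip, "x 128 < 0 255 ?") sketched in plain Python (the threshold value 128 is just an illustrative choice):

```python
def binarize(x, thr=128):
    """Per-pixel threshold: everything below thr becomes 0, the rest 255.
    Mirrors the RPN expression "x 128 < 0 255 ?" on 8-bit pixels."""
    return 0 if x < thr else 255

# A soft logo edge (say value 90) snaps to black, a solid area to white:
print(binarize(90), binarize(200))  # 0 255
```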
~ VEGETA ~ is offline   Reply With Quote
Old 31st January 2023, 17:14   #4782  |  Link
_Al_
Registered User
 
Join Date: May 2011
Posts: 321
I'd check the alpha first, in a previewer with a color picker: it needs to be 255 where the logo should be fully visible over the clip, and 0 where the clip should show through without any transition. The borders would be some grayscale values for smooth transitions. Also, it looks like mask has to be passed specifically as a keyword argument: havsfunc.Overlay(clip, logo, mask=alpha)

Ah, those levels... if the alpha values do not reach 255, then for a grayscale clip (the plane does not have to be specified) use something like
core.std.Levels(max_in=230, max_out=255) to stretch them to 255. That max_in number of 230 you test in a previewer; it could be a bit higher or lower.
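For reference, that Levels call is just a linear range stretch per pixel. Here is the 8-bit math sketched in plain Python (gamma omitted, parameter values taken from the example above):

```python
def levels_8bit(x, min_in=0, max_in=230, min_out=0, max_out=255):
    """Linear range stretch, like core.std.Levels with gamma=1 on 8-bit."""
    y = (x - min_in) * (max_out - min_out) / (max_in - min_in) + min_out
    return max(0, min(255, int(y + 0.5)))  # round and clamp to the 8-bit range

# Alpha that previously topped out at 230 now reaches full 255:
print(levels_8bit(230), levels_8bit(0))  # 255 0
```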

Last edited by _Al_; 31st January 2023 at 17:26.
_Al_ is offline   Reply With Quote
Old 31st January 2023, 17:25   #4783  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Quote:
Originally Posted by _Al_ View Post
I'd check the alpha first, in a previewer with a color picker: it needs to be 255 where the logo should be fully visible over the clip, and 0 where the clip should show through without any transition. The borders would be some grayscale values for smooth transitions. Also, it looks like mask has to be passed specifically as a keyword argument: havsfunc.Overlay(clip, logo, mask=alpha)

Thanks, but I managed to make insertsign work by just passing the string of the file location as input rather than an ffms2 clip.
~ VEGETA ~ is offline   Reply With Quote
Old 7th February 2023, 11:13   #4784  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
I'd like to ask about better ways to sharpen anime than regular sharpeners (like LSFmod). This is regardless of whether the source is descaled or not.

I found AI/NN upscalers with nice results in terms of denoising, deblocking, and sharpening, but are they used in actual fansub encodes (modern anime)? I am only interested in making the encodes slightly sharper (no halos), not over-sharp.

Run one of these upscalers, then SSIM-downsample back to 1080p, then do a MaskedMerge with an edge mask to take only the sharp edges from the upscaled clip. Is this a good approach? I have a Ryzen 7900X CPU, a 3060 Ti GPU, and 32 GB of DDR5-5600, so I think it can handle a high workload.
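To make the MaskedMerge step concrete, here is the per-pixel 8-bit math sketched in plain Python (a simplified model; the real filter also handles rounding, other bit depths, and first_plane):

```python
def masked_merge(base, sharp, mask):
    """Blend two 8-bit pixels by an 8-bit mask: 0 keeps base, 255 takes sharp."""
    return (base * (255 - mask) + sharp * mask + 127) // 255

# Flat background (mask 0) stays untouched; a line-art edge (mask 255)
# comes entirely from the sharpened/upscaled clip:
print(masked_merge(100, 140, 0), masked_merge(100, 140, 255))  # 100 140
```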

What do you think?
~ VEGETA ~ is offline   Reply With Quote
Old 7th February 2023, 20:18   #4785  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
You might also want to look into VSGAN models from https://upscale.wiki/wiki/Model_Database; most of them are trained on anime.
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 7th February 2023, 21:20   #4786  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Quote:
Originally Posted by Selur View Post
You might also want to look into VSGAN models from https://upscale.wiki/wiki/Model_Database; most of them are trained on anime.
I will dig into this soon to learn the syntax for using them in VS.

However, what do you think about the approach I explained above?

Also, besides VSGAN, what other similar tools are used for real anime release encodes, not just for testing?

I feel like such tools can do some damage to backgrounds, or apply unwanted processing like denoising and deblocking; that's why I pointed out the masks.
~ VEGETA ~ is offline   Reply With Quote
Old 8th February 2023, 17:47   #4787  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Yes, your approach might work too.
Anime release groups usually use tons of masking and rarely use a filter as is. (+ they often filter per scene)
You might also want to check out the 'Enhance Everything!' discord channel (https://discord.com/invite/cpAUpDK) and the 'Irrational Encoding Wizardry' (https://discord.gg/qxTxVJGtst) channel.

Quote:
I will dig into this soon to learn the syntax for using them in VS.
Here's a simple example:
Code:
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/1x_BroadcastToStudioLite_485k.pth"
vsgan.load(model)
vsgan.apply()
clip = vsgan.clip
The VSGAN site also has some good documentation.

Cu Selur
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 8th February 2023, 18:37   #4788  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Quote:
Originally Posted by Selur View Post
Yes, your approach might work too.
Anime release groups usually use tons of masking and rarely use a filter as is. (+ they often filter per scene)
You might also want to check out the 'Enhance Everything!' discord channel (https://discord.com/invite/cpAUpDK) and the 'Irrational Encoding Wizardry' (https://discord.gg/qxTxVJGtst) channel.


Here's a simple example:
Code:
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/1x_BroadcastToStudioLite_485k.pth"
vsgan.load(model)
vsgan.apply()
clip = vsgan.clip
The VSGAN site also has some good documentation.

Cu Selur
Thanks for this, my friend.

What other tools exist for this task besides VSGAN/ESRGAN? How can we compare their results and see which is best for anime?

I know these tools do many enhancements, but I am only interested in getting the lines sharper: the line art itself, not the background or texture.

So I don't want to make those silly "4K upscaled anime" releases, or make everything very sharp... I think you get what I mean.

I know fansub releases use a lot of masking, mine included. However, I haven't seen a VS script for a release that has something like VSGAN in it, which made me wonder why it is not used.
~ VEGETA ~ is offline   Reply With Quote
Old 8th February 2023, 19:19   #4789  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
AFAIK most release groups use normal filters or script collections like lvsfunc & co., and no magic tools.
Why they aren't used is probably easy to explain:
a. It requires up-to-date hardware, especially if you want to encode tons of content. Encoding on multiple machines to speed up processing is hard if each of them requires a 3000-series+ NVIDIA GPU to not totally suck, and even then they are not really fast.
b. Assuming you don't need to do restoration, you can achieve most of what AI filtering does with other filters if you spend enough effort.
c. Aside from different types of masking, you don't have much control over what the AI does (assuming you didn't train the models yourself).
d. Folks filtering anime often want to keep artifacts which they perceive as details of the source. (There is also the debate about which release of anime XY has the colors 'right'; often it has to be the one that came out earlier...)
=> To get to the bottom of things, you will probably need to go to the groups' Discord channels and ask them; some might actually reply honestly and not simply say 'ai bad => you bad' *gig*

Cu Selur
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 8th February 2023, 19:32   #4790  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
I get this error: https://pastebin.com/Kj8QBQDi

I tried many tools but could not get my release to be as sharp as another good one. I talked to that group, but they didn't give many details on how they achieved it; that's why I thought of this method.

I have a Ryzen 7900X and a 3060 Ti, so I guess my PC qualifies.

I get that AI upscaling is bad for anime releases, but I don't plan to use it that way. I only want the good sharp lines; everything else stays exactly as it was.

Last edited by ~ VEGETA ~; 8th February 2023 at 19:48.
~ VEGETA ~ is offline   Reply With Quote
Old 8th February 2023, 19:46   #4791  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
btw. can someone port EZDenoise to Vapoursynth?
Original:
Code:
function EZdenoise(clip Input, int "thSAD", int "thSADC", int "TR", int "BLKSize", int "Overlap", int "Pel", bool "Chroma", bool "out16")
{
thSAD = default(thSAD, 150)
thSADC = default(thSADC, thSAD)
TR = default(TR, 3)
BLKSize = default(BLKSize, 8)
Overlap = default(Overlap, 4)
Pel = default(Pel, 1)
Chroma = default(Chroma, false)
out16 = default(out16, false)

Super = Input.MSuper(Pel=Pel, Chroma=Chroma)
Multi_Vector = Super.MAnalyse(Multi=true, Delta=TR, BLKSize=BLKSize, Overlap=Overlap, Chroma=Chroma)

Input.MDegrainN(Super, Multi_Vector, TR, thSAD=thSAD, thSAD2=int(float(thSAD*0.9)), thSADC=thSADC, thSADC2=int(float(thSADC*0.9)), out16=out16)
}
Cu Selur
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 8th February 2023, 19:52   #4792  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
@~ VEGETA ~: No clue. First time I've seen that error. :/
I can send you a link to my current 'torch-AddOn' for Hybrid, which is basically a folder with a portable VapourSynth that also includes vsgan (and tons of other stuff).

Cu Selur
__________________
Hybrid here in the forum, homepage

Last edited by Selur; 8th February 2023 at 19:59.
Selur is offline   Reply With Quote
Old 28th February 2023, 07:56   #4793  |  Link
~ VEGETA ~
The cult of personality
 
~ VEGETA ~'s Avatar
 
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Hello

I am trying to use https://github.com/YomikoR/GetFnative with base dimensions of 1920x1080 and a fractional height of 847.047 (or so; I don't remember exactly right now).

However, I'd like to use the SSIM downsampler instead of regular Spline36 if possible. I tried, but could not, because it always produces a shifted image.

I tried different manual shift values but couldn't get it right the way the original code example on the linked page does.

Any tips?
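For the shift problem: fractional-height descales are usually expressed as integer output dimensions plus fractional src_width/src_height/src_left/src_top cropping arguments, with the fractional window centered. Here is that convention sketched in plain Python; this is my understanding of GetFnative-style cropping args (treat the helper and its rounding rule as assumptions), and the downsampler you use must accept fractional src_* parameters for the shift to line up:

```python
from math import ceil

def descale_cropping_args(frac_h, base_w=1920, base_h=1080):
    """Hypothetical helper: center a fractional native resolution
    inside even integer dimensions (GetFnative-style convention)."""
    frac_w = frac_h * base_w / base_h     # keep the base aspect ratio
    w = ceil(frac_w / 2) * 2              # round up to even integers
    h = ceil(frac_h / 2) * 2
    return dict(width=w, height=h,
                src_width=frac_w, src_height=frac_h,
                src_left=(w - frac_w) / 2,  # center the fractional window
                src_top=(h - frac_h) / 2)

args = descale_cropping_args(847.047)
```

With frac_h = 847.047 this gives an 1506x848 target with src_top of about 0.4765, which is the kind of sub-pixel offset that looks like a "shifted image" when it is dropped.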
~ VEGETA ~ is offline   Reply With Quote
Old 6th March 2023, 22:44   #4794  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
Quote:
Originally Posted by Selur View Post
btw. can someone port EZDenoise to Vapoursynth?
Cu Selur
The magic is inside mvtools, not in this simple mvtools usage function. First, it would be good to port to VapourSynth all the new features of today's mvtools (the post-2.7.45 builds up to the end of 2022, or some newly planned 2023 features, like finally fixing the quality issue of https://github.com/pinterf/mvtools/issues/59 for non-4:4:4 sources).

For a start, VS may still not fully support even the plain MDegrainN of the old 2.7.45 era. As of the latest 2023 commit (https://github.com/dubhater/vapoursy...f67254580b7ab9) it is only starting to support a fixed tr of up to 6. That is light-years behind the latest end-of-2022 MDegrainN, which already has 'spatial' multi-pass blending modes and is about to gain 'temporal' multi-pass blending as a next step.

Also missing are the most useful features, such as the several interpolated-overlap modes at the MDegrain stage (running MAnalyse in maximum-speed non-overlapped mode, or using MVs from a hardware MPEG-encoder accelerator), including the new quality/performance-balanced 'diagonal' overlap mode, which uses only 2x the number of blocks yet gets close to the quality of the old blksize/2 4x-overlap design. There is also runtime sub-sample shifting, which saves host RAM traffic and avoids trashing the expensive on-chip CPU caches with pre-calculated sub-sample refined planes. Correct pel=4 UV processing in 4:2:0 requires pel/8 granularity, so a design done the old way via MSuper would stress the memory subsystem even more.

Also, running MV analysis with a 'large' tr of 10 or more opens up better possibilities for intermediate linear or non-linear grading of the MVs along the time axis, applied after MAnalyse and before MDegrain (the MVLPF feature currently implemented in MDegrainN). Using a small tr of 1..6 gives too few time samples of the MVs to build a good linear FIR convolution LPF.

Last edited by DTL; 6th March 2023 at 22:54.
DTL is offline   Reply With Quote
Old 7th March 2023, 13:44   #4795  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Okay, so in conclusion, with https://github.com/dubhater/vapoursynth-mvtools, EZDenoise can't be ported at the moment.
Sad news, but I will have to live with it.
Thanks.
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 8th April 2023, 16:28   #4796  |  Link
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Myrsloik, can we get a weight parameter for std.MaskedMerge like there is for std.Merge?
__________________
(i have a tendency to drunk post)
mastrboy is offline   Reply With Quote
Old 8th April 2023, 22:21   #4797  |  Link
Myrsloik
Professional Code Monkey
 
Myrsloik's Avatar
 
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
Quote:
Originally Posted by mastrboy View Post
Myrsloik, can we get a weight parameter for std.MaskedMerge like there is for std.Merge?
What? The weight is already in the mask.
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet
Myrsloik is offline   Reply With Quote
Old 9th April 2023, 15:48   #4798  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
He probably wants a weight because he isn't using a grayscale mask, but a binary one...
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 9th April 2023, 20:42   #4799  |  Link
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Quote:
Originally Posted by Myrsloik View Post
What? The weight is already in the mask.
It might just be a lack of knowledge on my part about how masks work...

Consider the following mask/merge:
Code:
edge_mask = core.tedgemask.TEdgeMask(video, link=1, threshold=5.0).std.Maximum(threshold=128)
edge_merged = core.std.MaskedMerge(clipa=video, clipb=filtered, mask=edge_mask, planes=[0, 1, 2], first_plane=True)
output = core.std.Merge(clipa=video, clipb=edge_merged, weight=[0.45])
If I can control the weight in the mask itself, how would that code snippet look if I wanted to replace the core.std.Merge(clipa=video, clipb=edge_merged, weight=[0.45]) part by scaling the mask weight to 45%?
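For what it's worth, the two forms should be mathematically equivalent: pre-scaling the mask by the weight (e.g. something like std.Expr(edge_mask, "x 0.45 *")) gives the same blend as the Merge-after-MaskedMerge chain in the snippet above. A plain-Python check of that identity on single float pixels (a sketch of the math, ignoring the plugins' integer rounding):

```python
def masked_merge(a, b, m):
    """Float model of std.MaskedMerge with the mask m normalized to [0, 1]."""
    return a + (b - a) * m

def merge(a, b, w):
    """Float model of std.Merge with weight w."""
    return a + (b - a) * w

a, b, m, w = 100.0, 180.0, 0.75, 0.45
chain = merge(a, masked_merge(a, b, m), w)  # MaskedMerge, then Merge(weight=0.45)
scaled = masked_merge(a, b, m * w)          # same mask pre-scaled by the weight
# Both evaluate to (approximately) 127.0
```

So for a binary mask, scaling its white level down to 45% (about 115 in 8-bit) would fold the Merge step into the mask itself.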
__________________
(i have a tendency to drunk post)
mastrboy is offline   Reply With Quote
Old 15th April 2023, 13:39   #4800  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
@mastrboy: TEdgeMask by default creates a binary mask, but if you set threshold to 0 you can set scale... doesn't that do what you are aiming for?
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote