Looks promising. It definitely has a more "high-res" look to it than any other non-GAN algorithm I've seen so far. Of course a key question will be how much the artifacts go down once it's fully trained.
As much as I like comparisons to NGU, it's probably not very fair, because NGU was created to run in real time. @feisty, how long does it take to upscale one 1080p video frame (using the Tesla), just to get a first impression about speed?
(And if I may suggest, try using L1 loss instead of L2 loss.)
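(To illustrate what I mean, here's a rough sketch, not anyone's actual training code: L2 squares the per-pixel residual, so it punishes large errors hard and nudges the network toward averaging over plausible outputs, which tends to look blurry; L1 grows linearly and usually gives sharper results.)

```python
def l1_loss(pred, target):
    # mean absolute error: penalty grows linearly with the residual
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    # mean squared error: penalty grows quadratically, so outliers dominate
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# toy "pixels": one big error (0.3) among small ones
pred   = [0.1, 0.5, 0.7]
target = [0.0, 0.5, 1.0]

print(l1_loss(pred, target))
print(l2_loss(pred, target))
# relative to L1, L2 weights the 0.3 residual 3x more than the 0.1 one,
# which is exactly what drives the over-smoothed look in upscalers
```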