21st April 2019, 15:04   #1630
SmilingWolf
I am maddo saientisto!
 
Join Date: Aug 2018
Posts: 95
Quote:
Originally Posted by dapperdan
VMAF isn't designed for still images, but they do provide the tools to create your own VMAF for specific use cases (e.g. anime on a phone screen, or video game content), so it surprises me that no one has taken the framework and applied it to still images yet.

It should in theory be able to fuse the results of those other still-image tests and create something even better aligned with human-reported scores than any one alone. Presumably not Netflix's main use case, but you'd think they deliver enough still images to make it worthwhile, since they already have the skills.
I was looking into this very matter earlier today, and the main problem is, as always with this kind of problem, the lack of high-quality MOS datasets. In particular, the only "extensive" dataset I've found is TID2013, and even that covers only 2 kinds of image compression distortion, for 25 images, at 5 intensities = 250 distorted images with their corresponding scores.
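Pulling out that subset is easy enough once you have the TID2013 metadata. Here's a minimal Python sketch; it assumes the standard mos_with_names.txt layout (one "MOS filename" pair per line, filenames like i01_10_3.bmp with the distortion type as the middle field, and the usual TID2013 numbering where 10 = JPEG and 11 = JPEG2000), so adjust if your copy differs:
Code:
# Minimal sketch: extract the JPEG/JPEG2000 subset from TID2013's MOS file.
# Assumes "mos_with_names.txt" has one "<MOS> <filename>" pair per line and
# filenames of the form i<ref>_<distortion>_<level>.bmp (case may vary).
COMPRESSION_TYPES = {10, 11}  # 10 = JPEG, 11 = JPEG2000 in TID2013

def load_compression_mos(path="mos_with_names.txt"):
    subset = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue
            mos_str, name = parts
            dist_type = int(name.split("_")[1])  # middle field of iXX_YY_Z.bmp
            if dist_type in COMPRESSION_TYPES:
                subset[name] = float(mos_str)
    return subset  # 25 images x 2 distortions x 5 levels = 250 entries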

When calculating the SROCC for only the "compression" distortions (JPEG and J2K), these are the results:
Code:
--- top 33%
PSNRHA          0.9686
DSSIM          -0.9683
PSNRHVS         0.9677
PSNRHMA         0.9651
PSNRHVSM        0.9603
FSIMc           0.9589
FSIM            0.9580
VMAF_rb_v0.6.3  0.9524
SSIMULACRA     -0.9519
VMAF_v0.6.1     0.9505
WSNR            0.9468
--- middle
MSSIM           0.9427
VIFP            0.9380
--- low 33%
PSNRc           0.9200
CQM             0.9190
PSNR            0.9170
VSNR            0.9162
SSIM            0.9147
NQM             0.9023
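For the record, the SROCC itself is just Spearman's rank correlation between each metric's scores and the MOS over those 250 images. A rough scipy sketch (metric_scores is a placeholder for whatever metric you're evaluating, keyed like the MOS subset above):
Code:
# Sketch of the correlation step, assuming metric_scores holds your metric's
# value for each of the 250 distorted images (same keys as the MOS subset).
from scipy.stats import spearmanr

def srocc(metric_scores, mos_subset):
    names = sorted(mos_subset)
    metric = [metric_scores[n] for n in names]
    mos = [mos_subset[n] for n in names]
    rho, _ = spearmanr(metric, mos)
    return rho
The negative values for DSSIM and SSIMULACRA in the table simply reflect that they are distance-style metrics (lower = better), so they correlate negatively with MOS.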
I also tweeted at Jon Sneyers about the dataset they used to validate SSIMULACRA; we'll see if he can release it independently of a blog post that, two years on, is probably never going to happen.