Posted 24th June 2018, 23:05 by blurred (Registered User)
While nobody doubts that Google has no problem eating real inventors for breakfast, this patent situation raises important questions about the future of video/image compression:
1) Can Google's competitors now safely consider ANS for image/video compression, e.g. MPEG for H.266?
2) Is it technically a good idea in the first place: switching from binary CABAC to e.g. a 16-symbol alphabet, as in AV1? If so, which entropy coder would be more appropriate for such a 16-symbol alphabet?

For AV1, the Daala range coder ultimately won - it uses 16 multiplications per symbol (one per CDF entry), turning out ~7x slower in software than rANS, which also gives better compression: https://sites.google.com/site/powturbo/entropy-coder
In contrast, rANS uses only one multiplication per symbol (by f[s] = CDF[s+1] - CDF[s]), making it much faster at least in software (and more energy-efficient in hardware), but it requires an additional buffer for encoding - which happens e.g. only once per thousands/millions of views of a YouTube/Netflix video. A 16-symbol alphabet is also used, for example, in Dropbox's DivANS: https://blogs.dropbox.com/tech/2018/...r-with-divans/
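To make the "one multiplication per symbol" point concrete, here is a toy rANS coder on a big-integer state (no renormalization, so not the streaming variant any real codec uses - the alphabet, frequencies, and function names are illustrative, not taken from AV1 or DivANS):

```python
FREQ = [3, 2, 2, 1]   # f[s] for a toy 4-symbol alphabet
CDF = [0, 3, 5, 7]    # CDF[s] = sum of FREQ[:s]
M = sum(FREQ)         # total frequency = 8

def encode(symbols):
    x = 1                                # initial state
    for s in reversed(symbols):          # encode in reverse; decode restores order
        # one divide/multiply pair per symbol, by f[s] and the total M
        x = (x // FREQ[s]) * M + CDF[s] + (x % FREQ[s])
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        r = x % M                        # slot within [0, M)
        # symbol lookup: find s with CDF[s] <= r < CDF[s+1]
        s = max(i for i in range(len(CDF)) if CDF[i] <= r)
        # single multiplication per symbol (by f[s]); contrast with a
        # range coder, which multiplies by the CDF of every symbol
        x = FREQ[s] * (x // M) + r - CDF[s]
        out.append(s)
    return out

msg = [0, 2, 1, 0, 3, 0]
assert decode(encode(msg), len(msg)) == msg
```

Note that encoding runs over the message backwards, which is exactly why a real encoder needs the extra buffering mentioned above, while the decoder can stream symbols out forwards with a single multiplication each.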

Here is one discussion, which can be summarized as: Google behaves like a dog in the manger - it showed a lack of competence to exploit the savings rANS offers, and now tries to patent it so that the competition cannot use them either - https://encode.ru/threads/1890-Bench...ll=1#post56945
Here is another: https://forum.doom9.org/showthread.p...47#post1845147