FLIF
Ideas for improving lossy encoding
I just learnt that @jonsneyers is working on improving efficiency in the low-quality encoding case. I had some ideas too, which I want to try at some point. Just noting them down here in case anyone wants to pursue them or combine them with other strategies.
- Apply a light Gaussian filter to the image (especially the chroma channels) prior to the lossy encoding step. This might reduce mispredictions in a more visually friendly way.
- Run an edge-detection filter and use the edge information to tune the lossiness per pixel. Pixels on edges should be less lossy. (A rough sketch of both ideas follows this list.)
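
Here is roughly what I have in mind, purely as illustration (nothing below is FLIF API; the `Plane` type and the `gaussian3x3` / `loss_budget` names are made up for this sketch):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Illustration only: none of these names exist in FLIF.
using Plane = std::vector<std::vector<int>>;

// 3x3 Gaussian blur (kernel 1-2-1 / 2-4-2 / 1-2-1, normalized by 16),
// meant to be applied to the chroma planes before the lossy pass.
Plane gaussian3x3(const Plane &in) {
    const int h = (int)in.size(), w = (int)in[0].size();
    Plane out = in;
    static const int k[3][3] = {{1,2,1},{2,4,2},{1,2,1}};
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += k[dy + 1][dx + 1] * in[y + dy][x + dx];
            out[y][x] = (sum + 8) / 16;
        }
    return out;
}

// Sobel-style edge magnitude on the luma plane, turned into a per-pixel
// loss budget: strong edge -> small allowed loss, flat area -> full budget.
Plane loss_budget(const Plane &luma, int max_loss) {
    const int h = (int)luma.size(), w = (int)luma[0].size();
    Plane budget(h, std::vector<int>(w, max_loss));
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int gx = luma[y-1][x+1] + 2*luma[y][x+1] + luma[y+1][x+1]
                   - luma[y-1][x-1] - 2*luma[y][x-1] - luma[y+1][x-1];
            int gy = luma[y+1][x-1] + 2*luma[y+1][x] + luma[y+1][x+1]
                   - luma[y-1][x-1] - 2*luma[y-1][x] - luma[y-1][x+1];
            int mag = std::abs(gx) + std::abs(gy);           // cheap gradient magnitude
            budget[y][x] = std::max(1, max_loss - mag / 8);  // shrink budget near edges
        }
    return budget;
}
```

The budget map would then cap the per-pixel loss in the encoder, so edges keep more precision than flat regions.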
Thanks for the ideas. I just committed https://github.com/FLIF-hub/FLIF/commit/9644035bb13e7861d9dc5d7a2fa294ca0de50156 which improves low-quality lossy encoding.
- Gaussian filter: to some extent, we are already doing something like this. The preprocessing step starting here: https://github.com/FLIF-hub/FLIF/blob/master/src/flif-enc.cpp#L459 essentially does a kind of selective blurring: if the loss is enough to make a pixel at a deeper zoomlevel become equal to its predictor, then some of the pixel's value gets transferred to its neighbors (from a higher zoomlevel).
- Edge detection: as it is now, edges are reasonably well preserved because most of the loss is applied only in smooth areas (where the difference between the predicted and actual value is below the threshold), while in non-smooth areas the loss is more subtle (dropping some least-significant bits of the difference, which is OK on edges); a rough sketch of this per-pixel loss follows below. So I don't think this will help much; maybe even the opposite is true: I think we might be "trying too hard" to preserve edges, at least at low qualities.
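
To make that concrete, here is a deliberately simplified sketch of the per-pixel loss described above (not the actual flif-enc.cpp code; `lossy_residual` and its parameters are invented for this example):

```cpp
#include <cstdlib>

// Deliberately simplified; this is not the actual flif-enc.cpp code.
// Smooth area: the residual is dropped entirely (the decoder just uses the
// prediction). Edge/textured area: only the least-significant bits of the
// residual are dropped.
int lossy_residual(int actual, int predicted, int threshold, int bits_to_drop) {
    int diff = actual - predicted;
    if (std::abs(diff) < threshold)
        return 0;                                   // full loss where it is least visible
    int sign = diff < 0 ? -1 : 1;
    int mag  = std::abs(diff) >> bits_to_drop;      // coarsen the magnitude
    return sign * (mag << bits_to_drop);            // edges keep most of their contrast
}
```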
Not sure if it's already being used, but activity masking is a must-have technique for good lossy compression: https://people.xiph.org/~jm/daala/pvq_demo/#pvqsteps_cap
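
For reference, a minimal sketch of the activity-masking idea from that demo, assuming a block-based quantizer (which FLIF does not currently have; the function name and the 0.25 exponent are placeholders, not FLIF code):

```cpp
#include <cmath>
#include <vector>

// Sketch of activity masking: errors are most visible in flat regions, so
// quantize those finely and let busy (high-variance) regions absorb more loss.
double masked_quant_step(const std::vector<int> &block, double base_step) {
    double mean = 0.0;
    for (int v : block) mean += v;
    mean /= block.size();

    double var = 0.0;
    for (int v : block) var += (v - mean) * (v - mean);
    var /= block.size();

    // Coarser quantization on high-activity blocks; the 0.25 exponent is a
    // placeholder for whatever perceptual tuning would choose.
    return base_step * std::pow(1.0 + var, 0.25);
}
```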