NeuralCompression
A collection of tools for neural compression enthusiasts.
Presently, `pmf_to_quantized_cdf` is serial and forces the user into two additional, expensive copies by both accepting a `std::vector` and returning a `std::vector`. Both of these issues can be resolved...
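For context on what this routine does, here is a rough NumPy sketch of a `pmf_to_quantized_cdf`-style computation: scale a probability mass function to integer frequencies summing to `2**precision` and emit the cumulative table. The renormalization strategy (absorbing rounding error in the largest bin) is illustrative, not the repository's actual C++ implementation:

```python
import numpy as np

def pmf_to_quantized_cdf_sketch(pmf, precision=16):
    """Quantize a pmf into an integer CDF for range coding (illustrative sketch)."""
    total = 1 << precision
    pmf = np.asarray(pmf, dtype=np.float64)
    # Scale to integer frequencies; clamp to >= 1 so every symbol stays codable.
    freqs = np.maximum(np.round(pmf * total), 1).astype(np.int64)
    # Absorb the accumulated rounding error in the largest bin, which can
    # best afford the adjustment without dropping below 1.
    diff = total - int(freqs.sum())
    freqs[np.argmax(freqs)] += diff
    assert freqs.min() >= 1
    # CDF has one more entry than the pmf: cdf[0] == 0, cdf[-1] == 2**precision.
    return np.concatenate(([0], np.cumsum(freqs)))
```

Both copies the issue describes would go away if the binding accepted a pointer/span over the pmf and wrote the CDF into a caller-provided buffer instead of materializing `std::vector`s on both sides.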
## Enhancement

Thanks for this wonderful work. However, is there any guidance on training the VQ-VAE in MS-ILLM?
Hi, my machine is not powerful enough to complete training on a large dataset. I just want to quickly test the effect of torch_vct. Could you provide...
We would like to have an implementation of the following paper: [Image Compression with Product Quantized Masked Image Modeling](https://arxiv.org/abs/2212.07372) Alaaeldin El-Nouby, Matthew J. Muckley, Karen Ullrich, Ivan Laptev, Jakob Verbeek,...
Currently [our implementation of FID/256](https://github.com/facebookresearch/NeuralCompression/blob/main/neuralcompression/metrics/_update_patch_fid.py) relies on the `torchmetrics` FID, which includes image interpolation. For images of size 256, this doesn't cause a huge difference, but many standard methods don't...
As seen in this automated test, there is a bug in the FID calculation with scipy 1.11.2: https://github.com/facebookresearch/NeuralCompression/actions/runs/5905240241/job/16018960652 The bug produces a large imaginary component from the `sqrtm` operation...
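For anyone reproducing this: the failure mode is the classic one in the Fréchet distance, where `scipy.linalg.sqrtm` of the covariance product comes back complex when the product is near-singular. A minimal sketch of the core computation with the common workaround (the helper name and the epsilon retry are illustrative, borrowed from the widely used pytorch-fid recipe, not necessarily what this repo does):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Fréchet distance between two Gaussians (the core of FID).

    FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2)).
    sqrtm of a nearly singular product can return non-finite or complex
    values; we retry with a small diagonal offset, then discard the
    (numerically tiny) imaginary part.
    """
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop small numerical imaginary components
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)
```

Whether the imaginary part in the failing run is truly negligible (and safe to drop) or symptomatic of a genuinely ill-conditioned covariance is exactly what the bug report needs to establish.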