vector-quantize-pytorch
`bfloat16` cannot utilize some codes
When using FSQ with [8, 5, 5, 5] levels and training in bfloat16 via pytorch-lightning, codebook utilization tops out just below 50%, whereas with float32 training it approaches 100%.
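For reference, a minimal sketch of the setup I mean, following the FSQ usage from this repo's README; the autocast wrapper is my approximation of what lightning's `bf16-mixed` precision does, and the utilization measurement is just a rough one-batch estimate:

```python
import math
import torch
from vector_quantize_pytorch import FSQ

levels = [8, 5, 5, 5]                      # codebook size = 8 * 5 * 5 * 5 = 1000
quantizer = FSQ(levels)

x = torch.randn(1, 1024, len(levels))      # feature dim must match len(levels)

# roughly the bfloat16 mixed-precision path
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    xhat, indices = quantizer(x)

# fraction of the 1000 codes hit by this batch
utilization = indices.unique().numel() / math.prod(levels)
print(f"utilization: {utilization:.2%}")
```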
I don't know whether this is an issue with the implementation or just an inherent limitation of FSQ, but in either case I would suggest that this library force float32 for the quantization step.
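As a workaround on the user side, something like the following seems to avoid the problem; the `quantize_f32` helper is my own sketch (not an API of this library) that disables autocast, runs the quantizer in full precision, and casts the output back:

```python
import torch
from vector_quantize_pytorch import FSQ

quantizer = FSQ(levels=[8, 5, 5, 5])

def quantize_f32(quantizer, x):
    # hypothetical helper: run the quantization step in float32
    # regardless of the surrounding autocast context
    with torch.autocast(device_type=x.device.type, enabled=False):
        xhat, indices = quantizer(x.float())
    # cast the quantized output back to the ambient dtype (e.g. bfloat16)
    return xhat.to(x.dtype), indices
```

Something equivalent done inside the library's `FSQ.forward` would make this transparent to mixed-precision training loops.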