FAT_Quantization
PyTorch implementation of FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation
Results
FAT_Quantization issues
What's the accuracy drop when training a 4-bit mobilenet_v2 from full precision, compared to initializing it from the 5-bit model?
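For context, the question contrasts two initialization strategies for 4-bit training: starting from full-precision weights versus warm-starting from an already-trained 5-bit checkpoint. Below is a minimal PyTorch sketch of the second strategy, assuming the backbone state dict is shared across bit-widths; the checkpoint path is hypothetical and torchvision's mobilenet_v2 stands in for the repo's quantized model, whose constructor is not shown here.

```python
import torch
import torchvision.models as models

# Stand-in for the repo's 4-bit quantized MobileNetV2 (actual class not shown above).
model = models.mobilenet_v2(num_classes=1000)

# Strategy A: initialize from full-precision ImageNet weights.
fp = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.load_state_dict(fp.state_dict())

# Strategy B: initialize from a previously trained 5-bit checkpoint
# (path is hypothetical; adapt to however the repo saves checkpoints).
ckpt = torch.load("checkpoints/mobilenet_v2_5bit.pth", map_location="cpu")
# strict=False tolerates quantizer buffers whose keys or shapes may
# differ between the 5-bit and 4-bit configurations.
model.load_state_dict(ckpt.get("state_dict", ckpt), strict=False)
```

After either initialization, the 4-bit model would be fine-tuned as usual; the accuracy gap between the two strategies is what the issue asks the authors to quantify.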