# scale-adjusted-training
PyTorch implementation of Towards Efficient Training for Neural Network Quantization
## Introduction
This repo implements Scale-Adjusted Training (SAT) from *Towards Efficient Training for Neural Network Quantization*, including:
- Constant rescaling for DoReFa-style weight quantization (sketched below)
- Calibrated gradient PACT (CGPACT) for activation quantization (sketched below)
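
The sketch below illustrates how the two components can be written as PyTorch quantization layers. It is a minimal, self-contained example: the class names (`RoundSTE`, `ConstantRescaleDoReFaQuantize`, `CGPACT`), the choice of rescaling constant, and the calibrated-gradient formula for `alpha` are illustrative assumptions, not necessarily the exact code in this repo or the paper.

```python
# Hedged sketch of the two SAT components. The rescaling constant and the
# calibrated gradient for alpha are assumptions made for illustration.
import torch
import torch.nn as nn


class RoundSTE(torch.autograd.Function):
    """Uniform rounding to 2**num_bits - 1 levels with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, num_bits):
        levels = 2 ** num_bits - 1
        return torch.round(x * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None


class ConstantRescaleDoReFaQuantize(nn.Module):
    """DoReFa-style weight quantizer followed by a constant rescaling
    (assumed here to normalize by the std of the quantized tensor)."""

    def __init__(self, num_bits=4):
        super().__init__()
        self.num_bits = num_bits

    def forward(self, weight):
        # Standard DoReFa weight transform: map to [0, 1], quantize, map back to [-1, 1].
        w = torch.tanh(weight)
        w = w / (2.0 * w.abs().max().detach()) + 0.5
        w_q = 2.0 * RoundSTE.apply(w, self.num_bits) - 1.0
        # Constant rescaling (assumption): divide by the detached std of the
        # quantized weights so the effective weight variance stays stable.
        scale = w_q.detach().std() + 1e-8
        return w_q / scale


class CGPACTFunction(torch.autograd.Function):
    """PACT clipping + quantization with a calibrated gradient for alpha."""

    @staticmethod
    def forward(ctx, x, alpha, num_bits):
        levels = 2 ** num_bits - 1
        y = torch.clamp(x, min=0.0, max=alpha.item())
        y_q = torch.round(y / alpha * levels) / levels * alpha
        ctx.save_for_backward(x, alpha)
        ctx.levels = levels
        return y_q

    @staticmethod
    def backward(ctx, grad_output):
        x, alpha = ctx.saved_tensors
        levels = ctx.levels
        # Straight-through gradient for inputs inside the clipping range.
        inside = (x > 0) & (x < alpha)
        grad_x = grad_output * inside.float()
        # Calibrated gradient for alpha (assumption): clipped inputs contribute 1,
        # in-range inputs contribute the rounding residual q(x/alpha) - x/alpha.
        ratio = torch.clamp(x / alpha, 0.0, 1.0)
        residual = torch.round(ratio * levels) / levels - ratio
        grad_alpha_map = torch.where(x >= alpha, torch.ones_like(x), residual * inside.float())
        grad_alpha = (grad_output * grad_alpha_map).sum().reshape(alpha.shape)
        return grad_x, grad_alpha, None


class CGPACT(nn.Module):
    """Activation quantizer with a learnable clipping threshold alpha."""

    def __init__(self, num_bits=4, init_alpha=6.0):
        super().__init__()
        self.num_bits = num_bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x):
        return CGPACTFunction.apply(x, self.alpha, self.num_bits)
```

In a quantized network, a layer would apply `ConstantRescaleDoReFaQuantize` to its weight tensor inside `forward` and wrap its input activations in `CGPACT`.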
## TODO
- [x] constant rescaling DoReFaQuantize layer
- [x] CGPACT layer
- [ ] test with MobileNetV1
- [ ] test with MobileNetV2
- [ ] test with ResNet-50
## Acknowledgement
- https://github.com/marvis/pytorch-mobilenet
- https://github.com/tonylins/pytorch-mobilenet-v2
- https://github.com/ricky40403/PyTransformer/tree/hotfix