model-compression topic
condensa
Programmable Neural Network Compression
aquvitae
Knowledge Distillation Toolkit
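Toolkits like aquvitae wrap knowledge distillation training loops. As an illustration of the underlying idea (not aquvitae's API), here is a minimal NumPy sketch of the Hinton-style distillation loss: the KL divergence between temperature-softened teacher and student distributions, scaled by T^2.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax; higher T produces softer distributions
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on softened outputs, scaled by T^2
    # so gradients keep a consistent magnitude across temperatures
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

The loss is zero when student and teacher logits match and grows as their softened distributions diverge; in practice it is combined with the ordinary cross-entropy on hard labels.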
Awesome-Quantization-Papers
List of papers related to neural network quantization in recent AI conferences and journals.
BitPack
BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models.
HAWQ
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware deployment through TVM.
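Libraries like HAWQ and BitPack build on low-precision quantization. As a hedged sketch of the basic operation they refine (not either library's actual API), here is symmetric uniform quantization of a weight tensor to a given bit width:

```python
import numpy as np

def quantize_uniform(w, bits=4):
    # symmetric uniform quantization: map floats onto 2^bits integer levels
    qmax = 2 ** (bits - 1) - 1
    wmax = np.abs(w).max()
    scale = wmax / qmax if wmax > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # recover approximate float weights from the integer codes
    return q.astype(np.float32) * scale
```

The round-trip error is bounded by half a quantization step (scale / 2); mixed-precision schemes such as HAWQ's choose a different bit width per layer based on sensitivity.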
pytorch-weights_pruning
PyTorch Implementation of Weights Pruning
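The pruning repos above are mostly variants of magnitude pruning. As a minimal NumPy illustration (not this repo's code), here is unstructured magnitude pruning: zero out the smallest-magnitude fraction of a weight tensor.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # zero out the smallest-magnitude `sparsity` fraction of weights
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy()
    # threshold at the k-th smallest absolute value
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask
```

Real pruning pipelines apply this iteratively during fine-tuning and keep the zero mask fixed so the pruned weights stay pruned.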
nni
An open-source AutoML toolkit that automates the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
PocketFlow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
model-optimization
A toolkit for optimizing Keras and TensorFlow models for deployment, including quantization and pruning.
awesome-knowledge-distillation
Awesome Knowledge Distillation