model-compression topic
qsparse
Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules
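A minimal sketch of what joint pruning plus quantization looks like in plain PyTorch. This is a generic illustration of the concept, not qsparse's actual API; the `fake_quantize` helper and the hook wiring are assumptions for demonstration only.

```python
# Generic joint pruning + (fake) quantization on one layer in plain PyTorch.
# NOT qsparse's API -- just the underlying idea.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)

# Prune 50% of the weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)

def fake_quantize(x: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Uniform fake quantization: snap values to a bits-wide grid, keep float dtype."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

# Quantize activations on the way out of the layer (hook return replaces the output).
layer.register_forward_hook(lambda mod, inp, out: fake_quantize(out))

y = layer(torch.randn(4, 128))  # sparse weights + quantized activations
```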
Why-the-State-of-Pruning-so-Confusing
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
DepthShrinker
[ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman K...
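The observation behind DepthShrinker-style merging is that once the nonlinearity between two linear operations is removed, they collapse into a single layer that runs faster on real hardware. A toy sketch of that collapse, not the paper's training procedure:

```python
# Two stacked linear layers with no nonlinearity in between are
# mathematically one linear layer: W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2).
import torch
import torch.nn as nn

fc1, fc2 = nn.Linear(256, 512), nn.Linear(512, 256)

merged = nn.Linear(256, 256)
with torch.no_grad():
    merged.weight.copy_(fc2.weight @ fc1.weight)         # W2 @ W1
    merged.bias.copy_(fc2.weight @ fc1.bias + fc2.bias)  # W2 b1 + b2

x = torch.randn(8, 256)
assert torch.allclose(fc2(fc1(x)), merged(x), atol=1e-5)
```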
LGTM
[ACL 2023] Code for the paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation” (https://arxiv.org/abs/2305.09651)
Good-DA-in-KD
[NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective
ESNAC
Learnable Embedding Space for Efficient Neural Architecture Compression
SNIP-it
This repository is the official implementation of the paper "Pruning via Iterative Ranking of Sensitivity Statistics" and implements novel pruning / compression algorithms for deep learning / neural net...
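For flavor, here is a SNIP-style connection-sensitivity score (|weight × gradient|), the kind of statistic the paper ranks iteratively. A minimal sketch of the general technique, not the repository's code; the model, data shapes, and 10% keep ratio are illustrative assumptions.

```python
# One-shot sensitivity scoring: rank weights by |w * dL/dw| and keep the top-k.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))

loss = F.cross_entropy(model(x), y)
loss.backward()

# Saliency of each weight: magnitude of loss change per unit weight change.
scores = torch.cat([(p * p.grad).abs().flatten()
                    for p in model.parameters() if p.dim() > 1])

# Keep the 10% most sensitive weights, zero out the rest.
k = int(0.1 * scores.numel())
threshold = scores.topk(k).values.min()
with torch.no_grad():
    for p in model.parameters():
        if p.dim() > 1:
            p.mul_(((p * p.grad).abs() >= threshold).float())
```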
Awesome-EdgeAI
Resources for our survey paper "A Systematic Review of AI Deployment on Resource-Constrained Edge Devices: Challenges, Techniques, and Applications"
TPP
[ICLR'23] Trainability Preserving Neural Pruning (PyTorch)
CHIP_NeurIPS2021
Code for CHIP: CHannel Independence-based Pruning for Compact Neural Networks (NeurIPS 2021).
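As a generic stand-in for what structured channel pruning looks like in code, the sketch below zeroes whole filters by L1 norm using PyTorch's built-in pruning utilities. Note that CHIP's actual criterion is a channel-independence score, not the L1 norm used here.

```python
# Structured channel pruning: remove whole output channels at once.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)

# Zero 25% of output channels with the smallest L1 filter norm
# (dim=0 indexes output channels of the weight tensor).
prune.ln_structured(conv, name="weight", amount=0.25, n=1, dim=0)

out = conv(torch.randn(1, 64, 32, 32))  # pruned channels carry only their bias
```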