smoothquant topic
smoothquant repositories
neural-compressor (2.2k stars, 254 forks, 24 watchers)
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
llmc (308 stars, 32 forks)
[EMNLP 2024 Industry Track] Official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
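Both repositories support the SmoothQuant technique that this topic refers to: activation outliers are migrated into the weights through a per-channel scale before INT8 quantization, so that activations become easier to quantize while the layer output is unchanged. The sketch below illustrates that smoothing step in plain PyTorch; the function name, alpha value, and toy shapes are illustrative assumptions and are not taken from either repository's API.

```python
import torch

def smooth_linear(x_absmax, weight, alpha=0.5):
    """Fold a per-input-channel smoothing scale into a linear layer.

    x_absmax: per-channel max |activation| collected on calibration data, shape [in_features]
    weight:   linear weight, shape [out_features, in_features]
    Returns the smoothed weight and the scale to divide activations by.
    (Toy sketch of the SmoothQuant transform; alpha=0.5 is the commonly cited default.)
    """
    w_absmax = weight.abs().amax(dim=0)  # per-input-channel weight range
    scale = (x_absmax.clamp(min=1e-5) ** alpha) / (w_absmax.clamp(min=1e-5) ** (1 - alpha))
    smoothed_weight = weight * scale     # (X / s) @ (s * W)^T equals X @ W^T exactly
    return smoothed_weight, scale

# Toy usage with made-up shapes and pretend calibration statistics
weight = torch.randn(8, 16)
x_absmax = torch.rand(16) * 10
w_smooth, s = smooth_linear(x_absmax, weight)
x = torch.randn(4, 16)
assert torch.allclose(x @ weight.t(), (x / s) @ w_smooth.t(), atol=1e-5)
```

In practice the division by `s` is folded into the previous layer (e.g. a LayerNorm) rather than applied at runtime, and the smoothed weights and rescaled activations are then quantized to INT8; the listed toolkits automate that folding and the calibration pass.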