Tinygrad Quantization Support [WIP]
What I did:
- Define custom layers for affine-quantized models, with integer weights and float16 scales and biases (zero-point correction)
- Load an MLX-Community quantized model and unpack the weights.
- Write the forward logic for quantized layers, following this paper (see section 2.3); a rough sketch follows this list
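A minimal sketch of what that forward pass can look like in tinygrad, assuming MLX-style affine quantization where the stored biases already absorb the zero-point correction, i.e. w ≈ scales * q + biases applied per group along the input dimension. The class name, the `group_size` default, and the float32 upcast are illustrative assumptions, not the PR's actual code:

```python
# Illustrative sketch only: an affine-quantized linear layer that dequantizes
# MLX-style weights (scale * q + bias per group) and runs the matmul in float32.
from tinygrad import Tensor, dtypes

class AffineQuantizedLinear:
    def __init__(self, weight: Tensor, scales: Tensor, biases: Tensor, group_size: int = 64):
        # weight: integer codes of shape (out_features, in_features)
        # scales/biases: float16, one value per group of `group_size` input elements
        self.weight, self.scales, self.biases = weight, scales, biases
        self.group_size = group_size

    def __call__(self, x: Tensor) -> Tensor:
        out_features, in_features = self.weight.shape
        # Dequantize per group: w = scale * q + bias (bias carries the zero-point correction).
        q = self.weight.cast(dtypes.float32).reshape(out_features, -1, self.group_size)
        s = self.scales.cast(dtypes.float32).reshape(out_features, -1, 1)
        b = self.biases.cast(dtypes.float32).reshape(out_features, -1, 1)
        w = (s * q + b).reshape(out_features, in_features)
        # Do the matmul in float32 for now (see the float16 overflow note below).
        return (x.cast(dtypes.float32) @ w.T).cast(x.dtype)
```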
Todo:
- [ ] write tests; test with multiple nodes and different Llama models
- [x] support 4-bit quantization (a rough unpacking sketch follows this list)
- [ ] do the forward math in integer arithmetic (see section 2.2 of the mentioned paper)
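For the 4-bit case, the packed weights can be unpacked on the CPU before being handed to tinygrad. A hypothetical sketch with numpy, assuming the MLX convention of eight 4-bit values packed into each uint32 with the first element in the least significant bits (the function name and layout are assumptions, not the PR's code):

```python
# Hypothetical sketch: unpack packed 4-bit weight codes from uint32 with numpy.
import numpy as np

def unpack_4bit(packed: np.ndarray) -> np.ndarray:
    """packed: uint32 array of shape (out_features, in_features // 8)."""
    shifts = np.arange(8, dtype=np.uint32) * 4        # nibble offsets 0, 4, ..., 28
    nibbles = (packed[..., None] >> shifts) & 0xF     # -> (out_features, in_features // 8, 8)
    return nibbles.reshape(*packed.shape[:-1], -1).astype(np.uint8)
```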
With this first commit, you can run `exo --run-model="llama-3.2-1b-8bit"` with the tinygrad backend and an "mlx-community" model.
Also, I'm doing the math in float32 right now, which adds overhead. When I change it to float16, I think something overflows and the model outputs nothing. I will fix this.
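For context, float16 tops out at 65504, so an intermediate value in the dequantize-and-matmul path can saturate to inf and poison the output. A tiny numpy illustration of the suspected failure mode (not from the PR):

```python
import numpy as np

# float16 has a max finite value of 65504, so moderate intermediate sums already saturate.
print(np.float16(60000) + np.float16(10000))  # inf
print(np.float32(60000) + np.float32(10000))  # 70000.0
```

One common mitigation is to keep the weights in float16 but upcast only the matmul accumulation to float32.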
This is a great start - I tested this and it works. That's awesome because it means we can support any MLX model in tinygrad.
Are you sending parameters to the GPU in float32, or are they being sent in fp8? Just wondering what kind of speed to expect here, and how close this gets to MLX quantized.