
support quantized models

Open tharvik opened this issue 1 year ago • 0 comments

Currently we use float32 tensors pretty much everywhere, which yields quite large models. After discussion with @martinjaggi: training is hard to do without float32, but inference can probably use uint8 tensors, shrinking trained models by up to 4x.

Note: check that the model still behaves correctly after quantization.
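As a rough sketch of the idea (not DISCO's actual API, and using NumPy rather than the project's tensor library): affine uint8 quantization stores each float32 weight in one byte plus a per-tensor scale and offset, giving the 4x size reduction mentioned above, and the round-trip error stays within half a quantization step, which is what the behavioral check after quantization would need to tolerate.

```python
import numpy as np

def quantize_uint8(w: np.ndarray):
    """Affine (asymmetric) quantization of a float32 tensor to uint8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_uint8(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Recover approximate float32 values for inference."""
    return q.astype(np.float32) * scale + lo

weights = np.random.randn(1000).astype(np.float32)
q, scale, lo = quantize_uint8(weights)
restored = dequantize_uint8(q, scale, lo)

# uint8 storage is 4x smaller than float32
assert q.nbytes * 4 == weights.nbytes
# round-trip error is bounded by half a quantization step
assert np.max(np.abs(restored - weights)) <= scale / 2 + 1e-6
```

A per-tensor (or per-layer) scale like this is the simplest scheme; checking the quantized model's predictions against the float32 model on a validation set would cover the "still behaving correctly" note.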

tharvik · Oct 24 '24