Bunny
Training the model throws an error after quantization
When I use 8-bit quantization during pre-training, the code throws an error.
You cannot fine-tune a purely quantized model. To fine-tune correctly, attach trainable adapters (e.g., LoRA) on top of the quantized model. See https://huggingface.co/docs/transformers/peft for more details.
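
A minimal sketch of this pattern, assuming a causal LM backbone; the checkpoint name, target module names, and LoRA hyperparameters below are illustrative placeholders, not Bunny's actual configuration:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 8-bit quantization.
model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",  # placeholder: substitute your actual checkpoint
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Prepare the quantized model for k-bit training (casts norm layers,
# enables input gradients, etc.).
model = prepare_model_for_kbit_training(model)

# Attach trainable LoRA adapters; only these small matrices receive
# gradients, while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adjust to your backbone's module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters only, typically <1% of weights
```

The resulting `model` can then be passed to your trainer as usual; the error goes away because gradients flow only through the adapter weights, not the quantized base weights.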