How to fine-tune LoRA on an HQQ-quantized model?
Feature request
How can I fine-tune a LoRA adapter on an HQQ-quantized model?
Motivation
I would like to fine-tune LoRA adapters on HQQ-quantized models.
Your contribution
I am asking how to fine-tune LoRA on an HQQ-quantized model.
Load the model from transformers with `quantization_config=HqqConfig(...)`; the rest is the same as LoRA fine-tuning on any other quantized model. Here is an example:
https://github.com/huggingface/peft/blob/fb7f2796e5411ee86588447947d1fdd5b6395cad/tests/test_gpu_examples.py#L2386C28-L2428
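For convenience, here is a minimal sketch along the same lines (it requires the `hqq` package to be installed; the model name and LoRA hyperparameters below are illustrative choices, not taken from the linked test):

```python
# Requires: pip install hqq (plus transformers, peft, accelerate)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-125m"  # any causal LM; a small one is used here for illustration

# HQQ quantization is applied inside from_pretrained via quantization_config
quant_config = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=quant_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Standard LoRA setup; nothing HQQ-specific is needed here
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # illustrative; adjust to your architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...train with Trainer/SFTTrainer as you would any other PEFT model
```

Training then proceeds as with any other PEFT model: when `get_peft_model` encounters HQQ-quantized linear modules, PEFT dispatches to its HQQ-aware LoRA layers, so the quantized base weights stay frozen and only the adapter weights are trained.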