
How to fine-tune LoRA with HQQ?

Open · NickyDark1 opened this issue 1 year ago · 1 comment

Feature request

How can I fine-tune a LoRA adapter on an HQQ-quantized model?

Motivation

How can I fine-tune a LoRA adapter on an HQQ-quantized model?

Your contribution

How can I fine-tune a LoRA adapter on an HQQ-quantized model?

NickyDark1 · May 21 '24 02:05

Load the model from transformers with `quantization_config=HqqConfig(...)`; the rest works the same as with other quantization methods. Here is an example:

https://github.com/huggingface/peft/blob/fb7f2796e5411ee86588447947d1fdd5b6395cad/tests/test_gpu_examples.py#L2386C28-L2428

BenjaminBossan · May 21 '24 09:05

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

github-actions[bot] · Jun 20 '24 15:06