qa-lora
Official PyTorch implementation of QA-LoRA
I have already replaced auto_gptq/utils/peft_utils.py with the given script (peft_utils.py). While attempting to execute qalora.py with the default setup, it gives the following error (since target_modules=None, line 315 of qalora.py): auto_gptq/utils/peft_utils.py", line 409, in...
Want to know how to train using AWQ+LoRA? So far I haven't found any research supporting training of the LoRA module with AWQ.
Dear Sir, thanks for sharing this great work. I followed the instructions in readme.md, using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) to quantize the llama-2-7b-chat model, and then applied LoRA finetuning, but I...
Hi, I am trying to reproduce the LLaMA-7B finetuning on MMLU. The maximum 5-shot eval accuracy I get is 36.5% with 4 bits and 32.8% with 3 bits. Could you please kindly specify your...
Hi, I am trying to train a 3-bit llama7b:
```python
model = AutoGPTQForCausalLM.from_quantized(
    args.model_path,
    device_map='auto',
    max_memory=max_memory,
    trust_remote_code=args.trust_remote_code,
    inject_fused_attention=False,
    inject_fused_mlp=False,
    use_triton=True,
    warmup_triton=False,
    trainable=True,
)
```
It reports an error when trying...
Hello! The compute precision of QLoRA training is float16; what is the compute precision of QA-LoRA training? My finetuning of TechGPT-7b succeeded with QLoRA, but with QA-LoRA it always reported the...
I finetuned https://huggingface.co/TheBloke/Llama-2-7B-GPTQ on a 4090 using the code from this repo and modified the group_size in peft_utils.py, but it does not seem to converge. Only passing the learning rate = 3e-05 to...
Hello, I'm curious why AB can be merged into the zero point; can you give a detailed derivation? As far as I know, LoRA can fuse AB into W, so why can...
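A toy numeric sketch of the idea behind this question (not code from this repo, and hedged against the QA-LoRA paper's exact notation): because QA-LoRA feeds A the group-wise *average* of the input, the LoRA correction contributes the same constant to every dequantized weight inside a group, and a constant per-group shift is exactly what a zero-point change expresses. Ordinary LoRA lacks this group structure, which is why its AB must be fused into W itself.

```python
import random

# One output channel, G weight groups of size L, 4-bit asymmetric quantization:
# w_i = s_g * (q_i - z_g) for i in group g. The LoRA branch sees the group-wise
# mean of x, so its output is sum_g ba[g] * mean(x over group g).
random.seed(0)
L, G = 4, 3
D = L * G

x = [random.uniform(-1, 1) for _ in range(D)]
q = [random.randint(0, 15) for _ in range(D)]      # integer weights
s = [random.uniform(0.01, 0.1) for _ in range(G)]  # per-group scales
z = [random.randint(0, 15) for _ in range(G)]      # per-group zero points
ba = [random.uniform(-0.5, 0.5) for _ in range(G)] # one (B@A) entry per group

def dequant(zp):
    return [s[i // L] * (q[i] - zp[i // L]) for i in range(D)]

# Quantized forward pass plus the group-pooled LoRA term.
base = sum(w * xi for w, xi in zip(dequant(z), x))
lora = sum(ba[g] * sum(x[g * L:(g + 1) * L]) / L for g in range(G))
y_lora = base + lora

# Merged model: shift each zero point by -ba[g] / (L * s[g]); this adds
# exactly ba[g] / L to every dequantized weight in group g.
z_merged = [z[g] - ba[g] / (L * s[g]) for g in range(G)]
y_merged = sum(w * xi for w, xi in zip(dequant(z_merged), x))

print(abs(y_lora - y_merged) < 1e-9)  # the two forward passes agree
```

The group sizes, dtypes, and variable names here are illustrative; the point is only that the pooled-input LoRA term is algebraically a per-group zero-point offset.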
Hi team, thanks for the help. Is finetuning vision foundation models like OWL-ViT and Grounding DINO possible? Is any reference code available?
Hi,
1. When I run the command on 8 GPUs:
```
python3 qalora.py --model_path $llama_7b_4bit_g32
```
it shows the error:
```
File "/home/shawn/anaconda3/envs/qalora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 830, in forward
logits =...
```