LLaVA
[Usage] KeyError: 'LlavaConfig' when evaluating with model_vqa.py
Describe the issue
Hi! I'm attempting to re-train LLaVA on some of my own data, but to start I've just been training on the repo's data. Pretraining and finetuning (with LoRA) worked without trouble, but when I try to evaluate my LoRA-finetuned model, I run into an error.
python model_vqa.py --model-path checkpoints/llava-v1.5-13b-lora/ --question-file playground/data/coco2014_val_qa_eval/qa90_questions.jsonl --image-folder playground/data/coco2014_val_qa_eval/val2014/ --answers-file my_answers.jsonl --model-base liuhaotian/llava-v1.5-13b-lora
This gives the following error:
/opt/conda/lib/python3.11/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. ")
/opt/conda/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
[2024-01-25 18:05:55,136] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
model_path checkpoints/llava-v1.5-13b-lora/, model_name llava-v1.5-13b-lora, model_base liuhaotian/llava-v1.5-13b-lora
Traceback (most recent call last):
  File "/mnt/ceph_rbd/LLaVA/model_vqa.py", line 114, in <module>
    ...
KeyError: 'LlavaConfig'
Do you have any suggestions on how to fix this error?
@emmaking-smith Did you manage to fix this?
Has anyone fixed it?
Yes, I think the solution is as follows. I found the answer in a previous issue: https://github.com/haotian-liu/LLaVA/issues/963.
The key here is that --model-base is NOT liuhaotian/llava-v1.5-13b-lora but lmsys/vicuna-13b-v1.5, which you can confirm in the config.json file that is saved with the checkpoint after training. Passing the LoRA checkpoint itself as --model-base presumably makes the plain Hugging Face Auto classes try to resolve LLaVA's config class, which they don't know about, hence the KeyError: 'LlavaConfig'.
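With that change, the evaluation command from the original post would look like this (all other paths unchanged):

python model_vqa.py --model-path checkpoints/llava-v1.5-13b-lora/ --question-file playground/data/coco2014_val_qa_eval/qa90_questions.jsonl --image-folder playground/data/coco2014_val_qa_eval/val2014/ --answers-file my_answers.jsonl --model-base lmsys/vicuna-13b-v1.5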
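If you're not sure which base model a checkpoint was trained from, a quick sanity check is to read its config.json directly. This is just a sketch: in my experience the base model is recorded under _name_or_path, but the exact field can vary between transformers/PEFT versions, so inspect the whole dict if that key is missing:

import json

# Path to the LoRA checkpoint directory from the command above.
with open("checkpoints/llava-v1.5-13b-lora/config.json") as f:
    config = json.load(f)

# "_name_or_path" usually records the model the checkpoint was initialized
# from (e.g. "lmsys/vicuna-13b-v1.5"). The field name is an assumption here;
# fall back to printing the full config if it isn't present.
print(config.get("_name_or_path", config))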