evaluation
Describe the bug

I am evaluating MiniGPT-v2 finetuned on GQA. This is minigptv2_benchmark_evaluation.yaml:

```
model:
  arch: minigpt_v2
  model_type: pretrain
  max_txt_len: 500
  end_sym: ""
  low_resource: False
  prompt_template: '[INST] {} [/INST]'
  llama_model: "/data1/khw/minigpt-v2/Llama-2-7b-chat-hf"
  ckpt: "/data1/khw/minigpt-v2/output/gqa_20epoch/20240323123/checkpoint_17.pth"
  lora_r: 64
  lora_alpha: 16

datasets:
  gqa:
    batch_size: 6
    vis_processor:
      train:
        name: "blip2_image_train"
        image_size: 448
    text_processor:
      train:
        name: "blip_caption"
    sample_ratio: 50

evaluation_datasets:
  gqa:
    eval_file_path: /data1/khw/gqa/
    img_path: /data1/khw/gqa/gqa_images/train
    max_new_tokens: 20
    batch_size: 10
  okvqa:
    eval_file_path: /data1/khw/okvqa
    img_path: /data1/khw/coco/images/train2017/train2017
    max_new_tokens: 20
    batch_size: 10

run:
  task: image_text_pretrain
  name: minigptv2_evaluation
  save_path: /data1/khw/minigpt-v2/evaluation/gqa
```

My /data1/khw/gqa directory looks like this:

```
├── gqa
│   ├── test_balanced_questions.json
│   ├── testdev_balanced_questions.json
│   └── gqa_images
```
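Before running the evaluation, I use a quick check like the sketch below (my own script, not part of the repo) to confirm that the paths referenced in `evaluation_datasets` exist; the annotation file names are only my assumption about what eval_vqa.py expects under `eval_file_path`:

```
import os

# Paths taken from evaluation_datasets in minigptv2_benchmark_evaluation.yaml.
gqa_eval_file_path = "/data1/khw/gqa/"
gqa_img_path = "/data1/khw/gqa/gqa_images/train"

# Assumed annotation file names; not confirmed against eval_vqa.py.
candidates = [
    os.path.join(gqa_eval_file_path, "testdev_balanced_questions.json"),
    os.path.join(gqa_eval_file_path, "test_balanced_questions.json"),
    gqa_img_path,
]

for path in candidates:
    print(f"{path}: {'exists' if os.path.exists(path) else 'MISSING'}")
```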
I run:

```
CUDA_VISIBLE_DEVICES=1,2 torchrun --master-port 0 --nproc_per_node 2 eval_vqa.py \
    --cfg-path /home/khw/vlm/MiniGPT-4/eval_configs/minigptv2_benchmark_evaluation.yaml --dataset okvqa,gqa
```
It shows:

```
Traceback (most recent call last):
  File "/home/khw/vlm/MiniGPT-4/eval_scripts/eval_vqa.py", line 153, in
```