Qwen-VL
Question about the test data format for evaluate_grounding.py
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in FAQ?
- [X] I have searched FAQ
Current Behavior
When I evaluate with my own test data using Qwen-VL\eval_mm\evaluate_grounding.py, the data cannot be read correctly. My data format is:

```
{"id": "identity_0", "conversations": [{"from": "user", "value": "Picture 1: /datasets/data_qwen/NIH_vg/images/test/00013118_008.png\nOutline the location of the disease in the image."}, {"from": "assistant", "value": "Atelectasis
/datasets/data_qwen/NIH_vg/images/test/00029817_009.png\nOutline the location of the disease in the image."}, {"from": "assistant", "value": "Atelectasis
/datasets/data_qwen/NIH_vg/images/test/00012515_002.png\nPlease provide a disease visual grounding of the picture."}, {"from": "assistant", "value": "Atelectasis
/datasets/data_qwen/NIH_vg/images/test/00007557_026.png\nGenerate a disease visual grounding for this image."}, {"from": "assistant", "value": "Atelectasis
/datasets/data_qwen/NIH_vg/images/test/00009669_003.png\nPlease provide a disease visual grounding of the picture."}, {"from": "assistant", "value": "Atelectasis
```
Expected Behavior
The data format I am using should be read and evaluated correctly.
Steps To Reproduce
No response
Environment
Python: 3.8
Transformers: 4.32.0
PyTorch: 2.0.0
Anything else?
No response
For evaluation, use test.jsonl. The format should look like this:

```
{"image": "path/to/image", "bbox": [[279, 134, 358, 231], [28, 93, 121, 221], [0, 371, 99, 497]], "height": 512, "width": 512}
```

As for the prompt used during evaluation, you can find the prompt setting in evaluate_grounding.py.
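As a rough illustration (not code from the repository), conversation-style records like the ones above could be converted into this test.jsonl layout with something like the sketch below; the `Picture 1:` regex and the `bbox_lookup` mapping of image paths to ground-truth pixel boxes are assumptions made for illustration:

```python
import json
import re

from PIL import Image


def conversations_to_test_jsonl(records, bbox_lookup, out_path):
    """Write grounding records in the test.jsonl layout shown above:
    {"image": ..., "bbox": [[x1, y1, x2, y2], ...], "height": ..., "width": ...}.

    `bbox_lookup` is assumed to map each image path to its ground-truth
    pixel-space boxes from your own annotations.
    """
    with open(out_path, "w") as f:
        for rec in records:
            user_turn = rec["conversations"][0]["value"]
            # Pull the image path out of the "Picture 1: <path>" prompt.
            match = re.search(r"Picture 1:\s*(\S+\.(?:png|jpg|jpeg))", user_turn)
            if match is None:
                continue
            image_path = match.group(1)
            width, height = Image.open(image_path).size
            f.write(json.dumps({
                "image": image_path,
                "bbox": bbox_lookup[image_path],
                "height": height,
                "width": width,
            }) + "\n")
```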
@danjuan-77 Thanks for the reply. I have two more questions for you. First, does grounding fine-tuning also need "height": 512, "width": 512? I did not include these fields when I fine-tuned earlier. Second, what is the test data format for evaluate_vqa.py? Much appreciated!
For the data format, see the notes in eval_mm/EVALUATION.md; it lists download links for the Qwen evaluation data that you can refer to. Grounding fine-tuning does need the image width and height, because Qwen has to normalize the coordinates; you can see this around line 246 of evaluate_grounding.py.
I have only fine-tuned for detection-style tasks, so I am not familiar with VQA. eval_mm/EVALUATION.md should also have download links for the relevant data; you can download it and take a look.
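For context, Qwen-VL expresses boxes on a 0-999 grid, which is why pixel coordinates have to be rescaled with the image width and height. The sketch below only illustrates that rescaling; it is not the exact code at line 246 of evaluate_grounding.py, and the function name and truncation to int are assumptions:

```python
def normalize_box(box, width, height, bins=1000):
    """Map a pixel-space [x1, y1, x2, y2] box onto a 0..(bins-1) grid.

    Without "height" and "width" in the record, this rescaling (and the
    inverse mapping from model output back to pixel space) is impossible.
    """
    x1, y1, x2, y2 = box
    return [
        int(x1 / width * (bins - 1)),
        int(y1 / height * (bins - 1)),
        int(x2 / width * (bins - 1)),
        int(y2 / height * (bins - 1)),
    ]


# Example: the first box from the test.jsonl sample on a 512x512 image.
print(normalize_box([279, 134, 358, 231], width=512, height=512))
# -> [544, 261, 698, 450]
```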
@danjuan-77 Great, thanks! I am looking at it right now.
Hi, may I ask what command you use when evaluating with Qwen-VL\eval_mm\evaluate_grounding.py?
The command I use to test evaluate_caption.py is:

```shell
ds="nocaps"
checkpoint=/data/checkpoint/qwen/Qwen-VL-Chat
python -m torch.distributed.run \
    --nproc_per_node 1 \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr ${MASTER_ADDR:-127.0.0.1} \
    --master_port 12345 \
    evaluate_caption.py \
    --checkpoint $checkpoint \
    --dataset $ds \
    --batch-size 8 \
    --num-workers 2
```
The dataset has already been downloaded, but I get an error saying the image path does not exist:

```
Traceback (most recent call last):
  File "/data/checkpoint/qwen/Qwen-VL-Chat/eval_mm/evaluate_caption.py", line 143, in
    pred = model.generate(
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 1058, in generate
    return super().generate(
  File "/root/conda/envs/llm/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/conda/envs/llm/lib/python3.9/site-packages/transformers/generation/utils.py", line 1722, in generate
    return self.beam_sample(
  File "/root/conda/envs/llm/lib/python3.9/site-packages/transformers/generation/utils.py", line 3350, in beam_sample
    outputs = self(
  File "/root/conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 848, in forward
    transformer_outputs = self.transformer(
  File "/root/conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 565, in forward
    images = self.visual.encode(images)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/visual.py", line 422, in encode
    image = Image.open(image_path)
  File "/root/conda/envs/llm/lib/python3.9/site-packages/PIL/Image.py", line 3247, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'data/nocaps/val/0013ea2087020901.jpg'
```
Have you run into this before?
For the error FileNotFoundError: [Errno 2] No such file or directory: 'data/nocaps/val/0013ea2087020901.jpg', the file path has to match: change it to the path where you actually downloaded and saved the data.
@yihp The path has already been updated, in the ds_collections section of evaluate_caption.py. Following mkdir -p data/nocaps && cd data/nocaps, the annotation path is set to /data/checkpoint/qwen/Qwen-VL-Chat/data/nocaps/nocaps_val.json. But the error is about data/nocaps/val/0013ea2087020901.jpg, and the downloaded nocaps folder has no val subdirectory, only nocaps_val.json.
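As a rough way to narrow this down, the sketch below checks which images referenced by the annotation actually exist under the directory the traceback points at. It assumes the annotation is COCO-style with an "images" list of "file_name" entries (the standard nocaps annotation layout); that assumption is not taken from evaluate_caption.py itself:

```python
import json
import os

ann_path = "/data/checkpoint/qwen/Qwen-VL-Chat/data/nocaps/nocaps_val.json"
image_root = "data/nocaps/val"  # the directory the traceback shows the script reading from

with open(ann_path) as f:
    ann = json.load(f)

# Assumption: COCO-style annotation with an "images" list of {"file_name": ...} entries.
file_names = [img["file_name"] for img in ann.get("images", [])]
missing = [fn for fn in file_names if not os.path.exists(os.path.join(image_root, fn))]

print(f"{len(missing)} of {len(file_names)} referenced images are missing under {image_root}")
# If every file is missing, the nocaps validation images themselves still need to be
# placed under data/nocaps/val/; the JSON annotation contains only captions, not images.
```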