
Error when running the multimodal script rum_mm_exp.py

thunderbolt-fire opened this issue 7 months ago · 1 comment

(flashrag) (base) root@d9f0ae0008bf:~/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/examples/run_mm# python rum_mm_exp.py
Index is empty!!
Loading dataset from: /root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/FlashRAG_Dataset/mmqa...
Loading dev dataset from: /root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/FlashRAG_Dataset/mmqa/dev.parquet...
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████| 5/5 [00:04<00:00,  1.23it/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading bm25 retriever...
No `contents` field found in corpus, using `text` instead.
No `contents` field found in corpus, using `text` instead.
Loading jina-clip-v2 retriever...
arch:  jinaclipmodel
Loading JinaCLIPModel from /root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/models/jina-clip-v2
/root/.cache/huggingface/modules/transformers_modules/jinaai/jina-clip-implementation/51f02de9f2cf8afcd3bac4ce996859ba96f9f8e9/modeling_clip.py:140: UserWarning: Flash attention is not installed. Check https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#installation-and-features for installation instructions, disabling
  warnings.warn(
/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/flashrag/generator/utils.py:36: UserWarning: max_tokens (200) and max_new_tokens (128) are different. Using max_new_tokens value as it has priority.
  warnings.warn(
Generation process:   0%|                                                                          | 0/230 [00:00<?, ?it/s]/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:631: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:653: UserWarning: `do_sample` is set to `False`. However, `top_k` is set to `1` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_k`.
  warnings.warn(
Generation process:   0%|                                                                          | 0/230 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/examples/run_mm/rum_mm_exp.py", line 197, in <module>
    func(args)
  File "/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/examples/run_mm/rum_mm_exp.py", line 173, in mmqa
    dataset = pipeline.naive_run(dataset)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/flashrag/pipeline/mm_pipeline.py", line 54, in naive_run
    pred_answer_list = self.generator.generate(input_prompts)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/flashrag/generator/multimodal_generator.py", line 447, in generate
    output_responses.extend(self.inference_engine.generate(batch_prompts, **generation_params))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/siton-data-0553377b2d664236bad5b5d0ba8aa419/workspace/FlashRAG/flashrag/generator/multimodal_generator.py", line 108, in generate
    outputs = self.model.generate(
              ^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/utils.py", line 2465, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/utils.py", line 3424, in _sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1764, in prepare_inputs_for_generation
    model_inputs = super().prepare_inputs_for_generation(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/utils.py", line 507, in prepare_inputs_for_generation
    inputs_embeds, input_ids = self._cache_dependant_input_preparation(
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/flashrag/lib/python3.11/site-packages/transformers/generation/utils.py", line 406, in _cache_dependant_input_preparation
    or (cache_position[-1] >= input_ids.shape[1])  # Exception 3
        ~~~~~~~~~~~~~~^^^^
IndexError: index -1 is out of bounds for dimension 0 with size 0

The generator used is Qwen2-VL-7B.

— thunderbolt-fire, May 12 '25 02:05

On my side, testing with the Qwen2-VL-7B-Instruct version works fine. Looking at the traceback, the problem seems to be a transformers version mismatch. You could try transformers 4.48.3, and the model should also be the Instruct version.
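
In practical terms, that workaround boils down to two steps: pin transformers to 4.48.3 and point the generator at the Instruct checkpoint. A minimal sketch (the Hugging Face repo id is Qwen/Qwen2-VL-7B-Instruct; the --local-dir target below is just an example path, adjust it to your setup):

# pin the transformers version suggested above
pip install "transformers==4.48.3"
# fetch the Instruct variant of the model (example local directory)
huggingface-cli download Qwen/Qwen2-VL-7B-Instruct --local-dir models/Qwen2-VL-7B-Instruct
# sanity check: should print 4.48.3
python -c "import transformers; print(transformers.__version__)"

Then update the generator model path in your experiment configuration to the Instruct directory before rerunning rum_mm_exp.py.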

— lihai-zhao, May 19 '25 08:05