
[Bug]: qwen2-vl 7b, on vllm 0.8.1 & 0.8.2, sometimes (not deterministically but depends on data) I got: ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders

Open panjiacheng opened this issue 8 months ago • 16 comments

Your current environment

    completions: List[RequestOutput] = self.inference_engine.generate(
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 1072, in inner
    return fn(*args, **kwargs)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 465, in generate
    outputs = self._run_engine(use_tqdm=use_tqdm)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1375, in _run_engine
    step_outputs = self.llm_engine.step()
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/llm_engine.py", line 220, in step
    outputs = self.engine_core.get_output()
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 167, in get_output
    return self.engine_core.step()
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 195, in step
    output = self.model_executor.execute_model(scheduler_output)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 77, in execute_model
    output = self.collective_rpc("execute_model",
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 2255, in run_method
    return func(*args, **kwargs)
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1000, in execute_model
    inputs_embeds = self.model.get_input_embeddings(
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1303, in get_input_embeddings
    inputs_embeds = merge_multimodal_embeddings(
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 449, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders

🐛 Describe the bug

I have:

enforce_eager: false
enable_chunked_prefill: false

But I still get the ValueError.
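
For reference, a minimal sketch of how these options map onto the offline LLM API (model path, prompt, and image are placeholders, not my actual data):

from PIL import Image
from vllm import LLM, SamplingParams

# Placeholder inputs; the real failure is data-dependent.
image = Image.new("RGB", (448, 448))
prompt = (
    "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
    "Describe the image.<|im_end|>\n<|im_start|>assistant\n"
)

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",  # placeholder model path
    enforce_eager=False,
    enable_chunked_prefill=False,
)

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=256),
)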

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

panjiacheng avatar Mar 30 '25 08:03 panjiacheng

https://github.com/vllm-project/vllm/issues/15185 is a similar issue (but on Qwen2.5-VL)

panjiacheng avatar Mar 30 '25 08:03 panjiacheng

Can you provide the prompt and image to reproduce this bug?

Isotr0py avatar Mar 30 '25 08:03 Isotr0py

I got the same error.

DefTruth avatar Mar 31 '25 08:03 DefTruth

It seems we have to disable chunked prefill in V0 mode; V1 works fine with chunked prefill, but V0 fails.

ERROR 03-31 16:21:41 [engine.py:160] ValueError('Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders')
ERROR 03-31 16:21:41 [engine.py:160] Traceback (most recent call last):
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 03-31 16:21:41 [engine.py:160]     self.run_engine_loop()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 221, in run_engine_loop
ERROR 03-31 16:21:41 [engine.py:160]     request_outputs = self.engine_step()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 247, in engine_step
ERROR 03-31 16:21:41 [engine.py:160]     raise e
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 230, in engine_step
ERROR 03-31 16:21:41 [engine.py:160]     return self.engine.step()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 1430, in step
ERROR 03-31 16:21:41 [engine.py:160]     outputs = self.model_executor.execute_model(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 139, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     output = self.collective_rpc("execute_model",
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-31 16:21:41 [engine.py:160]     answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2313, in run_method
ERROR 03-31 16:21:41 [engine.py:160]     return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 420, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     output = self.model_runner.execute_model(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-31 16:21:41 [engine.py:160]     return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1770, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     hidden_or_intermediate_states = model_executable(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-31 16:21:41 [engine.py:160]     return self._call_impl(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-31 16:21:41 [engine.py:160]     return forward_call(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1080, in forward
ERROR 03-31 16:21:41 [engine.py:160]     inputs_embeds = self.get_input_embeddings_v0(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1015, in get_input_embeddings_v0
ERROR 03-31 16:21:41 [engine.py:160]     inputs_embeds = merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160]     return _merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160]     raise ValueError(
ERROR 03-31 16:21:41 [engine.py:160] ValueError: Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders
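
For reference, a minimal sketch of falling back to the V0 engine with chunked prefill disabled (model path is a placeholder; VLLM_USE_V1 must be set before vllm is imported):

import os

# Select the V0 engine before importing vllm.
os.environ["VLLM_USE_V1"] = "0"

from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # placeholder model path
    enable_chunked_prefill=False,
)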

DefTruth avatar Mar 31 '25 08:03 DefTruth

Yes, chunked prefill is not supported on V0. V1 should work fine though.

DarkLight1337 avatar Apr 01 '25 04:04 DarkLight1337

btw, I also tested it by switching to V0. V0 works fine, so the issue is with V1.

panjiacheng avatar Apr 01 '25 05:04 panjiacheng

Can you show the error log?

DarkLight1337 avatar Apr 01 '25 06:04 DarkLight1337

    completions: List[RequestOutput] = self.inference_engine.generate(
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 1072, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 465, in generate
    outputs = self._run_engine(use_tqdm=use_tqdm)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1375, in _run_engine
    step_outputs = self.llm_engine.step()
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/llm_engine.py", line 220, in step
    outputs = self.engine_core.get_output()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 167, in get_output
    return self.engine_core.step()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 195, in step
    output = self.model_executor.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 77, in execute_model
    output = self.collective_rpc("execute_model",
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 2255, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1000, in execute_model
    inputs_embeds = self.model.get_input_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1303, in get_input_embeddings
    inputs_embeds = merge_multimodal_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 449, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 1369 + 1369 = 2738 multimodal tokens to 2739 placeholders

panjiacheng avatar Apr 01 '25 07:04 panjiacheng

Possibly related to #15677

DarkLight1337 avatar Apr 01 '25 08:04 DarkLight1337

I have seen this occur when sending random inputs to the model, one might accidentally include the <|image|> token in the random distribution leading to errors. If not this, maybe there is an issue with V1 chunked prefill for multimodal?

benchislett avatar Apr 01 '25 16:04 benchislett

Update: after switching to V0, it can run for longer without such errors. But after some time, I still got the error:

  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1379, in forward
    inputs_embeds = self.get_input_embeddings_v0(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1317, in get_input_embeddings_v0
    inputs_embeds = merge_multimodal_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 1369 + 1369 + 1369 + 1369 = 5476 multimodal tokens to 5477 placeholders

panjiacheng avatar Apr 01 '25 16:04 panjiacheng

I have seen this occur when sending random inputs to the model, one might accidentally include the <|image|> token in the random distribution leading to errors. If not this, maybe there is an issue with V1 chunked prefill for multimodal?

@benchislett I double-checked and made sure the input doesn't contain any accidentally added <|image_pad|> tokens. Actually, if there were such tokens, the input wouldn't pass other checks in the code.
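
For context, a minimal version of such a check might look like this sketch (illustrative only; it counts <|image_pad|> tokens in the rendered prompt, which should be exactly one per image before vLLM's processor expands them):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
image_pad_id = tokenizer.convert_tokens_to_ids("<|image_pad|>")

def count_image_pads(prompt: str) -> int:
    # Count <|image_pad|> occurrences in the rendered prompt text.
    input_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    return input_ids.count(image_pad_id)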

panjiacheng avatar Apr 01 '25 16:04 panjiacheng

@DarkLight1337 @Isotr0py Hi guys, I understand that this issue might be specific to Qwen and might be hard to fix. Rather than locating and fixing the issue in the code, is there a workaround such that, when such a case is encountered, vLLM skips that data point and continues inference rather than failing outright? Many thanks!

panjiacheng avatar Apr 01 '25 23:04 panjiacheng

Updates: I figured that this might have something to do with special tokens being generated. I'm working on a fix but setting a small list of "bad_words" can cause CUDA OOM (https://github.com/vllm-project/vllm/issues/15976).

panjiacheng avatar Apr 03 '25 01:04 panjiacheng

Any fix or workaround so far? The fix here (https://github.com/vllm-project/vllm/pull/16229) doesn't seem to be complete yet.

FerryHuang avatar Apr 12 '25 09:04 FerryHuang

You can set top_p to avoid sampling the image tokens
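
For example, something along these lines (values are illustrative; a tighter top_p keeps the low-probability tail, where the special image tokens live, out of the sampled set):

from vllm import SamplingParams

sampling_params = SamplingParams(
    temperature=0.7,  # illustrative values; tune for your workload
    top_p=0.9,
    max_tokens=512,
)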

DarkLight1337 avatar Apr 12 '25 09:04 DarkLight1337

I hit the same bug in version 0.8.5; let me see how to fix it.

xsank avatar May 08 '25 12:05 xsank

I have received your mail, thanks. I'll reply to you soon.

whitelok avatar May 08 '25 12:05 whitelok

I hit the same bug in version 0.8.5; let me see how to fix it.

My problem has been solved; it was a request bug. Some content added an extra 'image_pad' token... 0.8.5 works well.

xsank avatar May 12 '25 01:05 xsank

@panjiacheng @FerryHuang @xsank Have you found a fix?

  • I'm using vllm==0.8.2 with V1 and still see this issue
  • I'm sure the inputs don't contain any extra "<|image_pad|>"
  • This seems due to generated tokens and happens probabilistically, which makes it hard to reproduce
  • None of the attempts below to avoid generating image tokens fix the issue
from vllm import SamplingParams

img_id = 151655  # <|image_pad|>
sampling_params = SamplingParams(
    bad_words=["<|image_pad|>"],   # doesn't work
    stop_token_ids=[img_id],       # doesn't work
    logit_bias={img_id: -100.0},   # doesn't work
)

@DarkLight1337 Any idea?

theophilegervet avatar May 13 '25 02:05 theophilegervet

I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done. So none of the sampling parameters can avoid this problem. The fix is still WIP
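
For anyone curious what actually trips, here is a rough sketch of the consistency check behind the error message (simplified, not vLLM's exact implementation): the number of placeholder positions found in the token ids must match the number of multimodal embedding rows being scattered in.

import torch

def merge_multimodal_embeddings_sketch(
    input_ids: torch.Tensor,       # (num_tokens,)
    inputs_embeds: torch.Tensor,   # (num_tokens, hidden_size)
    mm_embeds: torch.Tensor,       # (num_mm_tokens, hidden_size)
    placeholder_id: int,
) -> torch.Tensor:
    # Scatter the multimodal embeddings into the placeholder positions.
    mask = input_ids == placeholder_id
    num_placeholders = int(mask.sum())
    if num_placeholders != mm_embeds.shape[0]:
        raise ValueError(
            f"Attempted to assign {mm_embeds.shape[0]} multimodal tokens "
            f"to {num_placeholders} placeholders"
        )
    inputs_embeds[mask] = mm_embeds
    return inputs_embeds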

DarkLight1337 avatar May 13 '25 03:05 DarkLight1337

I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done. So none of the sampling parameters can avoid this problem. The fix is still WIP

@DarkLight1337 Hello, has this issue been resolved?

nono-Sang avatar Jul 09 '25 10:07 nono-Sang

No, @ywang96 is still working on refactoring the scheduler to avoid this problem.

DarkLight1337 avatar Jul 09 '25 10:07 DarkLight1337

same issue, any update? thanks a lot!

tiesanguaixia avatar Jul 21 '25 14:07 tiesanguaixia

any updates? same issue here

Yangruipis avatar Jul 24 '25 07:07 Yangruipis

Same issue with vllm==0.8.5. Has the bug been fixed?

zheng980629 avatar Jul 29 '25 02:07 zheng980629

Just an update to the thread - sorry for the delay, but this bug hasn't been fixed yet; we plan to get it fixed by the end of Q3!

ywang96 avatar Jul 29 '25 02:07 ywang96

@panjiacheng, @ywang96 @DarkLight1337 Hi, has this issue been resolved?

Chen-Song avatar Sep 18 '25 06:09 Chen-Song

@Chen-Song Unfortunately we haven't gotten to fixing this due to limited bandwidth, but we already have a tracking PR for it: https://github.com/vllm-project/vllm/pull/16229

ywang96 avatar Sep 18 '25 07:09 ywang96