[Bug]: Qwen2-VL 7B on vLLM 0.8.1 & 0.8.2 sometimes (non-deterministically, depending on the data) raises: ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders
Your current environment
🐛 Describe the bug
I have:
enforce_eager: false
enable_chunked_prefill: false
but I still get the ValueError above.
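For reference, here is a rough sketch of how the engine is created on my side; the model path, prompt, and image below are placeholders rather than my exact setup:

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Sketch only: model path, prompt, and image are placeholders.
llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    enforce_eager=False,
    enable_chunked_prefill=False,
    limit_mm_per_prompt={"image": 1},
)

image = Image.open("example.jpg")  # placeholder image
prompt = (
    "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
    "Describe the image.<|im_end|>\n<|im_start|>assistant\n"
)

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=256),
)
```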
Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/15185 is a similar issue (but on Qwen2.5-VL)
Can you provide the prompt and image to reproduce this bug?
I got the same error.
It seems we have to disable chunked prefill in V0 mode: V1 works fine with chunked prefill, but V0 fails.
ERROR 03-31 16:21:41 [engine.py:160] ValueError('Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders')
ERROR 03-31 16:21:41 [engine.py:160] Traceback (most recent call last):
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 03-31 16:21:41 [engine.py:160] self.run_engine_loop()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 221, in run_engine_loop
ERROR 03-31 16:21:41 [engine.py:160] request_outputs = self.engine_step()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 247, in engine_step
ERROR 03-31 16:21:41 [engine.py:160] raise e
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 230, in engine_step
ERROR 03-31 16:21:41 [engine.py:160] return self.engine.step()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 1430, in step
ERROR 03-31 16:21:41 [engine.py:160] outputs = self.model_executor.execute_model(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 139, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] output = self.collective_rpc("execute_model",
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-31 16:21:41 [engine.py:160] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2313, in run_method
ERROR 03-31 16:21:41 [engine.py:160] return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 420, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] output = self.model_runner.execute_model(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-31 16:21:41 [engine.py:160] return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1770, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] hidden_or_intermediate_states = model_executable(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-31 16:21:41 [engine.py:160] return self._call_impl(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-31 16:21:41 [engine.py:160] return forward_call(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1080, in forward
ERROR 03-31 16:21:41 [engine.py:160] inputs_embeds = self.get_input_embeddings_v0(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1015, in get_input_embeddings_v0
ERROR 03-31 16:21:41 [engine.py:160] inputs_embeds = merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160] return _merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160] raise ValueError(
ERROR 03-31 16:21:41 [engine.py:160] ValueError: Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders
Yes, chunked prefill is not supported on V0. V1 should work fine though.
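For reference, a rough sketch of the two configurations (set the environment variable before creating the engine; adjust to your own launcher):

```python
import os

from vllm import LLM

# Option A: V0 engine with chunked prefill explicitly disabled.
os.environ["VLLM_USE_V1"] = "0"
llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    enable_chunked_prefill=False,
)

# Option B: V1 engine, where chunked prefill is supported.
# os.environ["VLLM_USE_V1"] = "1"
# llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct")
```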
btw, I also tested it by switching to V0. V0 works fine, so the issue is with V1.
Can you show the error log?
completions: List[RequestOutput] = self.inference_engine.generate(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 1072, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 465, in generate
outputs = self._run_engine(use_tqdm=use_tqdm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1375, in _run_engine
step_outputs = self.llm_engine.step()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/llm_engine.py", line 220, in step
outputs = self.engine_core.get_output()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 167, in get_output
return self.engine_core.step()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 195, in step
output = self.model_executor.execute_model(scheduler_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 77, in execute_model
output = self.collective_rpc("execute_model",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 2255, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
output = self.model_runner.execute_model(scheduler_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1000, in execute_model
inputs_embeds = self.model.get_input_embeddings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1303, in get_input_embeddings
inputs_embeds = merge_multimodal_embeddings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 449, in merge_multimodal_embeddings
return _merge_multimodal_embeddings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
raise ValueError(
ValueError: Attempted to assign 1369 + 1369 = 2738 multimodal tokens to 2739 placeholders
Possibly related to #15677
I have seen this occur when sending random inputs to the model; one might accidentally include the <|image|> token in the random distribution, leading to errors. If not this, maybe there is an issue with V1 chunked prefill for multimodal?
Update: after switching to V0, it can run for longer without such errors. But after some time, I still got the error:
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1379, in forward
inputs_embeds = self.get_input_embeddings_v0(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1317, in get_input_embeddings_v0
inputs_embeds = merge_multimodal_embeddings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
return _merge_multimodal_embeddings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
raise ValueError(
ValueError: Attempted to assign 1369 + 1369 + 1369 + 1369 = 5476 multimodal tokens to 5477 placeholders
> I have seen this occur when sending random inputs to the model; one might accidentally include the <|image|> token in the random distribution, leading to errors. If not this, maybe there is an issue with V1 chunked prefill for multimodal?
@benchislett I double checked and made sure that the input doesn't contain any accidentally added <|image_pad|> tokens. Actually, if there were such tokens, the request wouldn't pass other checks in the code.
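For reference, this is roughly how I verify there are no stray image tokens in the raw prompt text (a sketch; 151655 is the <|image_pad|> id in the Qwen2-VL tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
image_pad_id = tokenizer.convert_tokens_to_ids("<|image_pad|>")  # 151655

def count_image_pads(prompt: str) -> int:
    """Count <|image_pad|> tokens in the raw (unexpanded) prompt."""
    token_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    return sum(1 for t in token_ids if t == image_pad_id)

# Before vLLM expands the placeholders, the raw prompt should contain exactly
# one <|image_pad|> per image attached to the request.
```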
@DarkLight1337 @Isotr0py Hi guys, I understand that this issue might be specific to Qwen and might be hard to fix. Rather than locating and fixing the issue in the code, is there a workaround like this: if such a case is encountered, vLLM skips that data point and continues inference (rather than failing outright)? Many thanks!
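The only client-side workaround I could come up with is sketched below: run requests one by one and skip the ones that trigger the mismatch. Note this is only a sketch and may not help when the engine loop itself crashes, as in the tracebacks above.

```python
from vllm import LLM, SamplingParams

def generate_or_skip(llm: LLM, requests, sampling_params: SamplingParams):
    """Run requests one at a time and skip any that hit the placeholder
    mismatch instead of failing the whole batch (hypothetical helper)."""
    results = []
    for request in requests:
        try:
            results.append(llm.generate(request, sampling_params)[0])
        except ValueError as err:
            if "multimodal tokens" in str(err):
                results.append(None)  # skip this data point
            else:
                raise
    return results
```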
Update: I figured that this might have something to do with special tokens being generated. I'm working on a fix, but setting even a small list of "bad_words" can cause CUDA OOM (https://github.com/vllm-project/vllm/issues/15976).
Any fix or workaround so far? The fix here (https://github.com/vllm-project/vllm/pull/16229) does not seem to be complete yet.
You can set top_p to avoid sampling the image tokens.
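For example, something like this (a sketch; the values are arbitrary, and this only makes sampling an image token unlikely rather than impossible):

```python
from vllm import SamplingParams

# Nucleus sampling with a moderate top_p makes it very unlikely that rare
# special tokens such as <|image_pad|> are ever sampled.
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)
```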
I have hit the same bug in version 0.8.5; let me see how to fix this.
> I have hit the same bug in version 0.8.5; let me see how to fix this.
My problem has been solved; it was a request bug. Some content added an extra 'image_pad' token... 0.8.5 works well.
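In case it helps others, a minimal sketch of the sanitization that fixed it for me (the helper is made up; the point is just to strip placeholder tokens from raw user content before applying the chat template):

```python
IMAGE_PAD = "<|image_pad|>"

def sanitize_user_text(text: str) -> str:
    """Drop <|image_pad|> markers that leaked into raw user content; the
    placeholder should only be inserted by the prompt template itself."""
    return text.replace(IMAGE_PAD, "")
```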
@panjiacheng @FerryHuang @xsank Have you found a fix?
- I'm using `vllm==0.8.2` with V1 and still see this issue
- I'm sure the inputs don't contain any extra `<|image_pad|>`
- This seems to be due to generated tokens and happens probabilistically, which makes it hard to reproduce
- None of the attempts below to avoid generating image tokens fix the issue:

```python
img_id = 151655  # <|image_pad|> token id

sampling_params = SamplingParams(
    bad_words=["<|image_pad|>"],  # doesn't work
    stop_token_ids=[img_id],      # doesn't work
    logit_bias={                  # doesn't work
        img_id: -100.0,
    },
)
```
@DarkLight1337 Any idea?
I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done. So none of the sampling parameters can avoid this problem. The fix is still WIP
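For context, the check that fails is essentially a count comparison between placeholder positions and multimodal embedding rows. A simplified sketch (not the exact source) of what `_merge_multimodal_embeddings` does:

```python
import torch

def merge_multimodal_embeddings_sketch(
    input_ids: torch.Tensor,      # (num_tokens,)
    inputs_embeds: torch.Tensor,  # (num_tokens, hidden)
    mm_embeds: torch.Tensor,      # (num_mm_tokens, hidden)
    placeholder_token_id: int,    # e.g. 151655 for <|image_pad|>
) -> torch.Tensor:
    is_placeholder = input_ids == placeholder_token_id
    num_placeholders = int(is_placeholder.sum())
    num_mm_tokens = mm_embeds.shape[0]
    if num_mm_tokens != num_placeholders:
        # This is the mismatch reported in this issue.
        raise ValueError(
            f"Attempted to assign {num_mm_tokens} multimodal tokens "
            f"to {num_placeholders} placeholders"
        )
    inputs_embeds[is_placeholder] = mm_embeds
    return inputs_embeds
```

If scheduling or chunked prefill causes the number of placeholder tokens in the batch to drift from the number of image embedding rows, even by one, this check raises.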
> I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done. So none of the sampling parameters can avoid this problem. The fix is still WIP
@DarkLight1337 Hello, has this issue been resolved?
No, @ywang96 is still working on refactoring the scheduler to avoid this problem.
Same issue, any update? Thanks a lot!
Any updates? Same issue here.
Same issue with vllm==0.8.5. Has the bug been fixed?
Just an update to the thread: sorry for the delay, but this bug hasn't been fixed yet. We plan to get it fixed by the end of Q3!
@panjiacheng, @ywang96 @DarkLight1337 Hi, has this issue been resolved?
@Chen-Song Unfortunately we haven't gotten to fixing this due to limited bandwidth, but we already have a tracking PR for it: https://github.com/vllm-project/vllm/pull/16229