
Requests return 400 Bad Request after deploying with vLLM

learningpro opened this issue 11 months ago · 1 comment

vLLM version is 0.7.3; the client is the latest version.

```
INFO:     172.17.0.1:57208 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
INFO 03-13 11:21:13 metrics.py:455] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 03-13 11:21:25 logger.py:39] Received request chatcmpl-eaf5717dfe3f41c784f6bdce0c215b91: prompt: 'User: You are a GUI agent. You are given a task and your action history, with screenshots. You need to perform the next action to complete the task.\n\n## Output Format\n\nThought: ...\nAction: ...\n\n\n## Action Space\nclick(start_box='[x1, y1, x2, y2]')\nleft_double(start_box='[x1, y1, x2, y2]')\nright_single(start_box='[x1, y1, x2, y2]')\ndrag(start_box='[x1, y1, x2, y2]', end_box='[x3, y3, x4, y4]')\nhotkey(key='')\ntype(content='') #If you want to submit your input, use "\n" at the end of content.\nscroll(start_box='[x1, y1, x2, y2]', direction='down or up or right or left')\nwait() #Sleep for 5s and take a screenshot to check for any changes.\nfinished()\ncall_user() # Submit the task and call the user when the task is unsolvable, or when you need the user's help.\n\n## Note\n- Use Chinese in Thought part.\n- Write a small plan and finally summarize your next action (with its target element) in one sentence in Thought part.\n\n## User Instruction\n打开pt.sjtu.edu.cn 查看 体育 相关分类有哪些新内容<|im_end|>\nUser: <|vision_start|><|image_pad|><|vision_end|><|im_end|>\nAssistant: ', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 03-13 11:21:25 engine.py:280] Added request chatcmpl-eaf5717dfe3f41c784f6bdce0c215b91.
INFO 03-13 11:21:28 metrics.py:455] Avg prompt throughput: 337.4 tokens/s, Avg generation throughput: 10.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.4%, CPU KV cache usage: 0.0%.
INFO:     172.17.0.1:39776 - "POST /v1/chat/completions HTTP/1.1" 200 OK
ERROR 03-13 11:21:31 serving_chat.py:197] Error in preprocessing prompt inputs
ERROR 03-13 11:21:31 serving_chat.py:197] Traceback (most recent call last):
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/serving_chat.py", line 181, in create_chat_completion
ERROR 03-13 11:21:31 serving_chat.py:197]     ) = await self._preprocess_chat(
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/serving_engine.py", line 388, in _preprocess_chat
ERROR 03-13 11:21:31 serving_chat.py:197]     conversation, mm_data_future = parse_chat_messages_futures(
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 951, in parse_chat_messages_futures
ERROR 03-13 11:21:31 serving_chat.py:197]     sub_messages = _parse_chat_message_content(
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 879, in _parse_chat_message_content
ERROR 03-13 11:21:31 serving_chat.py:197]     result = _parse_chat_message_content_parts(
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 782, in _parse_chat_message_content_parts
ERROR 03-13 11:21:31 serving_chat.py:197]     parse_res = _parse_chat_message_content_part(
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 839, in _parse_chat_message_content_part
ERROR 03-13 11:21:31 serving_chat.py:197]     mm_parser.parse_image(str_content)
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 574, in parse_image
ERROR 03-13 11:21:31 serving_chat.py:197]     placeholder = self._tracker.add("image", image_coro)
ERROR 03-13 11:21:31 serving_chat.py:197]   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/chat_utils.py", line 452, in add
ERROR 03-13 11:21:31 serving_chat.py:197]     raise ValueError(
ERROR 03-13 11:21:31 serving_chat.py:197] ValueError: At most 1 image(s) may be provided in one request.
INFO:     172.17.0.1:39788 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
```

The identical traceback repeats at 11:21:33 and 11:21:37, and each of those requests also returns 400 Bad Request.
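The pattern in the logs is telling: the first request (a single screenshot) succeeds with 200 OK, and every request after it fails with `ValueError: At most 1 image(s) may be provided in one request`. vLLM caps multimodal inputs at one image per request by default, while UI-TARS-desktop sends the accumulated screenshot history (one image per step) once the action history grows. A commonly suggested workaround (not confirmed in this thread) is to raise the cap at server launch with `--limit-mm-per-prompt`. A sketch, where the model path, limit value, and other flags are placeholders for your deployment:

```shell
# Hypothetical launch command for vLLM 0.7.x: raise the per-request image cap
# so multi-screenshot histories are accepted. Adjust model path and limit to
# match your setup; "image=5" allows up to 5 images per request.
vllm serve /path/to/UI-TARS-7B-DPO \
    --limit-mm-per-prompt image=5 \
    --served-model-name ui-tars
```

Alternatively, if the client supports it, configure it to send only the most recent screenshot per request so the default limit of 1 is never exceeded.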

learningpro · Mar 13 '25

```
ERROR 03-13 11:48:37 serving_chat.py:158] Error in applying chat template from request
ERROR 03-13 11:48:37 serving_chat.py:158] Traceback (most recent call last):
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_chat.py", line 126, in create_chat_completion
ERROR 03-13 11:48:37 serving_chat.py:158]     conversation, mm_data_future = parse_chat_messages_futures(
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 529, in parse_chat_messages_futures
ERROR 03-13 11:48:37 serving_chat.py:158]     sub_messages = _parse_chat_message_content(msg, mm_tracker)
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 464, in _parse_chat_message_content
ERROR 03-13 11:48:37 serving_chat.py:158]     result = _parse_chat_message_content_parts(
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 416, in _parse_chat_message_content_parts
ERROR 03-13 11:48:37 serving_chat.py:158]     mm_parser.parse_image(image_url["url"])
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 298, in parse_image
ERROR 03-13 11:48:37 serving_chat.py:158]     placeholder = self._tracker.add("image", image_coro)
ERROR 03-13 11:48:37 serving_chat.py:158]   File "/mnt/afs2/my_conda_env/tooluse/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 207, in add
ERROR 03-13 11:48:37 serving_chat.py:158]     raise ValueError(
ERROR 03-13 11:48:37 serving_chat.py:158] ValueError: At most 1 image(s) may be provided in one request.
INFO:     111.202.148.129:12485 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
```

I'm running into the same problem.
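Until the server limit is raised, a client-side workaround is to check how many image parts an OpenAI-style `messages` payload contains and trim the oldest screenshots before sending. A minimal sketch, assuming the standard chat-completions message shape (the helper names `count_images` / `trim_images` are ours, not from UI-TARS-desktop or vLLM):

```python
# Count image_url parts in an OpenAI-style chat payload and drop the oldest
# screenshots so the request stays within the server's per-request image limit.

def count_images(messages):
    """Return the total number of image_url parts across all messages."""
    n = 0
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            n += sum(1 for part in content if part.get("type") == "image_url")
    return n

def trim_images(messages, limit):
    """Remove the oldest image_url parts until at most `limit` remain."""
    excess = count_images(messages) - limit
    if excess <= 0:
        return messages
    trimmed = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            new_content = []
            for part in content:
                # Skip the earliest images first; keep text parts untouched.
                if part.get("type") == "image_url" and excess > 0:
                    excess -= 1
                    continue
                new_content.append(part)
            trimmed.append({**msg, "content": new_content})
        else:
            trimmed.append(msg)
    return trimmed
```

Calling `trim_images(messages, 1)` before each request would keep only the most recent screenshot, matching vLLM's default cap, at the cost of losing visual history.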

Hansen06 · Mar 13 '25