[Bug]: Local vllm model error
Version
1.0.0
Model
UI-TARS-2B-SFT
Deployment Method
Local
Issue Description
Model deployed locally with vLLM on a MacBook Pro M1. UI-TARS prompt: "Please tell me the weather in SF through web browser."
vLLM throws an error during execution, and UI-TARS reports an error as well.
Error Logs
vllm:
/opt/homebrew/Cellar/[email protected]/3.11.11/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
~/github/vllm main ❯ vllm serve /Users/shaunxu/huggingface/UI-TARS-2B-SFT 4m 35s vllm
INFO 03-26 16:00:01 [__init__.py:239] Automatically detected platform cpu.
INFO 03-26 16:00:02 [api_server.py:981] vLLM API server version 0.8.3.dev3+gd20e26119
INFO 03-26 16:00:02 [api_server.py:982] args: Namespace(subparser='serve', model_tag='/Users/shaunxu/huggingface/UI-TARS-2B-SFT', config='', host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/Users/shaunxu/huggingface/UI-TARS-2B-SFT', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, 
kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x16c667920>)
INFO 03-26 16:00:02 [config.py:2586] For macOS with Apple Silicon, currently bfloat16 is not supported. Setting dtype to float16.
WARNING 03-26 16:00:02 [config.py:2617] Casting torch.bfloat16 to torch.float16.
INFO 03-26 16:00:05 [config.py:588] This model supports multiple tasks: {'embed', 'reward', 'score', 'generate', 'classify'}. Defaulting to 'generate'.
WARNING 03-26 16:00:05 [arg_utils.py:1843] device type=cpu is not supported by the V1 Engine. Falling back to V0.
WARNING 03-26 16:00:05 [cpu.py:97] Environment variable VLLM_CPU_KVCACHE_SPACE (GiB) for CPU backend is not set, using 4 by default.
WARNING 03-26 16:00:05 [cpu.py:110] uni is not supported on CPU, fallback to mp distributed executor backend.
INFO 03-26 16:00:05 [api_server.py:241] Started engine process with PID 5991
INFO 03-26 16:00:06 [__init__.py:239] Automatically detected platform cpu.
INFO 03-26 16:00:07 [llm_engine.py:241] Initializing a V0 LLM engine (v0.8.3.dev3+gd20e26119) with config: model='/Users/shaunxu/huggingface/UI-TARS-2B-SFT', speculative_config=None, tokenizer='/Users/shaunxu/huggingface/UI-TARS-2B-SFT', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/Users/shaunxu/huggingface/UI-TARS-2B-SFT, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 03-26 16:00:07 [cpu.py:43] Using Torch SDPA backend.
INFO 03-26 16:00:07 [importing.py:16] Triton not installed or not compatible; certain GPU-related functions will not be available.
INFO 03-26 16:00:08 [parallel_state.py:954] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0
INFO 03-26 16:00:08 [config.py:3246] cudagraph sizes specified by model runner [] is overridden by config [256, 128, 2, 1, 4, 136, 8, 144, 16, 152, 24, 160, 32, 168, 40, 176, 48, 184, 56, 192, 64, 200, 72, 208, 80, 216, 88, 120, 224, 96, 232, 104, 240, 112, 248]
WARNING 03-26 16:00:08 [cpu.py:97] Environment variable VLLM_CPU_KVCACHE_SPACE (GiB) for CPU backend is not set, using 4 by default.
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:04<00:04, 4.70s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:08<00:00, 4.10s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:08<00:00, 4.19s/it]
INFO 03-26 16:00:16 [loader.py:447] Loading weights took 8.39 seconds
INFO 03-26 16:00:16 [executor_base.py:111] # cpu blocks: 9362, # CPU blocks: 0
INFO 03-26 16:00:16 [executor_base.py:116] Maximum concurrency for 32768 tokens per request: 4.57x
INFO 03-26 16:00:17 [llm_engine.py:447] init engine (profile, create kv cache, warmup model) took 0.65 seconds
INFO 03-26 16:00:17 [api_server.py:1028] Starting vLLM API server on http://0.0.0.0:8000
INFO 03-26 16:00:17 [launcher.py:26] Available routes are:
INFO 03-26 16:00:17 [launcher.py:34] Route: /openapi.json, Methods: HEAD, GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /docs, Methods: HEAD, GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /redoc, Methods: HEAD, GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /health, Methods: GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /load, Methods: GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /ping, Methods: POST, GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /tokenize, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /detokenize, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/models, Methods: GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /version, Methods: GET
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/chat/completions, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/completions, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/embeddings, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /pooling, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /score, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/score, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/audio/transcriptions, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /rerank, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v1/rerank, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /v2/rerank, Methods: POST
INFO 03-26 16:00:17 [launcher.py:34] Route: /invocations, Methods: POST
INFO: Started server process [5980]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO 03-26 16:01:02 [chat_utils.py:379] Detected the chat template content format to be 'openai'. You can set `--chat-template-content-format` to override this.
INFO 03-26 16:01:02 [logger.py:39] Received request chatcmpl-b261327e07524e8dac606bdc1c173ea2: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nYou are a GUI agent. You are given a task and your action history, with screenshots. You need to perform the next action to complete the task.\n\n## Output Format\n```\nThought: ...\nAction: ...\n```\n\n## Action Space\nclick(start_box=\'[x1, y1, x2, y2]\')\nleft_double(start_box=\'[x1, y1, x2, y2]\')\nright_single(start_box=\'[x1, y1, x2, y2]\')\ndrag(start_box=\'[x1, y1, x2, y2]\', end_box=\'[x3, y3, x4, y4]\')\nhotkey(key=\'\')\ntype(content=\'\') #If you want to submit your input, use "\\n" at the end of `content`.\nscroll(start_box=\'[x1, y1, x2, y2]\', direction=\'down or up or right or left\')\nwait() #Sleep for 5s and take a screenshot to check for any changes.\nfinished()\ncall_user() # Submit the task and call the user when the task is unsolvable, or when you need the user\'s help.\n\n## Note\n- Use English in `Thought` part.\n- Write a small plan and finally summarize your next action (with its target element) in one sentence in `Thought` part.\n\n## User Instruction\nFind the weather of SF in web browser<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|><|im_end|>\n<|im_start|>assistant\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
INFO 03-26 16:01:03 [engine.py:310] Added request chatcmpl-b261327e07524e8dac606bdc1c173ea2.
WARNING 03-26 16:05:19 [cpu.py:154] Pin memory is not supported on CPU.
INFO 03-26 16:05:19 [metrics.py:481] Avg prompt throughput: 6.4 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.1%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:24 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 14.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.2%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:29 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 14.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.2%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:34 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.3%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:39 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.3%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:44 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.4%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:50 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 12.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.4%, CPU KV cache usage: 0.0%.
INFO 03-26 16:05:55 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.4%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:00 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.5%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:05 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 12.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.5%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:10 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.6%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:15 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.6%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:20 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.7%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:25 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.7%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:30 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.8%, CPU KV cache usage: 0.0%.
INFO 03-26 16:06:35 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.1 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO: 127.0.0.1:54603 - "POST /v1/chat/completions HTTP/1.1" 200 OK
ERROR 03-26 16:06:36 [serving_chat.py:201] Error in preprocessing prompt inputs
ERROR 03-26 16:06:36 [serving_chat.py:201] Traceback (most recent call last):
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/openai/serving_chat.py", line 185, in create_chat_completion
ERROR 03-26 16:06:36 [serving_chat.py:201] ) = await self._preprocess_chat(
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/openai/serving_engine.py", line 391, in _preprocess_chat
ERROR 03-26 16:06:36 [serving_chat.py:201] conversation, mm_data_future = parse_chat_messages_futures(
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 1120, in parse_chat_messages_futures
ERROR 03-26 16:06:36 [serving_chat.py:201] sub_messages = _parse_chat_message_content(
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 1048, in _parse_chat_message_content
ERROR 03-26 16:06:36 [serving_chat.py:201] result = _parse_chat_message_content_parts(
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 948, in _parse_chat_message_content_parts
ERROR 03-26 16:06:36 [serving_chat.py:201] parse_res = _parse_chat_message_content_part(
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 1005, in _parse_chat_message_content_part
ERROR 03-26 16:06:36 [serving_chat.py:201] mm_parser.parse_image(str_content)
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 706, in parse_image
ERROR 03-26 16:06:36 [serving_chat.py:201] placeholder = self._tracker.add("image", image_coro)
ERROR 03-26 16:06:36 [serving_chat.py:201] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 16:06:36 [serving_chat.py:201] File "/Users/shaunxu/github/vllm/vllm/entrypoints/chat_utils.py", line 529, in add
ERROR 03-26 16:06:36 [serving_chat.py:201] raise ValueError(
ERROR 03-26 16:06:36 [serving_chat.py:201] ValueError: At most 1 image(s) may be provided in one request.
/Users/shaunxu/github/vllm/vllm/entrypoints/openai/serving_chat.py:202: RuntimeWarning: coroutine 'MediaConnector.fetch_image_async' was never awaited
return self.create_error_response(str(e))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
INFO: 127.0.0.1:54603 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
[The traceback above, ending in "ValueError: At most 1 image(s) may be provided in one request" followed by a 400 Bad Request response, repeats identically at 16:06:38 and 16:06:41.]
INFO 03-26 16:06:45 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
[The same traceback and 400 Bad Request response repeat once more at 16:06:47, from 127.0.0.1:54729.]
^CINFO 03-26 17:14:40 [launcher.py:74] Shutting down FastAPI HTTP server.
Exception ignored in: <function Socket.__del__ at 0x14bc5cf40>
Traceback (most recent call last):
File "/Users/shaunxu/github/vllm/.venv/lib/python3.11/site-packages/zmq/sugar/socket.py", line 181, in __del__
def __del__(self):
File "/Users/shaunxu/github/vllm/vllm/engine/multiprocessing/engine.py", line 426, in signal_handler
raise KeyboardInterrupt("MQLLMEngine terminated")
KeyboardInterrupt: MQLLMEngine terminated
INFO: Shutting down
^CINFO: Waiting for application shutdown.
INFO: Application shutdown complete.
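As an aside, a minimal text-only request against the endpoint shown in the log above can confirm that the server itself responds; this is only a sketch, and the model name must match the served path:

# Sanity-check the running server with a single text message (no images).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/Users/shaunxu/huggingface/UI-TARS-2B-SFT",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 16
  }'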
UI-Tars
~/github/UI-TARS-desktop main ❯ npm run dev:ui-tars ✘ INT
npm warn Unknown project config "shamefully-hoist". This will stop working in the next major version of npm.
npm warn Unknown project config "node-linker". This will stop working in the next major version of npm.
> [email protected] dev:ui-tars
> turbo run ui-tars-desktop#dev
turbo 2.4.4
• Packages in scope: @agent-infra/bing-search, @agent-infra/browser, @agent-infra/browser-search, @agent-infra/browser-use, @agent-infra/duckduckgo-search, @agent-infra/logger, @agent-infra/mcp-client, @agent-infra/mcp-server-browser, @agent-infra/mcp-server-commands, @agent-infra/mcp-server-filesystem, @agent-infra/mcp-server-shared, @agent-infra/search, @agent-infra/shared, @common/configs, @common/electron-build, @ui-tars/action-parser, @ui-tars/cli, @ui-tars/electron-ipc, @ui-tars/operator-browser, @ui-tars/operator-browserbase, @ui-tars/operator-nut-js, @ui-tars/sdk, @ui-tars/shared, @ui-tars/utio, agent-tars-app, open-agent-renderer, ui-tars-desktop, ui-tars-desktop-renderer
• Running ui-tars-desktop#dev in 28 packages
• Remote caching disabled
ui-tars-desktop:dev: cache bypass, force executing 93ecd95b31c306c6
ui-tars-desktop:dev:
ui-tars-desktop:dev: > [email protected] dev /Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars
ui-tars-desktop:dev: > electron-vite dev
ui-tars-desktop:dev:
ui-tars-desktop:dev: vite v6.2.2 building SSR bundle for development...
../../node_modules/file-type/core.js (1419:16): Use of eval in "../../node_modules/file-type/core.js" is strongly discouraged as it poses security risks and may cause issues with minification.
✓ 1369 modules transformed.
dist/main/index-CtsU6c2f.js 0.68 kB
ui-tars-desktop:dev: dist/main/fileFromPath-1g7i6PBK.js 4.66 kB
ui-tars-desktop:dev: dist/main/index-D7bHkL9b.js 20.76 kB
ui-tars-desktop:dev: dist/main/source-map-support-BuJwx0ie.js 80.80 kB
ui-tars-desktop:dev: dist/main/systemPermissions-CJorBJOQ.js 105.88 kB
ui-tars-desktop:dev: dist/main/index-D2rmeTgy.js 282.26 kB
ui-tars-desktop:dev: dist/main/main.js 3,332.52 kB
ui-tars-desktop:dev: ✓ built in 2.82s
ui-tars-desktop:dev:
ui-tars-desktop:dev: build the electron main process successfully
ui-tars-desktop:dev:
ui-tars-desktop:dev: -----
ui-tars-desktop:dev:
ui-tars-desktop:dev: vite v6.2.2 building SSR bundle for development...
✓ 1 modules transformed.
dist/preload/index.js 2.02 kB
ui-tars-desktop:dev: ✓ built in 6ms
ui-tars-desktop:dev:
ui-tars-desktop:dev: build the electron preload files successfully
ui-tars-desktop:dev:
ui-tars-desktop:dev: -----
ui-tars-desktop:dev:
dev server running for the electron renderer process at:
ui-tars-desktop:dev:
ui-tars-desktop:dev: ➜ Local: http://localhost:5173/
ui-tars-desktop:dev: ➜ Network: use --host to expose
ui-tars-desktop:dev:
ui-tars-desktop:dev: start electron app...
ui-tars-desktop:dev:
ui-tars-desktop:dev: 16:00:28.529 (main) › [env] {
ui-tars-desktop:dev: forceDownload: false,
ui-tars-desktop:dev: isDev: true,
ui-tars-desktop:dev: isE2eTest: false,
ui-tars-desktop:dev: isLinux: false,
ui-tars-desktop:dev: isMacOS: true,
ui-tars-desktop:dev: isProd: false,
ui-tars-desktop:dev: isWindows: false,
ui-tars-desktop:dev: isWindows11: false,
ui-tars-desktop:dev: mode: 'development',
ui-tars-desktop:dev: port: 1212,
ui-tars-desktop:dev: rendererUrl: 'http://localhost:5173'
ui-tars-desktop:dev: }
ui-tars-desktop:dev: 16:00:28.693 (main) › isAccessibilityEnabled true
ui-tars-desktop:dev: 16:00:28.699 (main) › Has asked permissions? true
ui-tars-desktop:dev: 16:00:28.713 (main) › Has permissions? true
ui-tars-desktop:dev: 16:00:28.714 (main) › Has asked permissions? true
ui-tars-desktop:dev: 16:00:28.727 (main) › [accessibilityStatus] authorized
ui-tars-desktop:dev: 16:00:28.741 (main) › [ensurePermissions] hasScreenRecordingPermission true hasAccessibilityPermission true
ui-tars-desktop:dev: 16:00:28.742 (main) › ensureScreenCapturePermission { screenCapture: true, accessibility: true }
ui-tars-desktop:dev: 16:00:28.749 (main) › createTray
ui-tars-desktop:dev: 16:00:28.784 (main) › [UTIO] endpoint:
ui-tars-desktop:dev: 16:00:28.785 (main) › createMainWindow
ui-tars-desktop:dev: 16:00:28.785 (main) › [createWindow]: routerPath: / config: {
ui-tars-desktop:dev: show: false,
ui-tars-desktop:dev: width: 430,
ui-tars-desktop:dev: height: 580,
ui-tars-desktop:dev: movable: true,
ui-tars-desktop:dev: alwaysOnTop: false,
ui-tars-desktop:dev: webPreferences: {
ui-tars-desktop:dev: preload: '/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/preload/index.js',
ui-tars-desktop:dev: sandbox: false,
ui-tars-desktop:dev: webSecurity: true
ui-tars-desktop:dev: },
ui-tars-desktop:dev: titleBarStyle: 'hiddenInset',
ui-tars-desktop:dev: trafficLightPosition: { x: 16, y: 16 },
ui-tars-desktop:dev: visualEffectState: 'active',
ui-tars-desktop:dev: vibrancy: 'under-window',
ui-tars-desktop:dev: transparent: true
ui-tars-desktop:dev: }
ui-tars-desktop:dev: renderer url http://localhost:5173
ui-tars-desktop:dev: mainWindowBounds { x: 745, y: 194, width: 430, height: 580 }
ui-tars-desktop:dev: 16:00:28.846 (main) › [createWindow]: routerPath: #settings/ config: {
ui-tars-desktop:dev: show: false,
ui-tars-desktop:dev: width: 480,
ui-tars-desktop:dev: height: 600,
ui-tars-desktop:dev: movable: true,
ui-tars-desktop:dev: alwaysOnTop: true,
ui-tars-desktop:dev: webPreferences: {
ui-tars-desktop:dev: preload: '/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/preload/index.js',
ui-tars-desktop:dev: sandbox: false,
ui-tars-desktop:dev: webSecurity: true
ui-tars-desktop:dev: },
ui-tars-desktop:dev: titleBarStyle: 'hiddenInset',
ui-tars-desktop:dev: trafficLightPosition: { x: 16, y: 16 },
ui-tars-desktop:dev: visualEffectState: 'active',
ui-tars-desktop:dev: vibrancy: 'under-window',
ui-tars-desktop:dev: transparent: true,
ui-tars-desktop:dev: x: 720,
ui-tars-desktop:dev: y: 184,
ui-tars-desktop:dev: resizable: false
ui-tars-desktop:dev: }
ui-tars-desktop:dev: renderer url http://localhost:5173
ui-tars-desktop:dev: 16:00:28.858 (main) › update-electron-app config looks good; aborting updates since app is in development mode
ui-tars-desktop:dev: 16:00:28.858 (main) › mainZustandBridge
ui-tars-desktop:dev: 16:00:28.858 (main) › initializeApp end
ui-tars-desktop:dev: 16:00:28.859 (main) › TypeError: installExtension is not a function
ui-tars-desktop:dev: at /Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:88437:25
ui-tars-desktop:dev: 16:00:28.863 (main) › app.whenReady end
ui-tars-desktop:dev: 2025-03-26 16:00:29.381 Electron[6083:4117385] +[IMKClient subclass]: chose IMKClient_Modern
ui-tars-desktop:dev: 2025-03-26 16:00:29.381 Electron[6083:4117385] +[IMKInputSession subclass]: chose IMKInputSession_Modern
ui-tars-desktop:dev: 16:00:29.702 (main) › Has asked permissions? true
ui-tars-desktop:dev: 16:00:29.717 (main) › Has permissions? true
ui-tars-desktop:dev: 16:00:29.717 (main) › Has asked permissions? true
ui-tars-desktop:dev: 16:00:29.718 (main) › [accessibilityStatus] authorized
ui-tars-desktop:dev: 16:00:29.732 (main) › [ensurePermissions] hasScreenRecordingPermission true hasAccessibilityPermission true
ui-tars-desktop:dev: [getEnsurePermissions] ensurePermissions { screenCapture: true, accessibility: true }
ui-tars-desktop:dev: 16:00:41.999 (main) › SettingStore: {"language":"en","vlmProvider":"vLLM","vlmBaseUrl":"http://localhost:8000/v1/","vlmApiKey":"","vlmModelName":"/Users/shaunxu/huggingface/UI-TARS-7B-DPO","reportStorageBaseUrl":"","utioBaseUrl":""} changed to {"language":"en","vlmProvider":"vLLM","vlmBaseUrl":"http://localhost:8000/v1/","vlmApiKey":"","vlmModelName":"/Users/shaunxu/huggingface/UI-TARS-2B-SFT","reportStorageBaseUrl":"","utioBaseUrl":""}
ui-tars-desktop:dev: 16:01:00.653 (main) › runAgent
ui-tars-desktop:dev: 2025-03-26 16:01:00.671 Electron[6083:4117385] NSWindow does not support nonactivating panel styleMask 0x80
ui-tars-desktop:dev: 2025-03-26 16:01:00.671 Electron[6083:4117385] NSWindow does not support nonactivating panel styleMask 0x80
ui-tars-desktop:dev: 16:01:00.678 (main) › [UTIO] endpoint:
ui-tars-desktop:dev: 16:01:00.679 (main) › [status] running 0
ui-tars-desktop:dev: 16:01:00.679 (main) › ======data======
ui-tars-desktop:dev: null null {} running
ui-tars-desktop:dev: ========
ui-tars-desktop:dev: [run_data_status] running
ui-tars-desktop:dev: 16:01:00.680 (main) › [screenshot] [primaryDisplay] logicalSize: { width: 1920, height: 1080 } scaleFactor: 1
ui-tars-desktop:dev: 16:01:01.180 (main) › [status] running 1
ui-tars-desktop:dev: 16:01:01.181 (main) › ======data======
ui-tars-desktop:dev: null {
ui-tars-desktop:dev: size: { width: 1920, height: 1080 },
ui-tars-desktop:dev: mime: 'image/jpeg',
ui-tars-desktop:dev: scaleFactor: 1
ui-tars-desktop:dev: } {
ui-tars-desktop:dev: from: 'human',
ui-tars-desktop:dev: value: '<image>',
ui-tars-desktop:dev: timing: { start: 1742976060680, end: 1742976061180, cost: 500 }
ui-tars-desktop:dev: } running
ui-tars-desktop:dev: ========
ui-tars-desktop:dev: sysctlbyname for kern.hv_vmm_present failed with status -1
ui-tars-desktop:dev: 16:06:35.476 (main) › [UITarsModel cost]: 334037ms
ui-tars-desktop:dev: Failed to parse action '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!…' [long run of "!" truncated]: Error: Not a function call
ui-tars-desktop:dev: 16:06:35.477 (main) › [GUIAgent Response]: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!… [long run of "!" truncated]
ui-tars-desktop:dev: 16:06:35.478 (main) › GUIAgent Parsed Predictions: [{"reflection":null,"thought":"","action_type":"","action_inputs":{}}]
ui-tars-desktop:dev: 16:06:35.478 (main) › [status] running 1
ui-tars-desktop:dev: 16:06:35.479 (main) › ======data======
ui-tars-desktop:dev: [
ui-tars-desktop:dev: { reflection: null, thought: '', action_type: '', action_inputs: {} }
ui-tars-desktop:dev: ] { size: { width: 1920, height: 1080 }, scaleFactor: 1 } {
ui-tars-desktop:dev: from: 'gpt',
ui-tars-desktop:dev: value: '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!… [long run of "!" truncated]',
ui-tars-desktop:dev: timing: { start: 1742976060680, end: 1742976395478, cost: 334798 }
ui-tars-desktop:dev: } running
ui-tars-desktop:dev: ========
ui-tars-desktop:dev: 2025-03-26 16:06:35.490 Electron[6083:4117385] NSWindow does not support nonactivating panel styleMask 0x80
ui-tars-desktop:dev: 2025-03-26 16:06:35.490 Electron[6083:4117385] NSWindow does not support nonactivating panel styleMask 0x80
ui-tars-desktop:dev: 16:06:35.495 (main) › GUIAgent Action:
ui-tars-desktop:dev: 16:06:35.496 (main) › GUIAgent Action Inputs: {}
ui-tars-desktop:dev: 16:06:35.496 (main) › [NutjsOperator] execute 1
ui-tars-desktop:dev: 16:06:35.496 (main) › [NutjsOperator Position]: (null, null)
ui-tars-desktop:dev: 16:06:35.497 (main) › Unsupported action:
ui-tars-desktop:dev: [run_data_status] running
ui-tars-desktop:dev: 16:06:35.497 (main) › [screenshot] [primaryDisplay] logicalSize: { width: 1920, height: 1080 } scaleFactor: 1
ui-tars-desktop:dev: 16:06:35.913 (main) › [status] running 1
ui-tars-desktop:dev: 16:06:35.914 (main) › ======data======
ui-tars-desktop:dev: null {
ui-tars-desktop:dev: size: { width: 1920, height: 1080 },
ui-tars-desktop:dev: mime: 'image/jpeg',
ui-tars-desktop:dev: scaleFactor: 1
ui-tars-desktop:dev: } {
ui-tars-desktop:dev: from: 'human',
ui-tars-desktop:dev: value: '<image>',
ui-tars-desktop:dev: timing: { start: 1742976395497, end: 1742976395913, cost: 416 }
ui-tars-desktop:dev: } running
ui-tars-desktop:dev: ========
ui-tars-desktop:dev: 16:06:36.488 (main) › [UITarsModel] error Error: 400 status code (no body)
ui-tars-desktop:dev: at APIError.generate (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:62125:14)
ui-tars-desktop:dev: at OpenAI.makeStatusError (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:62925:21)
ui-tars-desktop:dev: at OpenAI.makeRequest (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:62969:24)
ui-tars-desktop:dev: at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ui-tars-desktop:dev: at async UITarsModel.invokeModelProvider (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:66902:20)
ui-tars-desktop:dev: at async UITarsModel.invoke (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:66927:20)
ui-tars-desktop:dev: at async retries._retry_model (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:67108:28)
ui-tars-desktop:dev: at async GUIAgent.run (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:67106:51)
ui-tars-desktop:dev: at async /Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:88216:5
ui-tars-desktop:dev: at async hideWindowBlock (/Users/shaunxu/github/UI-TARS-desktop/apps/ui-tars/dist/main/main.js:14693:20)
ui-tars-desktop:dev: 16:06:36.488 (main) › [UITarsModel cost]: 37ms
ui-tars-desktop:dev: [The same 400 error and stack trace repeat at 16:06:38 (cost 15ms), 16:06:41 (cost 17ms), and 16:06:47 (cost 17ms).]
ui-tars-desktop:dev: 16:06:47.904 (main) › [GUIAgent] run error Error: 400 status code (no body)
ui-tars-desktop:dev: [same stack trace as above]
ui-tars-desktop:dev: 16:06:47.904 (main) › [runAgent error] {
ui-tars-desktop:dev: language: 'en',
ui-tars-desktop:dev: vlmProvider: 'vLLM',
ui-tars-desktop:dev: vlmBaseUrl: 'http://localhost:8000/v1/',
ui-tars-desktop:dev: vlmApiKey: '',
ui-tars-desktop:dev: vlmModelName: '/Users/shaunxu/huggingface/UI-TARS-2B-SFT',
ui-tars-desktop:dev: reportStorageBaseUrl: '',
ui-tars-desktop:dev: utioBaseUrl: ''
ui-tars-desktop:dev: } {
ui-tars-desktop:dev: code: -1,
ui-tars-desktop:dev: error: 'GUIAgent Service Error',
ui-tars-desktop:dev: stack: 'Error: 400 status code (no body)'
ui-tars-desktop:dev: }
ui-tars-desktop:dev: 16:06:47.905 (main) › [status] end 0
ui-tars-desktop:dev: 16:06:47.905 (main) › ======data======
ui-tars-desktop:dev: null null {} end
ui-tars-desktop:dev: ========
ui-tars-desktop:dev: 16:06:47.907 (main) › [GUIAgent] finally: status end
ui-tars-desktop:dev: 16:06:47.907 (main) › [runAgentLoop error] Error: 400 status code (no body)
ui-tars-desktop:dev: [same stack trace as above]
ui-tars-desktop:dev: 16:23:49.245 (main) › log file cleared
Tasks: 1 successful, 1 total
Cached: 0 cached, 1 total
Time: 23m25.272s
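One observation on the degenerate "!" output above: the vLLM log shows bfloat16 being cast to float16 on Apple Silicon ("Casting torch.bfloat16 to torch.float16" at 16:00:02), and fp16 overflow on the CPU backend can produce garbage tokens. As an experiment (an assumption, not a confirmed fix), the server can be forced to float32:

# Run the model in float32 to rule out fp16 overflow as the cause of the
# "!" output; slower, but numerically safer on the CPU backend.
vllm serve /Users/shaunxu/huggingface/UI-TARS-2B-SFT --dtype float32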
Thank you for your feedback! We'd like to share that we have identified a significant number of issues and are working to address them as quickly and thoroughly as possible.
Regarding the problem you reported, here are two solutions you can try first:
- Solution 1: Try to debug GUIAgent within the app yourself, in combination with our contribution guide.
- Solution 2: If Solution 1 doesn't work (which may once again demonstrate the instability of local deployment), test with the cloud-deployed UI-TARS model instead.
If you still encounter any problems, please feel free to continue communicating.
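Also, the repeated "ValueError: At most 1 image(s) may be provided in one request" in your vLLM log points to the server's default multimodal limit of one image per request, while UI-TARS sends the screenshot history as multiple images. A sketch of a workaround (the --limit-mm-per-prompt flag exists in this vLLM version, but its exact syntax may vary across releases, and the limit of 5 is an assumption):

# Restart the server allowing several images per request; 5 is an assumed
# value, tune it to the number of screenshots UI-TARS keeps in history.
vllm serve /Users/shaunxu/huggingface/UI-TARS-2B-SFT \
  --limit-mm-per-prompt image=5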
Can you start it successfully on your local M1? Why does startup fail every time on my machine? Error from the machine that fails to start:
File "/Users/a58/miniconda3/lib/python3.12/site-packages/vllm/config.py", line 1527, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type
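"Failed to infer device type" on macOS usually means a GPU (CUDA) build of vLLM was installed; on Apple Silicon the CPU backend currently has to be built from source. A sketch of the usual fix (the requirements file path is an assumption and differs between vLLM versions, e.g. requirements-cpu.txt in older releases):

# Build vLLM from source for the CPU backend on Apple Silicon.
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements/cpu.txt  # path varies by vLLM version
pip install -e .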