Lei
Thanks for your reply, I’ll give it a try later.
@XingWang1234 Install vllm via `pip install vllm` and start the model with Python:

```bash
CUDA_VISIBLE_DEVICES=0,1 nohup python -m vllm.entrypoints.openai.api_server \
  --served-model-name ui-tars \
  --model bytedance-research/UI-TARS-1.5-7B \
  --port 10006 \
  --tensor-parallel-size ...
```
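Once the server above is running, it exposes an OpenAI-compatible endpoint. A minimal sketch of building and sending a request against it, assuming the port (`10006`) and served model name (`ui-tars`) from the launch command; the prompt text is purely illustrative:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the vllm server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }

# Payload for the ui-tars model served above (prompt is illustrative).
payload = build_chat_request("ui-tars", "Click the Submit button.")

# Sending it (requires the server from the command above to be running):
req = urllib.request.Request(
    "http://localhost:10006/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```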
@JjjFangg Do you have any suggestions for launching ByteDance-Seed/UI-TARS-1.5-7B with vllm? I am also running into inaccurate grounding.
@thuqinyj16 When will the new version of UI-TARS be released?