Can't use local LLM in UI.TARS-0.1.0 Windows version
I'm trying to use my local model, but I can't save the settings without providing a VLM API Key.
Try entering "empty" in the VLM API key field and see if that works.
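For reference, here's a quick way to check that a local OpenAI-compatible endpoint accepts a dummy key before wiring it into the UI-TARS settings. This is only a sketch: the base URL assumes LM Studio's default port (1234), and the model name is a placeholder you'd replace with whatever id your server actually reports.

```python
# Minimal connectivity check against a local OpenAI-compatible server.
# Assumes LM Studio's default endpoint http://localhost:1234/v1 and uses
# "empty" as the API key, as suggested above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="empty")

resp = client.chat.completions.create(
    model="ui-tars-1.5-7b",  # placeholder; use the model id your server lists
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(resp.choices[0].message.content)
```

If this prints a reply, the same base URL, key, and model name should be what you enter in the UI-TARS settings.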
Which LLM client are you using?
I tried both llama.cpp and LM Studio. They managed to launch the model, but UI-TARS couldn't communicate with them. (LM Studio screenshot attached.)
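One quick thing to check when UI-TARS can't reach the server is whether the model name it is configured with matches what the endpoint actually serves. A small sketch, again assuming LM Studio's default local URL:

```python
# List the model ids exposed by the local server; the model name entered in
# UI-TARS settings has to match one of these exactly.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="empty")
for m in client.models.list():
    print(m.id)
```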
Ollama and LM Studio use a GGUF version without vision capability (likely mradermacher/UI-TARS-1.5-7B-GGUF). I tried to quantize it myself, but even then llama.cpp can't produce a GGUF of the Qwen2_5_VL architecture with vision. The only success I've had is running vLLM with on-the-fly bnb quantization from the original model (the pre-made bnb quant on HF doesn't work either). But since I ran it from WSL, 8 GB of RAM isn't enough, and CPU offload isn't supported. Right now I'm trying to build an AWQ quant on CPU and run it with LMDeploy, but that will likely fail. So we can only wait (maybe a year), or use vLLM with a bnb/full-precision model.
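For anyone who wants to try the vLLM route described above, here is a rough sketch of loading the original (non-GGUF) checkpoint with on-the-fly bitsandbytes quantization. The model id and keyword arguments are assumptions on my part and may differ between vLLM versions (older releases also required `load_format="bitsandbytes"`), so treat it as a starting point rather than a known-good recipe.

```python
# Sketch: load the original checkpoint and quantize weights at load time
# with bitsandbytes, keeping the context window small to save memory.
from vllm import LLM, SamplingParams

llm = LLM(
    model="ByteDance-Seed/UI-TARS-1.5-7B",  # assumed HF repo id; adjust if different
    quantization="bitsandbytes",            # on-the-fly bnb quantization
    max_model_len=4096,                     # smaller KV cache to fit limited VRAM
)

out = llm.generate(
    ["Describe what UI-TARS does in one sentence."],
    SamplingParams(max_tokens=32),
)
print(out[0].outputs[0].text)
```

To actually point the desktop app at it, you would serve the same model with vLLM's OpenAI-compatible server using the equivalent options, then enter that endpoint in the UI-TARS settings.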