Peng Jiang
A community user encountered this error as well.
Please access port 80 instead of 8080. Refer to https://github.com/gpustack/gpustack/issues/3519
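As a quick sanity check, here is a minimal sketch of verifying the UI on the new port (assuming it is exposed on port 80 of the host; run it on the server or replace localhost with the server address):

```bash
# Verify the GPUStack UI responds on port 80 instead of 8080.
curl -I http://localhost:80
```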
Could you provide more information, including: 1. The complete error screenshot of the playground. 2. The log of the GPUStack container. 3. The log of the model instance you accessed....
@yanhelin, please provide the log file if the issue persists.
DeepSeek-OCR is only supported in vLLM 0.11.1+. The vLLM version in GPUStack v0.7.1 is v0.10.1.1. For an online GPUStack deployment, please edit the model deployment's Advanced settings and enter v0.11.1...
Although vLLM supports the GGUF format, GPUStack only supports GGUF models with llama-box. Besides the GPU utilization, did you see any other difference (total time cost for the same...
Please upgrade to v0.7.1. The default timeout has been changed to 1800s, and you can adjust it with the GPUSTACK_PROXY_TIMEOUT_SECONDS environment variable.
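For reference, a minimal sketch of setting that variable at container start, assuming a Docker-based deployment with the gpustack/gpustack image; keep whatever ports, volumes, and GPU flags your existing run command already uses:

```bash
# Sketch: raise the proxy timeout to 3600 seconds when starting GPUStack.
# Other flags (ports, volumes, GPU options) are omitted here; reuse the ones
# from your current docker run command.
docker run -d --name gpustack \
  -e GPUSTACK_PROXY_TIMEOUT_SECONDS=3600 \
  gpustack/gpustack
```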
It only works in v0.7.1 and above. Please upgrade to v0.7.1 or v2.0.0 first.
Please stop the GPUStack container first, then run this script to check for port conflicts. [check-gateway-port.sh](https://github.com/user-attachments/files/23785814/check-gateway-port.sh)
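If you'd rather check manually before running the script, here is a minimal sketch (this is not the attached check-gateway-port.sh; it assumes a Linux host with the ss utility and that the gateway listens on port 80):

```bash
# After stopping the GPUStack container, list any process still listening on port 80.
# An empty result means no conflict on that port.
sudo ss -tlnp | grep ':80 '
```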
Could you provide all the logs in your /log directory? @orangedeng, please help take a look.