start tabby serve on Docker Desktop, no logs, and cannot open http://localhost:9090/
Describe the bug
Starting tabby serve on Docker Desktop produces no logs, and http://localhost:9090/ cannot be opened.
Information about your version
Please provide output of tabby --version
tabby 0.12.0
Information about your GPU
Please provide output of nvidia-smi
Running on CPU only (no GPU, so no nvidia-smi output).
Additional context
tabby 0.12.0-dev.0 works fine.
I have faced the same issue running tabby with CPU-only capabilities, using the verbatim command from the documentation. The image used was tabbyml/tabby:latest (automatically pulled by Docker, created 11 days ago).
Turning on debug logging and running:
docker run -e RUST_LOG=debug -e RUST_BACKTRACE=1 -it -p 8080:8080 -v ~/.tabby:/data tabbyml/tabby serve --model TabbyML/DeepseekCoder-1.3B --chat-model TabbyML/WizardCoder-3B
the server appears to be stuck in an unending loop:
2024-06-27T14:00:44.245254Z DEBUG hyper_util::client::legacy::connect::http: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.5/src/client/legacy/connect/http.rs:631: connecting to 127.0.0.1:30888
2024-06-27T14:00:44.245269Z DEBUG reqwest::connect: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/reqwest-0.12.4/src/connect.rs:497: starting new connection: http://127.0.0.1:30888/
2024-06-27T14:00:44.245271Z DEBUG hyper_util::client::legacy::connect::http: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.5/src/client/legacy/connect/http.rs:631: connecting to 127.0.0.1:30888
2024-06-27T14:00:44.245286Z DEBUG reqwest::connect: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/reqwest-0.12.4/src/connect.rs:497: starting new connection: http://127.0.0.1:30888/
(this repeats for many minutes; without debug logging enabled, no output is produced at all)
Version 0.12.0 under Linux (CPU).
Confirmed that running the same models and tabby version, but with GPU (CUDA) enabled, works fine.
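To see what the loop is actually waiting on, one thing worth trying is probing the internal llama-server port from inside the container. This is only a sketch: 30888 is the port from the debug log above, the container ID is a placeholder, it assumes curl is present in the image, and /health is llama.cpp's llama-server health endpoint.
# Hypothetical diagnostic; <container-id> is a placeholder.
docker ps                                  # find the running tabby container ID
docker exec -it <container-id> curl -v http://127.0.0.1:30888/health
If the request never gets a response, the llama-server worker likely never came up, which would explain the endless reconnect loop in the debug log.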
Same here.
docker-compose hangs (after successfully fetching):
WARN[0000] /home/user/tabby/docker-compose.yml: the attribute version is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 1/0
 ✔ Container tabby-tabby-1  Created  0.0s
Attaching to tabby-1
I need to press Ctrl+C twice to force it to stop.
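The obsolete-version warning is separate from the hang, but it can be silenced by deleting the top-level version: attribute. A minimal sketch of what the compose file might look like, mirroring the docker run command quoted earlier (the service name, port mapping, and volume path are assumptions, not the exact file from the repo):
# Sketch only; omitting the top-level "version:" attribute
# silences the WARN[0000] message.
services:
  tabby:
    image: tabbyml/tabby
    command: serve --model TabbyML/DeepseekCoder-1.3B --chat-model TabbyML/WizardCoder-3B
    ports:
      - "8080:8080"
    volumes:
      - ~/.tabby:/data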
Running from a shell script shows a repeated, never-ending error message from llama-server:
⠸ 417.481 s Starting...
2024-08-23T06:55:45.417512Z WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:96: llama-server <embedding> exited with status code 127
2024-08-23T06:55:45.417524Z WARN llama_cpp_server::supervisor: crates/llama-cpp-server/src/supervisor.rs:108: <embedding>: /opt/tabby/bin/llama-server: error while loading shared libraries: libcuda.so.1: cannot open shared object file: No such file or directory
So the Docker image evidently tries to run the CUDA build, even though I'm following the CPU instructions from the docs on the homepage.
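If you want to confirm the CUDA linkage without starting the full server, something like the following should work; this assumes ldd is on the image's PATH, and /opt/tabby/bin/llama-server is the path taken from the error above.
# Hypothetical check: list the shared libraries llama-server links against.
# If libcuda.so.1 shows up as "not found", the image ships the CUDA build,
# which matches the error message above.
docker run --rm --entrypoint ldd tabbyml/tabby /opt/tabby/bin/llama-server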
For the CPU use case, use the binary distribution instead. The Docker image is only meant to be used in a CUDA environment.
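A rough sketch of that CPU path follows. The release asset name is an assumption; pick the one matching your platform from https://github.com/TabbyML/tabby/releases, and check tabby serve --help on your version for the --device flag. The model names mirror the docker command quoted earlier.
# Hypothetical CPU setup via the binary distribution; asset name assumed.
curl -LO https://github.com/TabbyML/tabby/releases/download/v0.12.0/tabby_x86_64-manylinux2014
chmod +x tabby_x86_64-manylinux2014
./tabby_x86_64-manylinux2014 serve \
    --model TabbyML/DeepseekCoder-1.3B \
    --chat-model TabbyML/WizardCoder-3B \
    --device cpu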