Offline environment: Tabby freezes without any output
Describe the bug
I would like to use Tabby in an offline environment. However, when I disconnect from the internet and run the command TABBY_ROOT=/tabby_home ./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct, the program freezes without producing any output. Once I reconnect to the internet, the program starts up normally. Does the Tabby program need to access the network every time it is launched, and is there any way to prevent this? I have already downloaded the Tabby model files into the tabby_home directory.
Information about your version
tabby 0.21.1
Information about your GPU
CPU
Hi @magiccaptain, Tabby can operate in an offline environment. However, by default, when you specify a model name in the command arguments, Tabby will attempt to fetch the model from the Tabby Registry online.
Here are two options to consider:
- Follow the instructions at https://tabby.tabbyml.com/docs/references/models-http-api/llama.cpp/ to use an HTTP model, preventing Tabby from downloading one (see the config sketch below).
- Copy the existing ~/.tabby/models/TabbyML/models.json and the three specified model directories ~/.tabby/models/TabbyML/${MODEL_NAME} to the offline environment (see the copy commands below).
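For the first option, the linked page configures an HTTP model backend in Tabby's config.toml. A rough sketch of such an entry, written from memory of that page (the kind value, endpoint, and prompt template here are assumptions; verify them against the linked docs and your model):

cat >> ~/.tabby/config.toml <<'EOF'
[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://localhost:8888"
prompt_template = "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
EOF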
After completing one of these steps, you can run Tabby offline.
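For the second option, a minimal sketch of the copy step, assuming the offline machine is reachable over SSH as offline-host (the host name and model names are placeholders; any other transfer method works just as well):

# run on the online machine where the models were downloaded
MODELS=~/.tabby/models/TabbyML
ssh offline-host 'mkdir -p ~/.tabby/models/TabbyML'
scp -r "$MODELS/models.json" "$MODELS/StarCoder-1B" \
    "$MODELS/Qwen2-1.5B-Instruct" "$MODELS/Nomic-Embed-Text" \
    offline-host:.tabby/models/TabbyML/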
Try blacklisting github.com and raw.githubusercontent.com in your /etc/hosts by adding the following line before you run tabby:
0.0.0.0 github.com raw.githubusercontent.com
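For example, as a one-liner (requires root; remember to remove the entry again when you want Tabby to reach the registry):

echo "0.0.0.0 github.com raw.githubusercontent.com" | sudo tee -a /etc/hosts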
Hi @topcheer, I've conducted the test you mentioned, and it works on my end.
I also performed a test on Linux with the ethernet disabled, and it worked as well.
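For reference, one way to run such a test on Linux, assuming the interface is named eth0 (check ip link for the actual name on your machine):

sudo ip link set eth0 down    # go offline
TABBY_ROOT=/tabby_home ./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
sudo ip link set eth0 up      # back online afterwards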
I installed Tabby in an online environment, and it worked properly. Then I copied the Tabby executable and the .tabby folder to an offline Linux system. However, it reported an "Invalid model_id" error. The error message is as follows:
The application panicked (crashed).
Message: Invalid model_id <TabbyML/starCode-1B>; please consult https://github.com/TabbyML/registry-tabby for the correct model_id
Location: crates/tabby-common/src/registry.rs:168
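Note that the id in the panic message reads starCode-1B, while the registry id used earlier in this thread is StarCoder-1B. The model directories on disk carry the exact ids, so listing them is a quick way to cross-check the spelling:

ls ~/.tabby/models/TabbyML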
Information about your version
tabby v0.24.0-rc.1 tabby_x86_64-manylinux_2_28.tar.gz
Information about your GPU
CPU
Also running tabby serve --device metal --model StarCoder-1B --port 1234 with models previously downloaded:
$ ls ~/.tabby/models/TabbyML
Nomic-Embed-Text StarCoder-1B models.json
I'm using an outgoing firewall, and I'm not able to run the model fully offline.
I tried disabling data collection with export TABBY_DISABLE_USAGE_COLLECTION=1.
When I allow outgoing requests, the problem is solved instantly.
Why is a connection needed?
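One way to answer that empirically is to watch which hosts Tabby tries to contact at startup; a sketch assuming strace is available and the binary sits in the current directory:

# log every connect() syscall made by tabby and its threads, hiding local traffic
strace -f -e trace=connect ./tabby serve --model StarCoder-1B 2>&1 | grep -v 127.0.0.1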
Another workaround: set an impossible proxy after the required models have been downloaded.
Tabby Version 0.25.0
#!/usr/bin/env bash
# launch_tabby.sh
host=127.0.0.1
port=2580
if [ "$1" = "offline" ]; then
    # set an unreachable ("impossible") proxy so outgoing requests fail fast
    export http_proxy=http://127.0.0.99:1
    export https_proxy=http://127.0.0.99:1
    # whitelist: somehow 127.0.0.1 must be included or the completion API breaks (chat is fine)
    export no_proxy=127.0.0.1,localhost
fi
# on the first run, an embedding model is downloaded
exec ./tabby_bin/tabby serve --host "$host" --port "$port" --parallelism 1
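Usage, assuming the script above is saved as launch_tabby.sh next to the tabby_bin directory:

./launch_tabby.sh offline    # block outgoing traffic via the unreachable proxy
./launch_tabby.sh            # normal start, e.g. for the first (downloading) run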
If I remember right, the issue also exists in very early versions of Tabby, and occurs when the network is unstable (i.e., high latency or high packet loss).
I retried in my Linux environment with v0.26.0, and it still works as expected.
An idea occurred to me regarding your network limitation policy: do your policies silently drop outgoing packets? If Tabby attempts to send network packets and they are dropped, that could lead to network timeouts and extended waits for replies.
I have conducted the packet-drop validation, and it appears that Tabby continues to function as expected despite it.
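For anyone who wants to repeat that validation, a minimal sketch using iptables (DROP discards packets silently, unlike REJECT; the port is an assumption about the policy being simulated):

sudo iptables -A OUTPUT -p tcp --dport 443 -j DROP    # silently drop outgoing HTTPS
TABBY_ROOT=/tabby_home ./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
sudo iptables -D OUTPUT -p tcp --dport 443 -j DROP    # remove the rule afterwards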