
Tabby exits with exit code 1 without any errors

Open yurivict opened this issue 1 year ago • 12 comments

Describe the bug

It exits at this point:

📄 Version 0.11.1
🚀 Listening at 0.0.0.0:8080


  JWT secret is not set

  Tabby server will generate a one-time (non-persisted) JWT secret for the current process.
  Please set the TABBY_WEBSERVER_JWT_TOKEN_SECRET environment variable for production usage.

Information about your version

0.11.1

FreeBSD 14.0

yurivict avatar May 17 '24 09:05 yurivict

Hi - since we're not distributing a FreeBSD binary, could you confirm you're building from source, with the v0.11.1 tag?

wsxiaoys avatar May 17 '24 15:05 wsxiaoys

Yes, this is a from-source build of the v0.11.1 tag.

The build was performed in the FreeBSD ports framework.

yurivict avatar May 17 '24 18:05 yurivict

I also tried the Tabby plugin for Vim, but I couldn't get any code suggestions. The help page says that a suggestion is supposed to appear when you stop typing, but this didn't happen.

I used this command line:

    $ tabby serve --model TabbyML/StarCoder-1B

Perhaps it attempts to use the GPU while no GPU is available? Might this be the cause? Is there a compile-time or run-time switch to use only the CPU?

I couldn't easily find in the docs how to only enable CPU inference.

yurivict avatar May 17 '24 18:05 yurivict

Is there a compile-time or run-time switch to use only CPU?

I couldn't easily find in the docs how to only enable CPU inference.

--device <DEVICE>            Device to run model inference [default: cpu] [possible values: cpu, metal]

Setting --device cpu should make it use only the CPU, I guess?
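For example (the model name below is just the one used earlier in this thread; any supported completion model should behave the same):

```shell
# Force CPU-only inference; --device defaults to cpu, but passing it
# explicitly rules out any accidental GPU/Metal selection.
tabby serve --model TabbyML/StarCoder-1B --device cpu
```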

mblarsen avatar May 25 '24 05:05 mblarsen

Ok, so I was using the default - CPU.

It is still unclear why it exits without errors.

yurivict avatar May 25 '24 06:05 yurivict

I've got the same on Fedora Linux with v0.11.1: it exits with "Error 132" and no verbose message. I tried to recompile with CUDA 12.4.1 - the interface comes up, but then issue #2263 happens.

v0.11.1 exits without any delay, with both the CPU and GPU devices.

You should give more information on STDERR, IMHO.
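As an aside, a shell exit status of 132 means 128 + 4, i.e. the process was killed by signal 4 (SIGILL, illegal instruction) - which often points at a binary built with CPU instructions (e.g. AVX) that the host doesn't support. That's my inference, not something confirmed in this thread. You can map the signal number to its name like so:

```shell
# Shell exit statuses above 128 encode death by signal: status - 128.
# For exit status 132 that is signal 4; kill -l maps the number to a name.
kill -l 4
```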

metal3d avatar May 28 '24 13:05 metal3d

Same issue with the latest unstable NixOS tabby v0.11.1.

Changing the model or switching to ROCm doesn't affect the exit code 1 without an error:

  services.tabby = {
    enable = true;
    acceleration = "cpu";
    model = "TabbyML/DeepseekCoder-1.3B";
  };

arnm avatar Jun 01 '24 05:06 arnm

It seems to be broken in general, not specific to any OS.

@wsxiaoys Any chance to get it fixed?

yurivict avatar Jun 01 '24 05:06 yurivict

Hi, please share more information (e.g. set RUST_LOG=debug, and include your docker image tag or a release page link) to help with troubleshooting. Thanks.
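For anyone hitting this, a minimal way to capture such a log (the log file name and model are just examples, not project conventions):

```shell
# Run with verbose logging, capture stdout and stderr to a file,
# then report the exit code tabby returned.
RUST_LOG=debug tabby serve --model TabbyML/StarCoder-1B --device cpu \
  > tabby-debug.log 2>&1
echo "tabby exited with code $?"
```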

wsxiaoys avatar Jun 01 '24 06:06 wsxiaoys

I was able to roll back to tabby v0.11.0, and CPU with the default model is working; trying with ROCm now. I also tried v0.8.3 and v0.10.0 and couldn't get them to work on the first try - it might have been an issue with files v0.11.1 had already created, not quite sure.

  services.tabby =
    let
      tabby_0_11_0 = (import
        (builtins.fetchGit {
          name = "tabby_0_11_0";
          url = "https://github.com/NixOS/nixpkgs/";
          ref = "refs/heads/nixpkgs-unstable";
          # rev = "e89cf1c932006531f454de7d652163a9a5c86668"; #0.8.3
          # rev = "a064513ad395d680ec3d5f56abc4ed30c23150ee"; # 0.10.0
          rev = "3e1464aff56e5c26996e974a0a5702357a01a127"; # 0.11.0
        })
        { system = "x86_64-linux"; }).pkgs.tabby;
    in
    {
      enable = true;
      package = tabby_0_11_0;
    };

arnm avatar Jun 01 '24 07:06 arnm

Hi, please share more information (e.g set RUST_LOG=debug, docker image tag or release page link) to help troubleshooting, thanks.

Hi @wsxiaoys , for me, tabby stops immediately if I run it with --chat-model TabbyML/Deepseek-V2-Lite-Chat:

tabby-1  | 2024-06-04T09:41:03.358298Z ERROR llama_cpp_bindings: crates/llama-cpp-bindings/src/lib.rs:61: Unable to load model: /data/models/TabbyML/Deepseek-V2-Lite-Chat/ggml/model.gguf

It works if I remove the chat-model option.

lirc571 avatar Jun 04 '24 09:06 lirc571

Hi @lirc571 - Deepseek-V2 Lite support was added in 0.12 (currently in RC); it's not supported in 0.11.

wsxiaoys avatar Jun 04 '24 10:06 wsxiaoys