
UI is not working / chatbot-ui does not start

Open alpiua opened this issue 1 year ago • 14 comments

Hello!

Sorry if this is a dumb question; I just found llama-gpt, and this is my very first attempt at running a GPT model locally.

I'm trying it on my M1 Pro MacBook.

I ran run-mac.sh, it downloaded everything, and the server started successfully:

Uvicorn running on http://localhost:3001

If I go to localhost:3001/docs# I can see the API interface.

But when I go to localhost:3001 itself, I get the response

{"detail":"Not Found"}

And in log

INFO: ::1:55660 - "GET / HTTP/1.1" 404 Not Found

I tried to install chatbot-ui externally, but was not able to connect it with this server.
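
What I tried externally was roughly the following (a sketch from memory; the image tag and the OPENAI_API_HOST variable come from the upstream chatbot-ui project, and host.docker.internal assumes Docker Desktop on macOS):

# Hypothetical: standalone chatbot-ui container pointed at the local API
docker run -p 3000:3000 \
  -e OPENAI_API_HOST=http://host.docker.internal:3001 \
  ghcr.io/mckaywrigley/chatbot-ui:main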

From the documentation I gather that chatbot-ui should be installed, but I can't find it running anywhere.

So how can I actually see it running?

alpiua avatar Aug 23 '23 00:08 alpiua

On my machine, using the run-mac.sh script, the API runs on port 3001 while the UI is on port 3000
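
To confirm what is actually listening on each port, a quick sanity check (macOS, assuming lsof is available):

lsof -nP -iTCP:3000 -sTCP:LISTEN   # should be Docker's port proxy for chatbot-ui
lsof -nP -iTCP:3001 -sTCP:LISTEN   # should be the Python API server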

clncy avatar Aug 23 '23 03:08 clncy

Getting the same issue. Logs:

INFO:     Started server process [16475]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:3001 (Press CTRL+C to quit)
INFO:     ::1:64888 - "GET / HTTP/1.1" 404 Not Found
INFO:     ::1:64888 - "GET /favicon.ico HTTP/1.1" 404 Not Found

AbdurrehmanSubhani avatar Aug 23 '23 03:08 AbdurrehmanSubhani

The chatbot-ui is running on http://localhost:3000, not http://localhost:3001

Bethibande avatar Aug 23 '23 05:08 Bethibande

The chatbot-ui is running on http://localhost:3000, not http://localhost:3001

It does not come up while the run-mac.sh script is running.

alpiua avatar Aug 23 '23 10:08 alpiua

I have the same issue, no UI.

Tooflex avatar Aug 23 '23 18:08 Tooflex

I tried port 3000, and it's not working. I modified run-mac.sh and set the port number to 3003 or whatever; it doesn't help. The error message is

INFO:     Uvicorn running on http://localhost:3002 (Press CTRL+C to quit)
INFO:     ::1:64735 - "GET / HTTP/1.1" 404 Not Found
INFO:     ::1:64735 - "GET /favicon.ico HTTP/1.1" 404 Not Found
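
For what it's worth, the port in run-mac.sh seems to be the API server's only, so moving it around can't bring up the UI; the 404 on / is just the API answering. The more telling check is whether anything answers on the UI port at all:

# API (on whatever port run-mac.sh used) should still serve its docs page:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3002/docs
# UI: 000 (connection refused) here means the container never started:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000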

kevingzhang avatar Aug 25 '23 00:08 kevingzhang

I tried port 3000, and it's not working. I modified run-mac.sh and set the port number to 3003 or whatever; it doesn't help. The error message is

INFO:     Uvicorn running on http://localhost:3002 (Press CTRL+C to quit)
INFO:     ::1:64735 - "GET / HTTP/1.1" 404 Not Found
INFO:     ::1:64735 - "GET /favicon.ico HTTP/1.1" 404 Not Found

getting the same error

AbdurrehmanSubhani avatar Aug 25 '23 05:08 AbdurrehmanSubhani

I think it is related to https://github.com/abetlen/llama-cpp-python/issues/520. There is a namespace conflict, and one of the dependent containers exits with an error code, so the current version is broken right now.
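
A quick way to spot a container that died with an error code, as described there:

docker ps -a --format 'table {{.Names}}\t{{.Status}}'   # look for "Exited (1)" or similar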

Nisse123 avatar Sep 06 '23 08:09 Nisse123

Make sure you have Docker installed and running. I initially overlooked that; once I started Docker, chatbot-ui ran fine in a container.
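
An easy way to verify the daemon is actually up before running the script:

docker info >/dev/null 2>&1 && echo "Docker is running" || echo "Docker is NOT running"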

ericpardee avatar Sep 10 '23 01:09 ericpardee

brew install homebrew/cask/docker
brew install docker-compose
Open run-mac.sh and replace every occurrence of "docker compose" with "docker-compose" (a sed one-liner for this is below)
sudo chown [your mac login user]:staff /Users/yee/Library/Caches/pip
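
For the run-mac.sh edit, a one-liner should do it (note BSD sed on macOS needs the empty '' after -i):

sed -i '' 's/docker compose/docker-compose/g' run-mac.sh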

It still doesn't work: {"detail":"Not Found"}

labolado avatar Sep 12 '23 03:09 labolado

@labolado What happens when you run docker ps in Terminal?

You should see something similar to this output:

CONTAINER ID   IMAGE                        COMMAND                  CREATED      STATUS      PORTS                    NAMES
21b91af3d69a   llama-gpt-llama-gpt-ui-mac   "docker-entrypoint.s…"   2 days ago   Up 2 days   0.0.0.0:3000->3000/tcp   llama-gpt-llama-gpt-ui-mac-1

If you don't see it running, check out https://www.docker.com/blog/how-to-fix-and-debug-docker-containers-like-a-superhero/
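
And if the container exists but has exited or keeps restarting, its logs usually say why (the name below is taken from the docker ps output above):

docker logs llama-gpt-llama-gpt-ui-mac-1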

ericpardee avatar Sep 12 '23 18:09 ericpardee

@ericpardee I have managed to get it going, whilst initially having the same issue. My steps were:

  1. Create a "server-mac.sh" bash file containing the server start script and all the variables it needs
  2. Run the shell script with ./server-mac.sh model=code-13b
  3. Debug all resulting errors (all of them were missing-module errors)

My thoughts on what may be happening: the install_llama_cpp_python(){} block doesn't abort the run-mac.sh execution when it encounters errors. Also, the script either doesn't install all the needed packages or installs them to the wrong location when Python is installed via pyenv from brew (i.e. brew install pyenv ---> pyenv install python).
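
Something along these lines would make the failure loud instead of letting the script carry on (an illustrative sketch only, not the actual body of the function in run-mac.sh):

# Sketch: abort run-mac.sh if the install fails, and install against
# whichever python3 is first on PATH (the pyenv one, if it's active).
install_llama_cpp_python() {
  if ! python3 -m pip install llama-cpp-python; then
    echo "error: llama-cpp-python failed to install; aborting" >&2
    exit 1
  fi
}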

I am not skilled in shell scripting, so I am unable to PR a possible fix based on these findings.

My computer:

  1. macOS Sonoma 14.0 on a MacBook Pro M2 Pro (16 cores)
  2. Warp terminal with zsh 5.9
  3. Docker version 20.10.22, build 3a2c30b

Outputs of which (the last line is from pip):

  1. /opt/homebrew/bin/python3
  2. /opt/homebrew/bin/pip3
  3. /usr/local/bin/brew
  4. llama-cpp-python in /opt/homebrew/lib/python3.11/site-packages (0.2.13)

chiefkana avatar Nov 03 '23 08:11 chiefkana

@mayankchhabra Could you please take a look into it? git blame shows you as the main contributor to run-mac.sh

chiefkana avatar Nov 03 '23 08:11 chiefkana

@labolado That is actually the server's normal response when you access it at localhost:3001. As long as the server responds with anything, the UI shows the dropdown to choose between the available models.
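
You can see exactly what the dropdown is fed by querying the models endpoint directly (llama-cpp-python's server exposes an OpenAI-compatible route for this):

curl http://localhost:3001/v1/models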

chiefkana avatar Nov 03 '23 08:11 chiefkana