llama-gpt
UI is not working / chatbot-ui does not start
Hello!
Sorry for the dumb question, maybe; I just found llama-gpt, and this is my very first try at running a GPT model locally.
Trying it on my M1 Pro MacBook.
I ran run-mac.sh, it downloaded everything, and the server started successfully:
Uvicorn running on http://localhost:3001
If I go to localhost:3001/docs# - I can see API interface
When I go to localhost:3001 itself, I have a response
{"detail":"Not Found"}
And in log
INFO: ::1:55660 - "GET / HTTP/1.1" 404 Not Found
I tried installing chatbot-ui externally, but was not able to connect it to this server.
From the documentation I assume chatbot-ui should be installed, but I can't find it running anywhere.
So how can I actually see it running?
On my machine, using the run-mac.sh script, the API runs on port 3001 while the UI is on port 3000.
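To verify which port is which, here is a quick sketch that probes both default ports using bash's built-in /dev/tcp redirection (no curl needed). The port numbers are the llama-gpt defaults and will differ if you changed them:

```shell
#!/usr/bin/env bash
# Probe a local TCP port via bash's /dev/tcp redirection.
# Ports below are the llama-gpt defaults: 3000 (UI), 3001 (API).
probe() {
  local port="$1"
  if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
}

probe 3000   # chatbot-ui
probe 3001   # llama-cpp-python API
```

If 3000 reports closed, the UI container never came up, which matches the symptoms in this thread.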
Getting the same issue. Logs:
INFO: Started server process [16475]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:3001 (Press CTRL+C to quit)
INFO: ::1:64888 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:64888 - "GET /favicon.ico HTTP/1.1" 404 Not Found
The chatbot-ui is running on http://localhost:3000 not http://localhost:3001
It does not come up while the run-mac.sh script is running.
I have the same issue no UI
I tried port 3000; it's not working. I modified run-mac.sh and set the port number to 3003 (or other values), but it doesn't help. The error message is:
INFO: Uvicorn running on http://localhost:3002 (Press CTRL+C to quit)
INFO: ::1:64735 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:64735 - "GET /favicon.ico HTTP/1.1" 404 Not Found
getting the same error
I think it is related to https://github.com/abetlen/llama-cpp-python/issues/520. There is a namespace conflict, and one of the dependent containers exits with an error code, so the current version is broken right now.
Make sure you have Docker installed and running. I initially overlooked that; once I started Docker, chatbot-ui ran fine in a container.
brew install homebrew/cask/docker
brew install docker-compose
Open run-mac.sh and replace every "docker compose" with "docker-compose".
sudo chown [your mac login user]:staff /Users/yee/Library/Caches/pip
Still not working: {"detail":"Not Found"}
@labolado
What happens when you run docker ps in Terminal?
You should see something similar to this output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21b91af3d69a llama-gpt-llama-gpt-ui-mac "docker-entrypoint.s…" 2 days ago Up 2 days 0.0.0.0:3000->3000/tcp llama-gpt-llama-gpt-ui-mac-1
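As a sketch of how to pull just the relevant line out of that output, here the sample above is embedded as a string so the snippet runs without Docker; in practice you would pipe `docker ps` directly, and your container name and ports may differ:

```shell
#!/usr/bin/env bash
# Sample `docker ps` output from above, embedded so the sketch is
# self-contained; normally you would use the live command instead.
sample='CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21b91af3d69a llama-gpt-llama-gpt-ui-mac "docker-entrypoint.s…" 2 days ago Up 2 days 0.0.0.0:3000->3000/tcp llama-gpt-llama-gpt-ui-mac-1'

# Print the name and port mapping of the UI container, if present.
printf '%s\n' "$sample" | awk '/llama-gpt-ui/ {print $NF, $(NF-1)}'
# -> llama-gpt-llama-gpt-ui-mac-1 0.0.0.0:3000->3000/tcp
```

On a live system, `docker ps --filter name=llama-gpt-ui` answers the same question directly.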
If you don't see it running, check out this https://www.docker.com/blog/how-to-fix-and-debug-docker-containers-like-a-superhero/
@ericpardee I have managed to get it going, whilst initially having the same issue. My steps were:
- Create a "server-mac.sh" bash file containing the server start script and all needed variables
- Run the shell script with ./server-script.sh model=code-13b
- Debug all resulting errors (all of them were missing-module errors)
My thinking on what is happening: the install_llama_cpp_python(){} block doesn't stop run-mac.sh execution when it encounters errors. Also, the script either does not install all the needed packages or installs them to the wrong location when Python is installed via pyenv from brew (i.e. brew install pyenv ---> pyenv install python).
I am not skilled enough in shell scripting to PR a possible fix for these findings.
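The failure mode described above can be sketched in a few lines. The function name mirrors run-mac.sh, but the body is a stand-in, not the real script:

```shell
#!/usr/bin/env bash
# Without a guard, a failed install step does not stop the script and the
# server would start against a broken environment. `false` below simulates
# `pip install llama-cpp-python` failing.
install_llama_cpp_python() {
  false
}

run() {
  if ! install_llama_cpp_python; then
    echo "install failed, not starting the server"
    return 1
  fi
  echo "starting server"
}

run || true   # prints: install failed, not starting the server
```

An alternative is putting `set -euo pipefail` at the top of the script, which aborts on any unguarded failing command.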
My computer:
- MacOS Sonoma 14.0 on MacBook Pro M2 Pro (16 cores)
- Warp terminal with zsh 5.9
- Docker version 20.10.22, build 3a2c30b
Outputs of which:
- /opt/homebrew/bin/python3
- /opt/homebrew/bin/pip3
- /usr/local/bin/brew
- llama-cpp-python in /opt/homebrew/lib/python3.11/site-packages (0.2.13)
@mayankchhabra Could you please take a look at it? git blame shows you as the main contributor to run-mac.sh.
@labolado That is actually the normal response of the server when you access it at localhost:3001. When the server responds with anything, the UI shows a dropdown to choose between the available models.
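A quick way to see what the UI consumes is to hit the models endpoint yourself. This assumes the default port 3001 and that the llama-cpp-python server exposes the OpenAI-compatible /v1/models route (which the UI's model dropdown appears to be populated from):

```shell
# List the models the server advertises; falls back to a message when
# nothing is listening on the assumed port.
curl -s http://localhost:3001/v1/models || echo "server not reachable on :3001"
```

A JSON response here, rather than a 404, confirms the API side is healthy even though the root path returns {"detail":"Not Found"}.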