openui
Ollama NOT working locally, Codespaces, Docker
The main problem is that the OpenAI package is taking priority over Ollama: it keeps complaining about the API key. All projects that use the official openai package have this issue with Ollama.
https://github.com/wandb/openui/issues/18#issuecomment-2034503485
Please add an Ollama server option in the OpenUI settings so it can be forced. By default it doesn't check for Ollama and just says the OpenAI key is missing/wrong.
Hmm, I just tried Ollama in Codespaces and it's working for me; I can see it in the settings dialog.
It sounds like what you're running into is the requirement for an OPENAI_API_KEY? I mentioned this in the README, but you can just set that to xxx, i.e. OPENAI_API_KEY=xxx python -m openui. Let me know if that doesn't work or you're having other issues with Ollama.
I cannot access Ollama either. I have set the key to xxx in the docker run command.
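For reference, a rough sketch of how that docker run could look. The image name and published port are assumptions based on the README, and OLLAMA_HOST with host.docker.internal is the usual way for a container to reach an Ollama server running on the host (on Linux, --network host is the equivalent):

```sh
# hypothetical example: pass the placeholder key and point the container at the host's Ollama
docker run --rm -p 7878:7878 \
  -e OPENAI_API_KEY=xxx \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  ghcr.io/wandb/openui
```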
Latest update is working, thank you. (On Codespaces it's very slow, as usual 👍 :)
Inside a terminal, pull the models for Ollama first, then run export OPENAI_API_KEY=xxx followed by python -m openui from the main folder (see the sketch below).
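A minimal sketch of that sequence; the model name is just an example, use whichever model you want Ollama to serve:

```sh
ollama pull codellama       # pull a model for Ollama first
export OPENAI_API_KEY=xxx   # placeholder key so the openai package stops complaining
python -m openui            # run from the main folder
```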
I did a local installation of Ollama with the codellama model. I tried the Docker setup first, but it was not able to connect to the Ollama server to fetch the list of models. I then used a venv, installed the packages locally, and ran the server locally.
That way I got the models list from Ollama, but the completions API (/v1/chat/completions) failed with a 500 Internal Server Error.
The Ollama package's ollama.chat(**data) call failed with a 404 error; it was pointing at the /api/chat endpoint. Because of that, the response in the browser ended in RuntimeError: Attempted to call a sync iterator on an async stream.
Has anyone else faced this?
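One quick sanity check (not from this thread, and assuming the stock Ollama port 11434) is to hit the Ollama server directly and confirm it exposes the endpoints the client expects; older Ollama releases predate /api/chat, which shows up as exactly this kind of 404:

```sh
# list locally pulled models; this is what the settings dialog needs
curl http://localhost:11434/api/tags

# verify the chat endpoint exists (missing on old Ollama versions)
curl http://localhost:11434/api/chat -d '{
  "model": "codellama",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}'
```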
For Docker, get into the container terminal and run export OPENAI_API_KEY=xxx, kill Ollama, then re-run ollama serve and try again (see the sketch below).
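Roughly, assuming the container is named openui (the name here is hypothetical), that looks like:

```sh
docker exec -it openui /bin/bash   # get a shell inside the container
export OPENAI_API_KEY=xxx          # placeholder key
pkill ollama                       # kill the running Ollama process
ollama serve &                     # restart the Ollama server, then retry in the browser
```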
I got a 500 error when it could not find any OpenAI key, or when Ollama took too long to respond. I recommend running on a GPU instead of a CPU for faster responses.
Note for the devs: increase the reply timeout when Ollama is active.
@hirenchauhan2 and @BeeTwenty, those errors look like a potential Ollama compatibility issue. Can you verify you're running a fairly recent version of Ollama, or share which version you have? You can get it by running:
ollama --version
Hey guys, I just updated the README with instructions for running via Docker Compose. That might be the easiest path, albeit slow.
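A minimal sketch of that flow, assuming the docker-compose.yml described in the README and that the Ollama service in it is named ollama (both are assumptions here):

```sh
docker compose up -d                                 # start OpenUI and Ollama together
docker compose exec ollama ollama pull codellama     # pull a model into the Ollama service
```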
@vanpelt Yeah, it was a compatibility issue. I was using an older version of Ollama, 0.1.1. I updated to the newer version, 0.1.30, and it started working. I don't have a GPU on my machine, so it's slow, which is understandable, but I keep getting the timeout error. Is that set in the backend or the frontend? I haven't checked the code yet, hence the question about the timeout. If we could remove the timeouts just for Ollama, that would be better.
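For anyone on Linux hitting the same version problem, re-running the official install script upgrades Ollama in place (on macOS or Windows, download the newer app instead):

```sh
curl -fsSL https://ollama.com/install.sh | sh   # official install script, also upgrades existing installs
ollama --version                                # should now report a recent release, e.g. 0.1.30 or newer
```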