kiran-chinthala
> So you did ollama serve and then tried to run python3 devika.py or reverse?

Yes, in that order: first `ollama serve`, then I started the Devika server.
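For what it's worth, a quick sanity check before starting Devika is to confirm that `ollama serve` is actually answering and that llama2 is pulled. A minimal sketch, assuming the default Ollama address `http://localhost:11434` and the `requests` package:

```python
# Check that the local Ollama server is up and that llama2 is available
# before launching Devika. Assumes Ollama's default address.
import requests

OLLAMA_URL = "http://localhost:11434"  # adjust if Ollama listens elsewhere

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is running; models:", models)
if not any(name.startswith("llama2") for name in models):
    print("llama2 is not pulled yet; run `ollama pull llama2` first")
```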
> Does
>
> Ollama run llama2
>
> work in your machine?

Yes, it does. Below is the command and its output:

$ **ollama run llama2**
>>> tell me...
> you have to update ollama url if it's not the default one

python3 devika.py
24.04.01 18:39:41: root: INFO : Initializing Devika...
24.04.01 18:39:41: root: INFO :...
> Ollama and Devika are on the same network, and can be accessed from the Devika container using curl

I agree with your point; if those are on the same network...
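As a rough Python equivalent of that curl check, run from wherever the Devika backend lives (a sketch, assuming Ollama's default port and that llama2 is already pulled; replace the host with whatever the container actually sees):

```python
# Curl-equivalent reachability test: ask Ollama to generate a short reply
# so we know the two services can really talk to each other.
import requests

OLLAMA_URL = "http://localhost:11434"  # replace with the Ollama host as seen from the Devika container

payload = {"model": "llama2", "prompt": "Reply with the single word: pong", "stream": False}
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```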
Thanks, Davide, for the reply. I have tried both LiteLLM and Ollama on its own, and in both cases I got a similar response. However, I will try with these properties in...
Yes, I just updated to `litellm==v1.34.22` and selected llama2 from the model list, but I am getting this error on the LiteLLM side. Below are the logs.

INFO:...
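To narrow down whether the failure is inside the LiteLLM proxy or on Devika's side, it may help to hit the proxy directly and bypass Devika entirely. A minimal sketch, assuming the proxy is serving its OpenAI-compatible endpoint on `http://localhost:4000` with llama2 routed through Ollama:

```python
# Send a chat completion straight to the LiteLLM proxy, bypassing Devika,
# and print whatever status and body come back.
import requests

payload = {
    "model": "ollama/llama2",
    "messages": [{"role": "user", "content": "ping"}],
}
resp = requests.post("http://localhost:4000/chat/completions", json=payload, timeout=120)
print("HTTP", resp.status_code)
print(resp.text)
```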
> I have made a PR to try and fix this on the backend #689.
>
> I changed the backend llm so that if the user has `LLM_API_KEY="ollama"` in...
I just tested with the latest code pull and I am getting this error while connecting to Ollama.

config.toml:

LLM_API_KEY="ollama"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:4000"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="llama2"

After the backend server starts, I am getting this error. I...
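One more pre-flight check that might help here: read LLM_BASE_URL out of config.toml and confirm something is actually listening there before the backend starts. A sketch, assuming the flat key layout shown above, config.toml in the current directory, and Python 3.11+ for tomllib:

```python
# Verify that the endpoint Devika will call (here the LiteLLM proxy on :4000)
# is reachable before starting the backend.
import tomllib
import requests

with open("config.toml", "rb") as f:
    cfg = tomllib.load(f)

base_url = cfg["LLM_BASE_URL"]
try:
    resp = requests.get(base_url, timeout=5)
    print(f"{base_url} answered with HTTP {resp.status_code}")
except requests.ConnectionError as exc:
    print(f"Nothing listening at {base_url}: {exc}")
```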