quivr
[Bug]: Cannot run with Ollama locally
What happened?
I am trying to run Quivr with Ollama locally.
When I build Quivr in Docker, I first receive an error about rate limiting, which I assume is due to not using an OpenAI key; I had assumed I would not need one when running locally. I can still log in to Quivr and the application keeps running, so I am not sure whether or not this is actually a problem.
However, when I write a message to a brain it always returns an empty message, and I receive the error "Failed to resolve 'host.docker.internal'". I have tried setting OLLAMA_API_BASE_URL to http://host.docker.internal:11434, http://localhost:11434, and http://127.0.0.1:11434, but I get errors for all of them.
I am running all the code on a remote computer and accessing the localhost pages from my local machine, if that could be relevant.
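A minimal reachability check for the embeddings endpoint, run from inside the backend container, looks roughly like this (a sketch only: the base URL and model name below are placeholders for whatever is configured, and it assumes the `requests` package is installed):

```python
# Rough sketch of a reachability check for the Ollama embeddings endpoint.
# Placeholders: adjust base_url to your OLLAMA_API_BASE_URL value and
# model to a model that has actually been pulled in Ollama.
import requests

base_url = "http://host.docker.internal:11434"
try:
    resp = requests.post(
        f"{base_url}/api/embeddings",
        json={"model": "llama2", "prompt": "ping"},
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])
except requests.exceptions.ConnectionError as exc:
    # A NameResolutionError here reproduces the failure seen in the logs below.
    print(f"Cannot reach {base_url}: {exc}")
```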
Relevant log output
# Rate limitation error
supabase-vector | 2023-12-15T12:33:43.446827Z WARN sink{component_kind="sink" component_id=logflare_db component_type=http component_name=logflare_db}:request{request_id=0}:http: vector::internal_events::http_client: HTTP error. error=error trying to connect: dns error: failed to lookup address information: Try again error_type="request_failed" stage="processing" internal_log_rate_limit=true
supabase-vector | 2023-12-15T12:33:43.446879Z WARN sink{component_kind="sink" component_id=logflare_db component_type=http component_name=logflare_db}:request{request_id=0}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: dns error: failed to lookup address information: Try again internal_log_rate_limit=tr
# 'host.docker.internal' error
200 OK"
backend-core | INFO: 172.21.0.1:59162 - "GET /chat HTTP/1.1" 200 OK
backend-core | 2023-12-15 12:31:48,943 [ERROR] llm.knowledge_brain_qa: Caught exception: Error raised by inference endpoint: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7fe316809c50>: Failed to resolve 'host.docker.internal' ([Errno -2] Name or service not known)"))
backend-core | 2023-12-15 12:31:48,943 [ERROR] llm.knowledge_brain_qa: Error processing source documents: 'NoneType' object has no attribute 'get'
supabase-kong | 172.21.0.18 - - [15/Dec/2023:12:31:48 +0000] "PATCH /rest/v1/chat_history?message_id=eq.3db81a10-6b19-4b51-b1f8-faddd2f7338b HTTP/1.1" 200 261 "-" "python-httpx/0.24.1"
Twitter / LinkedIn details
No response
🤖
Hello @CarolineBukk,
Nice to meet you! I'm Dosu, an AI bot assigned to assist with issues and inquiries on this repository. I'm here to help you out while we wait for a human maintainer. Now, let's look into this issue you're facing with Quivr.
The error "Failed to resolve 'host.docker.internal'" typically occurs when you're trying to access the host machine from within a Docker container. This hostname ('host.docker.internal') is a special internal DNS name used by Docker to refer to the host machine. However, this feature is not available on all platforms. It's primarily supported on Docker for Windows and Docker for Mac, but not on Docker for Linux.
In your case, if you're running Quivr with ollama locally on a Linux machine, the hostname 'host.docker.internal' will not be resolved, leading to the error you're seeing.
To fix this issue, you can use the special IP address '172.17.0.1' instead of 'host.docker.internal' to refer to the host machine from within the Docker container. This IP address is typically the default gateway for the Docker network and can be used to access services running on the host machine.
However, please note that this solution might not work in all scenarios, as the default gateway IP can change depending on your Docker network configuration. If '172.17.0.1' doesn't work, you can inspect your Docker network to find the correct IP.
Here's how you can do it:
- Run docker network inspect bridge to inspect the default Docker network.
- Look for the "Gateway" field in the output. This is the IP you can use instead of 'host.docker.internal'.
Please replace 'host.docker.internal' with the correct IP in your OLLAMA_API_BASE_URL setting and try running Quivr again.
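If you prefer to script that lookup, a small sketch like the following pulls the gateway IP out of the inspect output (run it on the Docker host; it assumes the docker CLI is on PATH and the default network is named "bridge"):

```python
# Sketch: read the default bridge network's gateway IP via the Docker CLI.
# Assumes the `docker` binary is on PATH; run this on the Docker host.
import json
import subprocess

output = subprocess.check_output(["docker", "network", "inspect", "bridge"])
network = json.loads(output)[0]  # inspect returns a list of networks
gateway = network["IPAM"]["Config"][0]["Gateway"]
print(f"Try OLLAMA_API_BASE_URL=http://{gateway}:11434")
```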
As for OLLAMA_API_BASE_URL, it is a configuration variable that sets the base URL for the Ollama API, which is likely used for some functionality within the Quivr application. In the context of Docker, this environment variable configures the container that runs Quivr: when the container starts, it reads the configuration from the .env file and sets the environment variables accordingly. If OLLAMA_API_BASE_URL is uncommented and set to a specific value, the Quivr application inside the container will use that value as the base URL for the Ollama API.
Please note that the exact role of OLLAMA_API_BASE_URL in the Quivr application's functionality cannot be determined from the provided context. If you have more information about how you're using the Ollama API in your setup, it would be helpful to provide that.
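As a rough illustration only (this is not Quivr's actual code), the usual pattern is that the application reads the variable from its environment, which Docker Compose populates from the .env file:

```python
# Illustrative sketch, not Quivr's implementation: read OLLAMA_API_BASE_URL
# from the environment (populated from .env by docker compose) and fall back
# to a default when it is unset.
import os

ollama_api_base_url = os.environ.get(
    "OLLAMA_API_BASE_URL", "http://host.docker.internal:11434"
)
print(f"Using Ollama endpoint: {ollama_api_base_url}")
```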
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
I tried replacing host.docker.internal with the gateway IP, but I got the same error message. I think the main problem may be that it cannot find /api/embeddings for Ollama. Do you have any more suggestions on how to resolve this?
Updating to the most recent version of Ollama2 should fix the problem. In my case I am using the Ollama2 server remotely, and after updating it, it worked for me.
Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.