[Question]: Ollama local model integration
Describe your problem
Hi Team,
We're working on integrating an Ollama model deployed locally. Querying the local Ollama directly gives us correct responses, but when we configure its base URL in the 'Add Model' tab in the UI, it doesn't work. Has anyone tested this feature before? We followed the instructions in the link below to integrate local models.
https://ragflow.io/docs/dev/deploy_local_llm
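For anyone reproducing this: whatever base URL you enter in 'Add Model' has to be reachable from where RAGFlow itself runs (often a Docker container), not just from your own shell. A quick reachability check, assuming Ollama's default port 11434 (replace localhost with the host you intend to configure):

# should return a JSON list of the installed models, not "connection refused"
curl http://localhost:11434/api/tags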
You need to host Ollama on a server, and then you can use its URL.
We did the same and it's still not working. Is it working for you? Can you share detailed steps?
After hosting it, you can access it in the browser. Use that URL in the configuration.
I'm getting connection refused, even though my Ollama server is running.
Localhost will not work. Host it on a server and then it will work.
I met the same problem, and I solved it as follows:
1. Check your Ollama config (/etc/systemd/system/ollama.service).
2. Add Environment="OLLAMA_HOST=0.0.0.0" in the [Service] section.
3. Reload the config: systemctl daemon-reload, then systemctl restart ollama.
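For clarity, a minimal sketch of what the relevant part of /etc/systemd/system/ollama.service looks like after step 2; only the Environment line is added, everything else in your existing unit file stays as it is:

[Service]
# listen on all interfaces instead of only 127.0.0.1,
# so RAGFlow (in Docker or on another machine) can reach Ollama
Environment="OLLAMA_HOST=0.0.0.0"
# your existing ExecStart / User / Group lines remain unchanged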
In addition to steps 1-3 above:
4. Download the model to the server: in the Ollama server's terminal, run 'ollama pull llama3.1'.
5. Configure RAGFlow with the model type, the model name, and the Ollama server URL.
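To double-check steps 4 and 5 before going back to the RAGFlow UI, you can confirm the model is actually available on the server (assuming the default port and the llama3.1 model mentioned above):

# on the Ollama server: download the model
ollama pull llama3.1

# from the machine (or container) where RAGFlow runs: the pulled model should show up here
curl http://<YOUR_SERVER_IP>:11434/api/tags

The model name you type into 'Add Model' should correspond to one of the names returned by /api/tags (for example, llama3.1).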
solved!!!
In the UI, try setting the Base URL to http://host.docker.internal:11434 instead of 127.0.0.1 or localhost: inside the RAGFlow container, localhost points to the container itself, not to the host where Ollama is running.
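If host.docker.internal still does not resolve from inside the RAGFlow container (this can happen with Docker on Linux), a quick check is the following, assuming the container is named ragflow-server (adjust to your setup) and that curl is available inside it:

# can the RAGFlow container reach the host's Ollama through host.docker.internal?
docker exec ragflow-server curl -s http://host.docker.internal:11434

# if the name does not resolve, one option is to map it to the host gateway explicitly,
# e.g. the docker run flag --add-host=host.docker.internal:host-gateway
# (or an equivalent extra_hosts entry in docker compose)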
You can solve this issue with the following instructions:
sudo nano /etc/systemd/system/ollama.service   (add Environment="OLLAMA_HOST=0.0.0.0" under the [Service] section)
sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl status ollama
sudo netstat -tuln | grep 11434
Test: curl http://<YOUR_SERVER_IP>:11434
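If that curl test passes, a slightly stronger end-to-end check before returning to the RAGFlow UI is to call the generate endpoint directly (assuming the llama3.1 model from the earlier reply has already been pulled):

curl http://<YOUR_SERVER_IP>:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Say hello",
  "stream": false
}'

# a JSON answer here means the model is reachable end to end;
# the same http://<YOUR_SERVER_IP>:11434 value is what goes into the Base URL field in 'Add Model'.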