
[Question]: Ollama local model integration

Open · kranthicdac opened this issue 1 year ago · 10 comments

Describe your problem

Hi Team,

We're working on integrating an Ollama model deployed locally. Querying the local Ollama directly returns correct responses, but when we configure its base URL in the 'Add Model' tab of the UI, the connection does not work. Has anyone tested this feature before? We followed the instructions in the link below to integrate the local models.

https://ragflow.io/docs/dev/deploy_local_llm

kranthicdac commented Aug 22 '24

You need to host Ollama on a server, and then you can use its URL.

saineshwar commented Aug 22 '24

We did the same, but it's still not working. Is it working for you? Can you share detailed steps?

kranthicdac commented Aug 22 '24

After hosting it, you can access it in the browser. Use that URL in the configuration.

[screenshot: Ollama reachable in the browser]

saineshwar commented Aug 22 '24
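A quick way to confirm the hosted Ollama is reachable before touching RAGFlow (the server address below is a placeholder; 11434 is Ollama's default port):

    # A healthy, reachable Ollama server answers with the plain text "Ollama is running".
    curl http://<OLLAMA_SERVER_IP>:11434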


I'm getting 'connection refused', even though my Ollama server is running.

kranthicdac commented Aug 22 '24

Localhost will not work. Host it on a server and then it will work.

saineshwar commented Aug 22 '24

> You need to host Ollama on a server, and then you can use its URL.

I met the same problem and solved it as follows:

1. Check your Ollama config: /etc/systemd/system/ollama.service
2. Add Environment="OLLAMA_HOST=0.0.0.0" under the [Service] section.
3. Reload the config: systemctl daemon-reload, then systemctl restart ollama

ThierryHenry1994 commented Aug 23 '24
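For reference, a minimal sketch of the same change applied as a systemd drop-in instead of editing ollama.service in place; the drop-in directory follows standard systemd convention, but the override filename is arbitrary:

    # Create a drop-in so the setting survives upgrades that rewrite ollama.service.
    sudo mkdir -p /etc/systemd/system/ollama.service.d
    printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | \
      sudo tee /etc/systemd/system/ollama.service.d/override.conf

    # Apply the change and confirm Ollama now listens on all interfaces.
    sudo systemctl daemon-reload
    sudo systemctl restart ollama
    ss -tlnp | grep 11434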

> You need to host Ollama on a server, and then you can use its URL.

> I met the same problem and solved it as follows: 1. check /etc/systemd/system/ollama.service; 2. add Environment="OLLAMA_HOST=0.0.0.0" under [Service]; 3. run systemctl daemon-reload and systemctl restart ollama.

4. Download the model to the server: in the Ollama server's terminal, run 'ollama pull llama3.1'.
5. Configure RAGFlow with the model type, model name, and the Ollama server URL.

sergiomaciel commented Aug 26 '24
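A short sketch of steps 4 and 5, assuming llama3.1 as in the comment above; the field values are illustrative and should match whatever model you actually pulled:

    # Step 4: pull the model on the Ollama server and confirm it is listed.
    ollama pull llama3.1
    ollama list

    # Step 5: in RAGFlow's "Add Model" dialog for Ollama, use values along these lines:
    #   Model type: chat
    #   Model name: llama3.1
    #   Base URL:   http://<OLLAMA_SERVER_IP>:11434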

> You need to host Ollama on a server, and then you can use its URL.

> I met the same problem and solved it as follows: 1. check /etc/systemd/system/ollama.service; 2. add Environment="OLLAMA_HOST=0.0.0.0" under [Service]; 3. run systemctl daemon-reload and systemctl restart ollama.

Solved!!!

Yuxiang1990 commented Nov 25 '24

> We're working on integrating an Ollama model deployed locally. Querying the local Ollama directly returns correct responses, but when we configure its base URL in the 'Add Model' tab of the UI, the connection does not work. We followed the instructions at https://ragflow.io/docs/dev/deploy_local_llm to integrate the local models.

In the UI, try to set Base URL to http://host.docker.internal:11434 instead of 127.0.0.1 or localhost.

blakkd commented Dec 19 '24
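To check that the RAGFlow container can actually reach that address, a hedged test from inside the container (the container name ragflow-server is an assumption; adjust it to your compose setup). On Linux, host.docker.internal usually also needs an extra_hosts mapping:

    # List Ollama's models from inside the RAGFlow container; a JSON reply means
    # the same Base URL should work from the "Add Model" dialog.
    docker exec -it ragflow-server curl -s http://host.docker.internal:11434/api/tags

    # On Linux, host.docker.internal is not defined by default. Add this to the
    # RAGFlow service in docker-compose.yml and recreate the containers:
    #   extra_hosts:
    #     - "host.docker.internal:host-gateway"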

> You need to host Ollama on a server, and then you can use its URL.

> I met the same problem and solved it as follows: 1. check /etc/systemd/system/ollama.service; 2. add Environment="OLLAMA_HOST=0.0.0.0" under [Service]; 3. run systemctl daemon-reload and systemctl restart ollama.

You can solve this issue with the following steps:

1. Edit the service file and bind Ollama to all interfaces:
   sudo nano /etc/systemd/system/ollama.service
   (in the [Service] section add: Environment="OLLAMA_HOST=0.0.0.0")

2. Reload systemd, restart Ollama, and check that it is listening on port 11434:
   sudo systemctl daemon-reload
   sudo systemctl restart ollama
   sudo systemctl status ollama
   sudo netstat -tuln | grep 11434

3. Test it:
   curl http://<YOUR_SERVER_IP>:11434

Remember12344 commented Feb 21 '25
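Beyond the port check, a hedged end-to-end test of the Ollama API itself (this assumes a model such as llama3.1 has already been pulled):

    # Models the server is currently serving.
    curl http://<YOUR_SERVER_IP>:11434/api/tags

    # One non-streaming completion; a JSON answer here means the same URL should
    # work as the Base URL in RAGFlow's "Add Model" dialog.
    curl http://<YOUR_SERVER_IP>:11434/api/generate \
      -d '{"model": "llama3.1", "prompt": "Hello", "stream": false}'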