
Not working out of the box!

NSP-0123456 opened this issue 1 year ago · 8 comments

The current GitHub repository does not seem to embed any Ollama engine, and no quick installation document is provided.

Please provide a clear and concise description of the prerequisites, along with an Ollama installation and configuration document. A better approach could also be to embed the Ollama install script inside this repository for Docker.

Currently the instructions are of little use, as the project does not work out of the box.

Please elaborate on the Ollama part. I do not have any instance on my machine, and if I get the latest one from Docker Hub with docker pull ollama/ollama, it is still not working.
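For reference, pulling the image alone starts nothing; a minimal sketch of actually running Ollama in Docker (the usual invocation from the Ollama image documentation, not something this repo ships) would look roughly like this:

# start the Ollama server in a container, exposing its default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# pull a model inside that container
docker exec -it ollama ollama pull mistral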

NSP-0123456 · Aug 27 '24 06:08

Two big steps: first, run Ollama with a model (install Ollama and use open-webui to manage it); second, run the Docker instance of LLocalSearch.

1. Install Ollama (ollama).
2. With Docker, install open-webui by running these commands in a shell (cmd):

   git clone https://github.com/open-webui/open-webui
   cd open-webui
   docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

   Go to http://localhost:3000/, open the admin panel at the bottom left, go to Settings, and in the Models section enter mistral:v0.3, then click the download icon on the right. You are now set with Ollama. If you want another model, go to https://ollama.com/ and search for your model, then pick the tag you want (like llama3.1:8b, but that one didn't work well for me).
3. Now LLocalSearch. Go back to the main folder in the shell (cd ..), then:

   git clone https://github.com/nilsherzig/LLocalSearch

   Edit docker-compose.yaml to change the port on line 20, from '3000:80' to '3001:80', then run:

   docker-compose up -d

   Go to http://localhost:3001/chat/new. In the top right, you should be able to select your mistral model under "The agent chain is using the model", then close. It should be working; try asking something. (A quick check that Ollama is listening is sketched below.)
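Once Ollama is up, a quick sanity check from the host can confirm that it is listening before starting LLocalSearch (a sketch, assuming the default port 11434):

# Ollama answers "Ollama is running" on its root endpoint
curl http://localhost:11434
# list the models that are available locally
curl http://localhost:11434/api/tags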

Arnaud3013 · Sep 03 '24 20:09

You want to set: OLLAMA_HOST

Please review OLLAMA_GUIDE.md
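For example, an .env entry along these lines (an illustration only; the right value depends on where Ollama actually runs, see OLLAMA_GUIDE.md):

# Ollama running on the Docker host, outside of Docker
OLLAMA_HOST=http://host.docker.internal:11434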

gardner · Sep 04 '24 06:09

I am quite frustrated. I have followed all the instructions to the letter and am still getting:

Model nomic-embed-text:v1.5 does not exist and could not be pulled: Post "http://0.0.0.0:11434/api/pull": dial tcp 0.0.0.0:11434: connect: connection refused

I have updated the .env file to include: OLLAMA_HOST=host.docker.internal:11434

I have updated line 20 in docker-compose to: 3001:80

I am not running Ollama in Docker; it is the Windows install. When I navigate to http://host.docker.internal:11434/, Ollama is running.

I have set the environment variable OLLAMA_HOST to the value 0.0.0.0.

Would appreciate any advice, as I have run out of solutions.
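One way to narrow this down (a sketch, assuming Docker Desktop, which provides host.docker.internal) is to test whether the Windows Ollama install is reachable from inside a container at all:

# should print "Ollama is running" if the host install is reachable from containers
docker run --rm curlimages/curl -s http://host.docker.internal:11434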

pcrothers91 · Sep 24 '24 13:09

If you're not using Docker for Ollama, update the env to reflect that. Maybe OLLAMA_HOST=http://127.0.0.1:11434 or OLLAMA_HOST=localhost:11434.

Don't use Docker-related settings for Ollama if you're running it without Docker.

Arnaud3013 · Sep 24 '24 15:09

Thank you for responding @Arnaud3013, much appreciated.

Unfortunately it's still not working, but I have made progress. I can now see why pointing to Docker in .env would not work.

With the following, I am getting this error:

Model nomic-embed-text:v1.5 does not exist and could not be pulled: Post "http://127.0.0.1:11434/api/pull": dial tcp 127.0.0.1:11434: connect: connection refused

When I navigate to http://127.0.0.1:11434/api/pull, Ollama responds with 404 page not found.
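A 404 on that URL is expected from a browser, since /api/pull only accepts POST requests, so Ollama itself is reachable from the host. A manual pull from the host would look roughly like this (a sketch; depending on the Ollama version the body field is "name" or "model"):

curl http://127.0.0.1:11434/api/pull -d '{"name": "nomic-embed-text:v1.5"}'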

Here is docker-compose:

services:
  backend:
    image: nilsherzig/llocalsearch-backend:latest
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://127.0.0.1:11434}
      - CHROMA_DB_URL=${CHROMA_DB_URL:-http://chromadb:8000}
      - SEARXNG_DOMAIN=${SEARXNG_DOMAIN:-http://searxng:8080}

Here is .env:

OLLAMA_HOST=http://127.0.0.1:11434
MAX_ITERATIONS=30
CHROMA_DB_URL=http://chromadb:8000
SEARXNG_DOMAIN=http://searxng:8080
SEARXNG_HOSTNAME=localhost

Am I obviously doing something wrong? I think LLocalSearch is speaking to Ollama, but they are having a hard time understanding one another.
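One way to see what the backend container actually receives, without assuming any tools inside the image, is to let Compose print the resolved configuration (a suggestion, not from the original post):

# shows the fully resolved compose file, including the OLLAMA_HOST value the backend gets
docker-compose config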

pcrothers91 · Sep 24 '24 22:09

Hi, I thought I would come back here and post the settings I used to get this working:

Here is .env

OLLAMA_HOST=http://host.docker.internal:11434
MAX_ITERATIONS=30
CHROMA_DB_URL=http://chromadb:8000
SEARXNG_DOMAIN=http://searxng:8080
SEARXNG_HOSTNAME=localhost

Here is docker-compose.yaml

    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-host.docker.internal:11434}
      - CHROMA_DB_URL=${CHROMA_DB_URL:-http://chromadb:8000}
      - SEARXNG_DOMAIN=${SEARXNG_DOMAIN:-http://searxng:8080}
      - EMBEDDINGS_MODEL_NAME=${EMBEDDINGS_MODEL_NAME:-nomic-embed-text:v1.5}
      - VERSION=${VERSION:-dev}
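One caveat worth adding (a note of mine, not from the original comment): host.docker.internal is only provided out of the box by Docker Desktop; on a plain Linux Docker engine the backend service would also need a mapping such as:

    extra_hosts:
      - "host.docker.internal:host-gateway"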

I also want to point out that I followed @pmancele's suggestion for container network communication in this post: https://github.com/nilsherzig/LLocalSearch/issues/116

See their code snippet adding ports to docker-compose below:

@@ -39,6 +39,8 @@ services:
       - SETGID
       - SETUID
       - DAC_OVERRIDE
+    ports:
+      - '6379:6379'

   searxng:
     image: docker.io/searxng/searxng:latest
@@ -60,6 +62,8 @@ services:
       options:
         max-size: '1m'
         max-file: '1'
+    ports:
+      - '8080:8080'
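After applying that diff and restarting the stack, a quick check from the host that the newly exposed SearXNG port answers can help (a sketch; the port is taken from the diff above):

docker-compose up -d
# SearXNG should return an HTTP status code on the published port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080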

pcrothers91 · Sep 28 '24 09:09

Did you have great success with your exchanges once it was working? Did you get good answers? If yes, with which model?

Arnaud3013 · Sep 28 '24 11:09

I did have success, thank you.

I have been trialling several models, and most recently this one has worked well: Reader LM.

Unfortunately Phi3.5 often gets stuck in a loop.

I have had decent success with larger models too, like Llama3.1 and Mistral-Nemo, but the output is often not in Markdown, which produces an error.

What model do you recommend?

pcrothers91 · Oct 01 '24 11:10