
Verba with Docker can't connect to Ollama

Open · venanzi-1852473 opened this issue 10 months ago · 6 comments

Description

When I run Verba, deployed with Docker Compose and set up with Ollama, I can open the Verba frontend at localhost:8000 in my browser, but it shows this error: 0 documents embedded by Couldn't connect to Ollama http://host.docker.internal:11434, and nothing works. I can send queries, but I receive errors, and I cannot add any kind of document.

This problem occurred when trying to deploy Verba with Docker on three different machines, running CentOS 8, Ubuntu 20.04.5 LTS, and Rocky Linux 9.5. The problem did NOT occur when deploying with Docker, with the same settings (same docker-compose.yaml), on a Windows 10 machine with WSL.

Installation

  • [ ] pip install goldenverba
  • [ ] pip install from source
  • [x] Docker installation

If you installed via pip, please specify the version:

Weaviate Deployment

  • [ ] Local Deployment
  • [x] Docker Deployment
  • [ ] Cloud Deployment

Configuration

I'm leaving this section blank, since I didn't even manage to get to this part. I just tried to use the llama3 model.

Reader:
Chunker:
Embedder:
Retriever:
Generator:

Steps to Reproduce

  • Clone the repository with git
  • Start Ollama on the machine with the command ollama serve (checking with curl that Ollama is running, as sketched right after this list)
  • Set a couple of environment variables in the docker-compose.yaml (explained in Additional context)
  • Run docker compose up -d
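
The curl check mentioned in the second step looks roughly like this (assuming Ollama listens on its default port 11434 on the host):

# should answer with "Ollama is running"
curl http://localhost:11434
# should return the locally installed models as JSON
curl http://localhost:11434/api/tags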

Additional context

The variable that I changed in the yaml file is OLLAMA_MODEL=llama3 (I also tried llama3:latest). For what it's worth, when running Verba directly from Python (without Docker), setting OLLAMA_MODEL=llama3 in the .env file worked.

I also tried changing OLLAMA_URL=http://host.docker.internal:11434 to OLLAMA_URL=http://localhost:11434; the latter worked when running from Python, but I know it shouldn't work with Docker (I tried it anyway). It is worth noting that, when setting OLLAMA_URL=http://localhost:11434, the error message in Verba became 0 documents embedded by Couldn't connect to Ollama http://localhost:11434.

I also tried adding OLLAMA_HOST=0.0.0.0, without success.
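
Possibly relevant: on Linux, host.docker.internal is not defined inside containers by default (Docker Desktop on Windows adds it automatically), so it may need to be mapped explicitly. A minimal sketch of that mapping on the verba service (not something I have verified in this setup):

  verba:
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets the container reach services running on the Docker host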

This is a screenshot of the error (image attached).

Note that the button at the bottom says "Reconnecting..." because I had closed the Docker container instance; it is not related to my problem.

venanzi-1852473 · Mar 10 '25

Thanks for the issue! Did I understand correctly that you uncommented the Ollama section of the docker-compose.yml? Seems like Verba has problems reaching the Ollama instance...

thomashacker · Mar 17 '25

It is commented, but there are two things to consider:

  • On Windows I managed to make Verba work with the Ollama part commented out: I ran Ollama outside the Docker container, manually starting an Ollama instance on the same machine
  • When I tried to uncomment that block I got a parsing error from the yaml parser

Ignoring the parsing error was lazy on my part, but since I managed to make it work on Windows without running Ollama from Docker, I tried not to go down that route.
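
(For reference, an uncommented Ollama block has to sit at the same indentation level as the other services; a rough sketch of what that looks like, based on common Ollama compose setups rather than the exact block from the repo:)

  ollama:
    image: ollama/ollama:latest
    ports:
      - 11434:11434
    volumes:
      - ./ollama:/root/.ollama   # persist pulled models between container restarts
    environment:
      - OLLAMA_HOST=0.0.0.0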

venanzi-1852473 · Mar 20 '25

There might be some issues with the Docker Compose then, I'll look into it!

thomashacker · Mar 20 '25

Thanks! Running into the same issue.


version: '3.8'

services:
  verba:
    image: semitechnologies/verba
    ports:
      - 8111:8000
    environment:
      - WEAVIATE_URL_VERBA=http://weaviate:8080
      - OLLAMA_URL=http://host.docker.internal:11434
      - OLLAMA_MODEL=$OLLAMA_MODEL
      - OLLAMA_EMBED_MODEL=bge-m3
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - COHERE_API_KEY=${COHERE_API_KEY}
    volumes:
      - /srv/verba/data:/data/
    depends_on:
      weaviate:
        condition: service_healthy
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8000 || exit 1   # Verba listens on 8000 inside the container; 8111 is only the published host port
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    networks:
      - homelab

  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.25.10
    ports:
      - 8080:8080
      - 3000:8080
    volumes:
      - /srv/verba/weaviate/weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8080/v1/.well-known/ready || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    networks:
      - homelab
    environment:
      OPENAI_APIKEY: ${OPENAI_API_KEY}
      COHERE_APIKEY: ${COHERE_API_KEY}
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      ENABLE_MODULES: 'e'
      CLUSTER_HOSTNAME: 'node1'

  # Ollama service
  ollama:
    image: ollama/ollama:latest
    ports:
      - 7869:11434
    volumes:
      - /srv/ollama/code:/code
      - /srv/ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - homelab

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - /srv/ollama/open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3099:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
    networks:
      - homelab

networks:
  homelab:
    external: true

Japhys · Mar 27 '25

Considering that all containers run on the same Docker network (as indicated by @Japhys' compose file), try changing OLLAMA_URL=http://host.docker.internal:11434 to OLLAMA_URL=http://ollama:11434. On a user-defined network, Docker's internal DNS resolves the service/container name, which here is ollama. Note that http://localhost:7869 only reaches Ollama from the host machine, where 7869 is the published port (in @Japhys' configuration at least); from inside the verba container, localhost refers to the container itself.
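
In the compose file above, that would mean changing only this line in the verba service (a sketch, assuming all services stay on the shared homelab network):

    environment:
      - OLLAMA_URL=http://ollama:11434   # resolved by Docker's internal DNS on the shared network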

tz1112 · Aug 14 '25

Hi, I think I found a possible reason for the non-functional Ollama connection in Docker environments: Verba sends requests to endpoints like

/api/embed
/api/completion

instead of

/api/chat
/api/tags

and similar requests when running inside Docker (at least with Docker Desktop under Windows 11), which the Ollama backend does not recognise.

I could not find out what Verba requests to get the list of available models, and the endpoint /api/embed should be correct... maybe it is an issue with the Ollama version, since Ollama used /api/embeddings in past versions.
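
A quick way to check which of these endpoints your local Ollama build actually answers (model names are placeholders; run this against the host where ollama serve is listening):

# list installed models (should return JSON, not a 404)
curl http://localhost:11434/api/tags
# newer embeddings endpoint; older Ollama versions only know /api/embeddings
curl http://localhost:11434/api/embed -d '{"model": "bge-m3", "input": "hello"}'
# chat endpoint
curl http://localhost:11434/api/chat -d '{"model": "llama3", "messages": [{"role": "user", "content": "hi"}], "stream": false}'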

I created a workaround (if you can provide yourself with a local OpenAI-API compatible server, as LM Studio does) via an .env file with the following content:

# API Keys
# Can stay empty if your local OpenAI-compatible server does not require one.
OPENAI_API_KEY=
OPENAI_BASE_URL=http://<your-local-OpenAI-API-endpoint-IP>:<port>/v1
OPENAI_EMBED_MODEL=your-text-embedding-model
OPENAI_MODEL=your-chat-model
# Verba shows the standard OpenAI models if you omit this
OPENAI_CUSTOM_EMBED=true
# OLLAMA Settings
# If you omit this, you can't see and configure the custom models under "OpenAI" properly in the frontend config, even though the Ollama service does not connect
OLLAMA_URL=http://<your-local-OpenAI-API-endpoint-IP>:<port>
OLLAMA_MODEL=your-chat-model
OLLAMA_EMBED_MODEL=your-text-embedding-model
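
To verify that the local OpenAI-compatible server is reachable before pointing Verba at it, a quick check against the standard /v1/models endpoint might look like this (IP, port and key are placeholders; drop the header if no key is required):

curl http://<your-local-OpenAI-API-endpoint-IP>:<port>/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"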

Here is the full docker-compose.yml that works for me in combination with the .env above (just in case you run into trouble getting it to work with the untouched docker-compose.yml inside the project folder):

services:
  verba:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    environment:
      - WEAVIATE_URL_VERBA=http://weaviate:8080
      - OPENAI_API_KEY=$OPENAI_API_KEY
      - OPENAI_BASE_URL=$OPENAI_BASE_URL
      - OPENAI_EMBED_MODEL=$OPENAI_EMBED_MODEL
      - OPENAI_MODEL=$OPENAI_MODEL
      - OPENAI_CUSTOM_EMBED=$OPENAI_CUSTOM_EMBED
      - COHERE_API_KEY=$COHERE_API_KEY
      - OLLAMA_URL=$OLLAMA_URL
      - OLLAMA_MODEL=$OLLAMA_MODEL
      - OLLAMA_EMBED_MODEL=$OLLAMA_EMBED_MODEL
      - UNSTRUCTURED_API_KEY=$UNSTRUCTURED_API_KEY
      - UNSTRUCTURED_API_URL=$UNSTRUCTURED_API_URL
      - GITHUB_TOKEN=$GITHUB_TOKEN

    volumes:
      - ./data:/data/
    depends_on:
      weaviate:
        condition: service_healthy
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8000 || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    networks:
      - ollama-docker

  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.25.10
    ports:
      - 8080:8080
      - 3000:8080
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8080/v1/.well-known/ready || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    environment:
      OPENAI_APIKEY: $OPENAI_API_KEY
      COHERE_APIKEY: $COHERE_API_KEY
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      ENABLE_MODULES: 'e'
      CLUSTER_HOSTNAME: 'node1'
    networks:
      - ollama-docker

volumes:
  weaviate_data: {}

networks:
  ollama-docker:
    external: false

Works like a charm for me. Please keep in mind that you need to configure the chat (Config -> OpenAI) and the embedding processor (Import File -> select a file to import -> Config -> Embedder: OpenAI) to use OpenAI after building Verba this way, since it is set to Ollama after the first start-up.

This is not a solution for this issue, just a probable cause and a workaround using another interface for locally provided LLMs.

Hope this helps, Nico

NicoBergemann · Oct 13 '25