
wren-ai-service cannot start

haiyangzhijia opened this issue 6 months ago • 8 comments

Describe the bug
The wren-ai-service cannot start, while all other containers are running normally and the web UI can be accessed. However, when creating an asking task or deploying the modeling, it reports "failed to create asking task" and "failed to deploy changes", with a connection error: ECONNREFUSED 172.26.0.5:5555.
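For context, ECONNREFUSED from the UI just means nothing is accepting connections on the AI service's port yet. A minimal stdlib-only probe you can run from the host to confirm the symptom (a sketch; it assumes the service is forwarded to localhost:5555 as in the compose file below):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ECONNREFUSED, timeout, host unreachable, ...
        return False

# While wren-ai-service is stuck in its restart loop this stays False.
print(port_open("127.0.0.1", 5555))
```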

Expected behavior
wren-ai-service starts normally.

Desktop (please complete the following information):

  • OS: Synology NAS (DSM 7.2), Docker

Relevant log output

docker-compose.yaml:

version: "3"

networks:
  wren:
    driver: bridge

services:
  bootstrap:
    container_name: wrenai_bootstrap
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - ./data:/app/data
    command: /bin/sh /app/init.sh

  wren-engine:
    container_name: wrenai_engine
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - ./data:/usr/src/app/etc
      - ./data:/usr/src/app/data
    networks:
      - wren
    depends_on:
      - bootstrap

  ibis-server:
    container_name: wrenai_ibis_server
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren

  wren-ai-service:
    container_name: wrenai_ai_service
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    ports:
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
    environment:
      PYTHONUNBUFFERED: 1
      CONFIG_PATH: /app/data/config.yaml
    env_file:
      - .env
    volumes:
      - ./config.yaml:/app/data/config.yaml
    networks:
      - wren
    depends_on:
      - qdrant

  qdrant:
    container_name: wrenai_qdrant
    image: qdrant/qdrant:v1.14.1
    restart: on-failure
    expose:
      - 6333
      - 6334
    volumes:
      - ./data:/qdrant/storage
    networks:
      - wren

  wren-ui:
    container_name: wrenai_ui
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
      GENERATION_MODEL: ${GENERATION_MODEL}
      WREN_ENGINE_PORT: ${WREN_ENGINE_PORT}
      WREN_AI_SERVICE_VERSION: ${WREN_AI_SERVICE_VERSION}
      WREN_UI_VERSION: ${WREN_UI_VERSION}
      WREN_ENGINE_VERSION: ${WREN_ENGINE_VERSION}
      USER_UUID: ${USER_UUID}
      POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      POSTHOG_HOST: ${POSTHOG_HOST}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      NEXT_PUBLIC_USER_UUID: ${USER_UUID}
      NEXT_PUBLIC_POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      NEXT_PUBLIC_POSTHOG_HOST: ${POSTHOG_HOST}
      NEXT_PUBLIC_TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      EXPERIMENTAL_ENGINE_RUST_VERSION: ${EXPERIMENTAL_ENGINE_RUST_VERSION}
      WREN_PRODUCT_VERSION: ${WREN_PRODUCT_VERSION}
    ports:
      - ${HOST_PORT}:3000
    volumes:
      - ./data:/app/data
    networks:
      - wren
    depends_on:
      - wren-ai-service
      - wren-engine

config.yaml:

# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 4 steps basically:
# 1. you need to use your own llm and embedding models
# 2. fill in embedding model dimension in the document_store section
# 3. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 4. you need to fill in correct llm and embedding models in the pipe definitions

type: llm
provider: litellm_llm
models:
  # put GEMINI_API_KEY=<your_api_key> in ~/.wrenai/.env
  - model: gemini/gemini-2.0-flash # gemini/<gemini_model_name>
    alias: default
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
  - model: gemini/gemini-2.0-flash # gemini/<gemini_model_name>
    alias: gemini-llm-for-chart
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: json_object

---
type: embedder
provider: litellm_embedder
models:
  # put GEMINI_API_KEY=<your_api_key> in ~/.wrenai/.env
  - model: gemini/text-embedding-004 # gemini/<gemini_model_name>
    alias: default
    timeout: 120

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://ibis-server:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768 # put your embedding model dimension here
timeout: 120
recreate_index: true

---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06 or <provider>.<alias> such as litellm_llm.gemini-llm-for-chart
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_answer
    llm: litellm_llm.default
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.gemini-llm-for-chart
  - name: chart_adjustment
    llm: litellm_llm.gemini-llm-for-chart
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.default
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.default
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.default
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default

---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
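As an aside for anyone editing this file: config.yaml is a multi-document YAML stream, where each block between `---` separators declares one component (llm, embedder, engine, document_store, pipeline) plus the trailing settings block. A stdlib-only sketch of how the documents split (toy string; WrenAI itself parses these with a YAML library):

```python
# Split a multi-document YAML string on its '---' separators and report
# each document's top-level 'type:' (toy illustration, not a real parser).
config_text = """\
type: llm
provider: litellm_llm
---
type: embedder
provider: litellm_embedder
---
type: document_store
provider: qdrant
"""

docs = [d.strip() for d in config_text.split("\n---\n") if d.strip()]
types = [line.split(":", 1)[1].strip()
         for d in docs
         for line in d.splitlines()
         if line.startswith("type:")]
print(types)  # ['llm', 'embedder', 'document_store']
```

If a document (or the pipeline section) goes missing while editing, the service sees an incomplete component set at startup.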

env:

COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=.

# service port
WREN_ENGINE_PORT=8188
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3133
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

# ai service settings
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

# vendor keys
GEMINI_API_KEY=

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.24.0
WREN_ENGINE_VERSION=0.16.4
WREN_AI_SERVICE_VERSION=0.24.2
IBIS_SERVER_VERSION=0.16.4
WREN_UI_VERSION=0.29.2
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

# for other services
POSTHOG_API_KEY=
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true
# this is for telemetry to know the model; i think ai-service might be able to provide an endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3133
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false
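The .env values above are plain KEY=VALUE pairs: docker compose uses them for ${...} substitution in the compose file, and wren-ai-service reads them again via env_file. A stdlib-only sketch of that parsing (hypothetical helper, not WrenAI code; real env-file syntax has more corner cases):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# service port
WREN_AI_SERVICE_PORT=5555
AI_SERVICE_FORWARD_PORT=5555
"""
print(parse_env(sample))
```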

wren-ai-service log:

2025/06/25 17:57:31 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:57:31 | stderr | INFO:     Started server process [8]
2025/06/25 17:57:21 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:57:21 | stdout | qdrant has started.
2025/06/25 17:57:21 | stdout | Waiting for qdrant to start...
2025/06/25 17:57:13 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:56:25 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:56:25 | stderr | INFO:     Started server process [8]
2025/06/25 17:56:13 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:56:13 | stdout | qdrant has started.
2025/06/25 17:56:13 | stdout | Waiting for qdrant to start...
2025/06/25 17:55:56 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:55:56 | stdout | qdrant has started.
2025/06/25 17:55:56 | stdout | Waiting for qdrant to start...
2025/06/25 17:55:37 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:54:49 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:54:49 | stderr | INFO:     Started server process [8]
2025/06/25 17:54:35 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:54:35 | stdout | qdrant has started.
2025/06/25 17:54:35 | stdout | Waiting for qdrant to start...
2025/06/25 17:54:24 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:53:40 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:53:40 | stderr | INFO:     Started server process [7]
2025/06/25 17:53:24 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:53:24 | stdout | qdrant has started.
2025/06/25 17:53:24 | stdout | Waiting for qdrant to start...
2025/06/25 17:52:58 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:52:08 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:52:08 | stderr | INFO:     Started server process [7]
2025/06/25 17:51:57 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:51:57 | stdout | qdrant has started.
2025/06/25 17:51:57 | stdout | Waiting for qdrant to start...
2025/06/25 17:51:23 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:50:33 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:50:33 | stderr | INFO:     Started server process [9]
2025/06/25 17:50:23 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:50:23 | stdout | qdrant has started.
2025/06/25 17:50:23 | stdout | Waiting for qdrant to start...
2025/06/25 17:50:08 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:49:18 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:49:18 | stderr | INFO:     Started server process [7]
2025/06/25 17:49:07 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:49:07 | stdout | qdrant has started.
2025/06/25 17:49:07 | stdout | Waiting for qdrant to start...
2025/06/25 17:48:50 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:48:05 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:48:05 | stderr | INFO:     Started server process [8]
2025/06/25 17:47:49 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:47:49 | stdout | qdrant has started.
2025/06/25 17:47:49 | stdout | Waiting for qdrant to start...
2025/06/25 17:47:37 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:46:55 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:46:55 | stderr | INFO:     Started server process [8]
2025/06/25 17:46:37 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:46:37 | stdout | qdrant has started.
2025/06/25 17:46:37 | stdout | Waiting for qdrant to start...
2025/06/25 17:46:28 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:45:38 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:45:38 | stderr | INFO:     Started server process [9]
2025/06/25 17:45:27 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:45:27 | stdout | qdrant has started.
2025/06/25 17:45:27 | stdout | Waiting for qdrant to start...
2025/06/25 17:45:13 | stderr | INFO:     Waiting for application startup.
2025/06/25 17:45:13 | stderr | INFO:     Started server process [8]
2025/06/25 17:44:56 | stdout | Waiting for wren-ai-service to start...
2025/06/25 17:44:56 | stdout | qdrant has started.
2025/06/25 17:44:56 | stdout | Waiting for qdrant to start...
2025/06/25 17:44:46 | stdout | Timeout: wren-ai-service did not start within 60 seconds
2025/06/25 17:44:46 | stderr | W0625 09:44:46.028 8 wren-ai-service:155] SQL pairs file not found: sql_pairs.json
(... the same cycle — "Waiting for qdrant to start..." → "qdrant has started." → "Waiting for wren-ai-service to start..." → "Started server process" → "Timeout: wren-ai-service did not start within 60 seconds" — repeats back to 17:26:42, including two more "SQL pairs file not found: sql_pairs.json" warnings at 17:41:28 and 17:40:16 ...)

haiyangzhijia commented Jun 25 '25

Later, I created a file named sql_pairs.json under the data directory containing an empty array [], and wren-ai-service no longer complained about sql_pairs.json, but it still couldn't start successfully.
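For anyone repeating this workaround: the file name has an underscore (sql_pairs.json) and should contain an empty JSON array, not zero bytes. A sketch, run from the directory holding the compose file (./data is where this compose setup mounts /app/data; adjust the path to your layout):

```shell
mkdir -p ./data
printf '[]' > ./data/sql_pairs.json   # an empty JSON array, not a zero-byte file
cat ./data/sql_pairs.json             # prints []
```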

haiyangzhijia commented Jun 25 '25

@haiyangzhijia please use the same version of WrenAI's qdrant, and try again

cyyeh commented Jun 25 '25

@haiyangzhijia please use the same version of WrenAI's qdrant, and try again

Hello, thank you for the reply. I'm not sure I understand — am I supposed to use qdrant/qdrant:v1.11.0? I started with qdrant/qdrant:v1.11.0, encountered the same issue, and only then upgraded the version.

haiyangzhijia commented Jun 25 '25

You should use the same version as the official one first; then we can look into any remaining issues.

cyyeh commented Jun 25 '25

Please do not change the content of docker-compose.yaml; you should only need to update values in config.yaml or .env.

cyyeh commented Jun 25 '25

I got the same error. I used Gemini too.

ansutung commented Jun 30 '25

I got the same error. I used Anthropic too.

2025-07-04 19:30:08 no port[s] to connect to
(the same "no port[s] to connect to" line repeats roughly twice per second until 19:31:10)
2025-07-04 19:31:10 Timeout: wren-ui did not start within 60 seconds
2025-07-04 19:31:18 Waiting for qdrant to start...
2025-07-04 19:31:18 qdrant has started.
2025-07-04 19:31:18 Waiting for wren-ai-service to start...
2025-07-04 19:31:21 INFO: Started server process [8]
2025-07-04 19:31:21 INFO: Waiting for application startup.
2025-07-04 19:31:25 W0704 14:01:25.962 8 wren-ai-service:155] SQL pairs file not found: sql_pairs.json
2025-07-04 19:31:27 W0704 14:01:27.000 8 wren-ai-service:155] SQL pairs file not found: sql_pairs.json
2025-07-04 19:31:27 I0704 14:01:27.199 8 wren-ai-service:326] Service version: 0.25.6
2025-07-04 19:31:27 Langfuse client is disabled since no public_key was provided as a parameter or environment variable 'LANGFUSE_PUBLIC_KEY'. See our docs: https://langfuse.com/docs/sdk/python/low-level-sdk#initialize-client
2025-07-04 19:31:27 I0704 14:01:27.245 8 wren-ai-service:86] LANGFUSE_ENABLE: True
2025-07-04 19:31:27 I0704 14:01:27.245 8 wren-ai-service:87] LANGFUSE_HOST: https://cloud.langfuse.com
2025-07-04 19:31:27 INFO: Application startup complete.
2025-07-04 19:31:27 INFO: Uvicorn running on http://0.0.0.0:3001 (Press CTRL+C to quit)
2025-07-04 19:31:27 wren-ai-service has started.

blmeenabiz commented Jul 04 '25

same here!

wren-ai-service-1 | ERROR:    Traceback (most recent call last):
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 692, in lifespan
wren-ai-service-1 |     async with self.lifespan_context(app) as maybe_state:
wren-ai-service-1 |   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
wren-ai-service-1 |     return await anext(self.gen)
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
wren-ai-service-1 |     async with original_context(app) as maybe_original_state:
wren-ai-service-1 |   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
wren-ai-service-1 |     return await anext(self.gen)
wren-ai-service-1 |            ^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/main.py", line 32, in lifespan
wren-ai-service-1 |     app.state.service_container = create_service_container(pipe_components, settings)
wren-ai-service-1 |                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/globals.py", line 129, in create_service_container
wren-ai-service-1 |     "sql_correction": generation.SQLCorrection(
wren-ai-service-1 |                       ^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 |   File "/src/pipelines/generation/sql_correction.py", line 108, in __init__
wren-ai-service-1 |     self._retriever = document_store_provider.get_retriever(
wren-ai-service-1 |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wren-ai-service-1 | AttributeError: 'NoneType' object has no attribute 'get_retriever'
wren-ai-service-1 |
wren-ai-service-1 | ERROR:    Application startup failed. Exiting.
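That AttributeError means document_store_provider was None when the sql_correction pipeline was built — typically the corresponding pipe entry in config.yaml has no document_store (or the pipeline section is missing), so the component lookup returns nothing. A toy reproduction of the failure mode (hypothetical names; not WrenAI's actual code):

```python
# Toy model: components are looked up by name; a missing config entry
# yields None, and calling a method on None raises this exact error.
providers = {"qdrant": object()}   # pretend this object has .get_retriever()

pipe = {"name": "sql_correction"}  # note: no "document_store" key configured
document_store_provider = providers.get(pipe.get("document_store"))

try:
    document_store_provider.get_retriever()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'get_retriever'
```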

docker-compose file:

volumes:
  data:
  ollama:
  postgres_data:

networks:
  wren:
    driver: bridge
  shihome_net:
    external: true

services:
  bootstrap:
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - data:/app/data
    networks:
      - wren
    command: /bin/sh /app/init.sh

  wren-engine:
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - data:/usr/src/app/etc
      - ${PROJECT_DIR}/data:/usr/src/app/data
    networks:
      - wren
      - shihome_net
    depends_on:
      - bootstrap

  ibis-server:
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren
      - shihome_net

  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    environment:
      PYTHONUNBUFFERED: 1
      CONFIG_PATH: /app/data/config.yaml
    env_file:
      - ${PROJECT_DIR}/.env
    volumes:
      - ${PROJECT_DIR}/config.yaml:/app/data/config.yaml
    networks:
      - wren
      - shihome_net
    depends_on:
      - qdrant

  qdrant:
    image: qdrant/qdrant:v1.11.0
    restart: on-failure
    expose:
      - 6333
      - 6334
    ports:
      - "6333:6333"
      - "6334:6334"
    volumes:
      - data:/qdrant/storage
    networks:
      - wren
      - shihome_net

  wren-ui:
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
      GENERATION_MODEL: ${GENERATION_MODEL}
      WREN_ENGINE_PORT: ${WREN_ENGINE_PORT}
      WREN_AI_SERVICE_VERSION: ${WREN_AI_SERVICE_VERSION}
      WREN_UI_VERSION: ${WREN_UI_VERSION}
      WREN_ENGINE_VERSION: ${WREN_ENGINE_VERSION}
      USER_UUID: ${USER_UUID}
      POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      POSTHOG_HOST: ${POSTHOG_HOST}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      NEXT_PUBLIC_USER_UUID: ${USER_UUID}
      NEXT_PUBLIC_POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      NEXT_PUBLIC_POSTHOG_HOST: ${POSTHOG_HOST}
      NEXT_PUBLIC_TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      EXPERIMENTAL_ENGINE_RUST_VERSION: ${EXPERIMENTAL_ENGINE_RUST_VERSION}
      WREN_PRODUCT_VERSION: ${WREN_PRODUCT_VERSION}
    expose:
      - 3000
    ports:
      - "3000:3000"
    volumes:
      - data:/app/data
    networks:
      - wren
      - shihome_net
    depends_on:
      - wren-ai-service
      - wren-engine
      - ollama
    # labels:
    #   - "traefik.enable=true"
    #   - "traefik.http.routers.wren-ui.rule=Host(wren.curadoresurbanos.org)"
    #   - "traefik.http.routers.wren-ui.entrypoints=web,websecure"
    #   - "traefik.http.routers.wren-ui.tls.certresolver=leresolver"
    #   - "traefik.http.services.wren-ui.loadbalancer.server.port=3000"

ollama: image: ollama/ollama container_name: ollama volumes: - ollama:/root/.ollama networks: - wren - shihome_net expose: - 11434 restart: unless-stopped entrypoint: ["/bin/bash", "-c"] command: - "(until ollama list; do sleep 1; done; ollama pull mxbai-embed-large) & exec ollama serve"
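The UI error `ECONNREFUSED 172.26.0.5:5555` means nothing is listening on the wren-ai-service port at all. A minimal sketch to confirm this from the host (assuming the default `AI_SERVICE_FORWARD_PORT=5555` mapping from the `.env` below):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# AI_SERVICE_FORWARD_PORT maps wren-ai-service's 5555 to the host, so a
# refused connection here mirrors the ECONNREFUSED the UI reports.
print("wren-ai-service reachable:", port_open("localhost", 5555))
```

If this prints `False` while the UI (3000) and Qdrant (6333) ports are open, the problem is the ai-service container itself; `docker logs wrenai_ai_service` should show why it keeps exiting (a config.yaml parse error and a missing/empty API key are typical causes).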

config.yaml:

```yaml
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 4 steps basically:
# 1. you need to use your own llm and embedding models
# 2. fill in embedding model dimension in the document_store section
# 3. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 4. you need to fill in correct llm and embedding models in the pipe definitions

type: llm
provider: litellm_llm
models:
# put GROQ_API_KEY=<your_api_key> in ~/.wrenai/.env
- api_base: https://api.groq.com/openai/v1
  model: groq/mistral-saba-24b  # groq/<ollama_model_name>
  alias: default
  timeout: 120
  kwargs:
    n: 1
    temperature: 0

---
type: embedder
provider: litellm_embedder
models:
- model: ollama/mxbai-embed-large  # put your ollama embedder model name here, ollama/<ollama_model_name>
  alias: default
  api_base: http://ollama:11434  # if you are using mac/windows, don't change this; if you are using linux, please search "Run Ollama in docker container" in this page: https://docs.getwren.ai/oss/ai_service/guide/custom_llm#running-wren-ai-with-your-custom-llm-embedder
  timeout: 600

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://wren-ibis:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 1024  # put your embedding model dimension here
timeout: 120
recreate_index: true

---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.default
  - name: sql_answer
    llm: litellm_llm.default
  - name: sql_breakdown
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.default
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.default
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.default
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.default
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant

---
settings:
  doc_endpoint: https://docs.getwren.ai
  is_oss: true
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
```
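One thing worth double-checking in the config above: `embedding_model_dim` must match the embedder's actual output size, or Qdrant will reject the vectors during indexing. A minimal sanity-check sketch, with the dimensions assumed from each model's own documentation (mxbai-embed-large emits 1024-dim vectors, so this config looks consistent):

```python
# Known embedder output sizes (assumed from the models' documentation).
EMBEDDING_DIMS = {
    "ollama/mxbai-embed-large": 1024,
    "openai/text-embedding-3-small": 1536,
    "openai/text-embedding-3-large": 3072,
}

def dim_matches(model: str, configured_dim: int) -> bool:
    """True if configured_dim agrees with the model's known output size
    (unknown models pass, since there is nothing to compare against)."""
    expected = EMBEDDING_DIMS.get(model)
    return expected is None or expected == configured_dim

# The config above pairs ollama/mxbai-embed-large with embedding_model_dim: 1024.
print(dim_matches("ollama/mxbai-embed-large", 1024))  # True
```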

.env file:

```
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=.

# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

# ai service settings
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

# vendor keys
OPENAI_API_KEY=

GROQ_API_KEY=myAPIKEY

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.25.0
WREN_ENGINE_VERSION=0.17.1
WREN_AI_SERVICE_VERSION=0.24.3
IBIS_SERVER_VERSION=0.17.1
WREN_UI_VERSION=0.30.0
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

# for other services
POSTHOG_API_KEY=phc_nhF32aj4xHXdOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true

# this is for telemetry to know the model, i think ai-service might be able to provide a endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3030
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false
```
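Since wren-ai-service loads `${PROJECT_DIR}/.env` via `env_file`, an absent or empty required variable (an unset API key is the classic case) is a common reason the container exits right after boot. A small hypothetical helper to spot empty entries before running `docker compose up` (the `required` list here is an illustrative subset, not the service's authoritative list):

```python
def missing_vars(env: dict[str, str], required: list[str]) -> list[str]:
    """Names from `required` that are absent or empty in `env`."""
    return [name for name in required if not env.get(name)]

# Hypothetical subset of keys this stack expects; extend to taste.
required = ["GROQ_API_KEY", "WREN_AI_SERVICE_PORT",
            "WREN_AI_SERVICE_VERSION", "PLATFORM"]
env = {"GROQ_API_KEY": "", "WREN_AI_SERVICE_PORT": "5555",
       "WREN_AI_SERVICE_VERSION": "0.24.3", "PLATFORM": "linux/amd64"}
print(missing_vars(env, required))  # ['GROQ_API_KEY']
```

In practice you would build `env` by parsing the real `.env` file (or from `os.environ`) instead of the inline dict used here for illustration.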

please make sure you are using the latest version of config.yaml

https://github.com/Canner/WrenAI/blob/main/docker/config.example.yaml

cyyeh · Jul 24 '25 11:07