
Errors deploying WrenAI locally with a deepseek model: the wren-ai-service container fails on startup, and the wren-bootstrap service also fails to start

Open XiaoWuGg opened this issue 9 months ago • 2 comments

@wwwy3y3 @chilijung @qdrddr @kpman @tedyyan @cyyeh

Container runtime status:

(Screenshots of the container status attached as images.)

The error log from `wren-ai-service:0.18.1` is as follows:

```
ERROR:    Application startup failed. Exiting.
Timeout: wren-ai-service did not start within 60 seconds
Waiting for qdrant to start...
qdrant has started.
Waiting for wren-ai-service to start...
INFO:     Started server process [7]
INFO:     Waiting for application startup.
I0408 06:48:44.397 7 wren-ai-service:40] Imported Provider: src.providers.document_store
I0408 06:48:44.401 7 wren-ai-service:64] Registering provider: qdrant
I0408 06:48:44.401 7 wren-ai-service:40] Imported Provider: src.providers.document_store.qdrant
I0408 06:48:44.402 7 wren-ai-service:40] Imported Provider: src.providers.embedder
I0408 06:48:45.618 7 wren-ai-service:64] Registering provider: azure_openai_embedder
I0408 06:48:45.618 7 wren-ai-service:40] Imported Provider: src.providers.embedder.azure_openai
I0408 06:48:49.034 7 wren-ai-service:64] Registering provider: litellm_embedder
I0408 06:48:49.034 7 wren-ai-service:40] Imported Provider: src.providers.embedder.litellm
I0408 06:48:49.038 7 wren-ai-service:64] Registering provider: ollama_embedder
I0408 06:48:49.038 7 wren-ai-service:40] Imported Provider: src.providers.embedder.ollama
I0408 06:48:49.040 7 wren-ai-service:64] Registering provider: openai_embedder
I0408 06:48:49.040 7 wren-ai-service:40] Imported Provider: src.providers.embedder.openai
I0408 06:48:49.041 7 wren-ai-service:40] Imported Provider: src.providers.engine
I0408 06:48:49.041 7 wren-ai-service:40] Imported Provider: src.providers.engine.wren
I0408 06:48:49.042 7 wren-ai-service:40] Imported Provider: src.providers.llm
I0408 06:48:49.061 7 wren-ai-service:64] Registering provider: azure_openai_llm
I0408 06:48:49.061 7 wren-ai-service:40] Imported Provider: src.providers.llm.azure_openai
I0408 06:48:49.062 7 wren-ai-service:64] Registering provider: litellm_llm
I0408 06:48:49.062 7 wren-ai-service:40] Imported Provider: src.providers.llm.litellm
I0408 06:48:49.066 7 wren-ai-service:64] Registering provider: ollama_llm
I0408 06:48:49.066 7 wren-ai-service:40] Imported Provider: src.providers.llm.ollama
I0408 06:48:49.180 7 wren-ai-service:64] Registering provider: openai_llm
I0408 06:48:49.181 7 wren-ai-service:40] Imported Provider: src.providers.llm.openai
I0408 06:48:49.181 7 wren-ai-service:40] Imported Provider: src.providers.loader
I0408 06:48:49.181 7 wren-ai-service:15] initializing provider: litellm_embedder
I0408 06:48:49.181 7 wren-ai-service:91] Getting provider: litellm_embedder from {'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'litellm_embedder': <class 'src.providers.embedder.litellm.LitellmEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
I0408 06:48:49.182 7 wren-ai-service:176] Initializing LitellmEmbedder provider with API base: http://192.168.40.179:11434
I0408 06:48:49.182 7 wren-ai-service:179] Using Embedding Model: ollama/bge-m3:latest
I0408 06:48:49.182 7 wren-ai-service:15] initializing provider: litellm_llm
I0408 06:48:49.182 7 wren-ai-service:91] Getting provider: litellm_llm from {'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'litellm_embedder': <class 'src.providers.embedder.litellm.LitellmEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
I0408 06:48:49.182 7 wren-ai-service:15] initializing provider: qdrant
I0408 06:48:49.182 7 wren-ai-service:91] Getting provider: qdrant from {'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'litellm_embedder': <class 'src.providers.embedder.litellm.LitellmEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
I0408 06:48:49.182 7 wren-ai-service:405] Using Qdrant Document Store with Embedding Model Dimension: 1024
I0408 06:48:49.670 7 wren-ai-service:405] Using Qdrant Document Store with Embedding Model Dimension: 1024
I0408 06:48:50.145 7 wren-ai-service:405] Using Qdrant Document Store with Embedding Model Dimension: 1024
I0408 06:48:50.648 7 wren-ai-service:405] Using Qdrant Document Store with Embedding Model Dimension: 1024
I0408 06:48:51.174 7 wren-ai-service:15] initializing provider: wren_ui
I0408 06:48:51.175 7 wren-ai-service:91] Getting provider: wren_ui from {'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'litellm_embedder': <class 'src.providers.embedder.litellm.LitellmEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
I0408 06:48:51.175 7 wren-ai-service:24] Using Engine: wren_ui
I0408 06:48:51.175 7 wren-ai-service:15] initializing provider: wren_ibis
I0408 06:48:51.176 7 wren-ai-service:91] Getting provider: wren_ibis from {'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'litellm_embedder': <class 'src.providers.embedder.litellm.LitellmEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
I0408 06:48:51.176 7 wren-ai-service:101] Using Engine: wren_ibis
ERROR:    Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 692, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
    async with original_context(app) as maybe_original_state:
  File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/src/main.py", line 32, in lifespan
    app.state.service_container = create_service_container(pipe_components, settings)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/globals.py", line 50, in create_service_container
    "semantics_description": generation.SemanticsDescription(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/pipelines/generation/semantics_description.py", line 213, in __init__
    "generator": llm_provider.get_generator(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get_generator'

ERROR:    Application startup failed. Exiting.
```
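The final `AttributeError` means the `semantics_description` pipeline was handed `None` instead of an LLM provider, i.e. the `llm:` name referenced by that pipe never resolved to an initialized model. A minimal sketch of that failure mode (the dict and names below are illustrative, not WrenAI's actual lookup code):

```python
# Sketch of the failure mode behind the traceback above: looking up a provider
# by a name that was never registered returns None, and calling a method on
# that None raises the same AttributeError seen in the log.
initialized_providers = {
    # the key format is "<provider>.<model>", matching the config
    "litellm_llm.ollama_chat/deepseek-r1:7b": "an-llm-provider-object",
}

# A pipe referencing a name that does not match exactly gets None back:
llm_provider = initialized_providers.get("litellm_llm.deepseek-r1:7b")

try:
    llm_provider.get_generator()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'get_generator'
```

So the first things to compare are the model names in the `config.yaml` pipeline section against the declared model ids, character for character.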

The `.env` file is as follows:

```
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=.

# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

# ai service settings
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

# vendor keys
OPENAI_API_KEY=

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.18.0-rc.2
WREN_ENGINE_VERSION=0.14.8
WREN_AI_SERVICE_VERSION=0.18.1
IBIS_SERVER_VERSION=0.14.8
WREN_UI_VERSION=0.23.2
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

# for other services
POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true

# this is for telemetry to know the model; ai-service might be able to provide an endpoint to get the information
GENERATION_MODEL=gpt-4o-mini

LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposed to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false

EMBEDDER_OLLAMA_URL=http://192.168.40.179:11434
```

The config file is below. I am running the deepseek model via ollama. Ollama is not deployed in docker; it was installed from source and is managed with systemd. I modified the config following the example at https://github.com/Canner/WrenAI/blob/release/ui/0.23.3/wren-ai-service/docs/config_examples/config.ollama.yaml:

```yaml
type: llm
provider: litellm_llm
timeout: 120
models:
  # if you are using mac/windows, don't change api_base; if you are using linux,
  # please search "Run Ollama in docker container" in this page:
  # https://docs.getwren.ai/oss/ai_service/guide/custom_llm#running-wren-ai-with-your-custom-llm-embedder
  - api_base: http://192.168.40.179:11434
    model: ollama_chat/deepseek-r1:7b  # ollama_chat/<ollama_model_name>
    alias: default
    timeout: 600
    kwargs:
      n: 1
      temperature: 0

---
type: embedder
provider: litellm_embedder
models:
  - model: ollama/bge-m3:latest  # put your ollama embedder model name here, ollama/<ollama_model_name>
    alias: default
    api_base: http://192.168.40.179:11434  # see the linux note above
    timeout: 600

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://wren-ibis:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 1024
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_answer
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_breakdown
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: relationship_recommendation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: intent_classification
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.ollama/bge-m3:latest
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.ollama/bge-m3:latest
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: preprocess_sql_data
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: chart_adjustment
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_question_generation
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_generation_reasoning
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
  - name: sql_regeneration
    llm: litellm_llm.ollama_chat/deepseek-r1:7b
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.ollama/bge-m3:latest
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant

---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
```
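One thing worth double-checking in a config like this: every `llm:` and `embedder:` reference under `pipes` must exactly match the `<provider>.<model>` id of a declared model, otherwise the provider lookup yields `None` at startup. A rough self-check sketch (my own helper operating on the parsed YAML documents as plain dicts, not part of WrenAI):

```python
# Rough sketch: cross-check pipeline references against declared models.
# `docs` stands for the list of YAML documents in config.yaml after parsing
# (e.g. via yaml.safe_load_all); the helper names are hypothetical.

def declared_components(docs):
    """Collect '<provider>.<model>' ids from llm/embedder documents."""
    known = set()
    for doc in docs:
        if doc.get("type") in ("llm", "embedder"):
            for m in doc.get("models", []):
                known.add(f"{doc['provider']}.{m['model']}")
    return known

def unresolved_pipe_refs(docs):
    """Return (pipe_name, reference) pairs that match no declared model."""
    known = declared_components(docs)
    missing = []
    for doc in docs:
        if doc.get("type") == "pipeline":
            for pipe in doc.get("pipes", []):
                for key in ("llm", "embedder"):
                    ref = pipe.get(key)
                    if ref is not None and ref not in known:
                        missing.append((pipe["name"], ref))
    return missing
```

For example, a pipe declared as `llm: litellm_llm.deepseek-r1:7b` (missing the `ollama_chat/` prefix) would be reported as unresolved, which is exactly the situation that produces the `get_generator` AttributeError above.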

Hoping someone can help me work through this.

XiaoWuGg avatar Apr 08 '25 07:04 XiaoWuGg

@XiaoWuGg could you use these service versions and try again?

```
WREN_ENGINE_VERSION=0.14.8
WREN_AI_SERVICE_VERSION=0.19.3
IBIS_SERVER_VERSION=0.14.8
WREN_UI_VERSION=0.24.1
WREN_BOOTSTRAP_VERSION=0.1.5
```

cyyeh avatar Apr 08 '25 08:04 cyyeh

If ollama is not running in docker, there may be a network-access problem between the containers and the host. Try checking whether the ollama port is reachable from inside a docker container.
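A quick way to test that, assuming `curl` on the host and the `curlimages/curl` image for the in-container check (the network name `wrenai_default` is the usual compose default here and may need adjusting):

```shell
# From the host, verify ollama answers at all (/api/tags lists installed models):
curl -s http://192.168.40.179:11434/api/tags

# From inside a container attached to the WrenAI compose network, verify the
# same URL is reachable; a timeout here points to a firewall or an ollama
# instance bound only to 127.0.0.1 (check OLLAMA_HOST in the systemd unit):
docker run --rm --network wrenai_default curlimages/curl \
  -s http://192.168.40.179:11434/api/tags
```

If the host check succeeds but the container check times out, the ollama systemd service is likely listening only on localhost.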

kevin10355325345 avatar Nov 07 '25 10:11 kevin10355325345