KeyError: 'semantics_description'
My service keeps reporting this error: KeyError: 'semantics_description'. How can I solve it? My .env is below.
.env:

COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64
PROJECT_DIR=.

# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

# ai service settings
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

# vendor keys
LLM_OPENAI_API_KEY=
EMBEDDER_OPENAI_API_KEY=
LLM_AZURE_OPENAI_API_KEY=
EMBEDDER_AZURE_OPENAI_API_KEY=
QDRANT_API_KEY=
OPENAI_API_KEY=
LLM_OLLAMA_API_KEY=2222222
EMBEDDER_OLLAMA_API_KEY=11111

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.13.1
WREN_AI_SERVICE_VERSION=0.15.7
IBIS_SERVER_VERSION=0.13.1
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

# for other services
POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true
# this is for telemetry to know the model, i think ai-service might be able to provide a endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false
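For context on where the error tends to come from: a KeyError like this usually means the service looks up a pipeline name in the mapping it builds from config.yaml, and that name is missing from the file. A minimal Python sketch of that failure mode, with hypothetical config contents (not WrenAI's actual code):

import yaml

# Hypothetical config contents: an older example file that does not
# declare a "semantics_description" pipeline.
CONFIG_YAML = """
pipelines:
  - name: sql_generation
  - name: db_schema_retrieval
"""

config = yaml.safe_load(CONFIG_YAML)

# Build a name -> pipeline-config mapping, as a service typically would.
pipelines = {p["name"]: p for p in config["pipelines"]}

# If the running service version expects a pipeline that the config never
# declared, the lookup fails with the same error seen in the logs:
#     KeyError: 'semantics_description'
semantics_pipeline = pipelines["semantics_description"]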
Hi @chantonmm, check the comments in #1333. People have reported similar issues there.
The comments in https://github.com/Canner/WrenAI/issues/1333 don't explain how to solve it either. So how can I solve it?
Has this problem been solved? I am getting the same error.
Please try to use the latest config examples and the latest version of Wren AI
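In case it helps anyone else hitting this: before redeploying, you can check whether the config.yaml mounted into wren-ai-service actually declares the pipeline named in the error. A small sketch, assuming a pipeline section whose entries carry a "name" field; the file path and exact keys are assumptions, so adjust them to your setup:

import sys
import yaml

# Assumptions: the AI service config is a (possibly multi-document) YAML file
# whose pipeline entries sit under "pipes" or "pipelines", each with a "name".
CONFIG_PATH = "config.yaml"
REQUIRED = "semantics_description"

with open(CONFIG_PATH) as f:
    docs = list(yaml.safe_load_all(f))

names = set()
for doc in docs:
    if not isinstance(doc, dict):
        continue
    for key in ("pipes", "pipelines"):
        for pipe in doc.get(key) or []:
            if isinstance(pipe, dict) and "name" in pipe:
                names.add(pipe["name"])

if REQUIRED in names:
    print(f"ok: '{REQUIRED}' pipeline is declared")
else:
    print(f"missing: '{REQUIRED}' is not declared; regenerate config.yaml from the latest example")
    sys.exit(1)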
Even I am facing the same issue. Any solution yet?