WrenAI
When the question-answering phase is at "Generating SQL statement" and the Docker logs show an SQL error, the UI remains stuck on the "Generating" screen indefinitely.
Describe the bug
Sometimes the generated SQL is not correct, which leads to the following situation: when the question-answering stage reaches "Generating SQL statement", the Docker logs report an error:
ibis-server | 2025-06-03 14:45:02.387 | 2025-06-03 06:45:02.384 | [7e9321a2-1c10-4677-a8d3-f4d44395cb4e] | ERROR | main.custom_http_error_handler:84 - Request failed
ibis-server | 2025-06-03 14:45:02.387 | Traceback (most recent call last):
ibis-server | 2025-06-03 14:45:02.387 | File "/app/mdl/rewriter.py", line 119, in rewrite
ibis-server | 2025-06-03 14:45:02.387 | return await to_thread.run_sync(session_context.transform_sql, sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
ibis-server | 2025-06-03 14:45:02.387 | return await get_async_backend().run_sync_in_worker_thread(
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
ibis-server | 2025-06-03 14:45:02.387 | return await future
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 967, in run
ibis-server | 2025-06-03 14:45:02.387 | result = context.run(func, *args)
ibis-server | 2025-06-03 14:45:02.387 | Exception: DataFusion error: Error during planning: Invalid function 'format'.
ibis-server | 2025-06-03 14:45:02.387 | Did you mean 'concat'?
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | During handling of the above exception, another exception occurred:
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | Traceback (most recent call last):
ibis-server | 2025-06-03 14:45:02.387 | File "/app/routers/v3/connector.py", line 74, in query
ibis-server | 2025-06-03 14:45:02.387 | rewritten_sql = await Rewriter(
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/opentelemetry/util/_decorator.py", line 71, in async_wrapper
ibis-server | 2025-06-03 14:45:02.387 | return await func(*args, **kwargs) # type: ignore
ibis-server | 2025-06-03 14:45:02.387 | File "/app/mdl/rewriter.py", line 57, in rewrite
ibis-server | 2025-06-03 14:45:02.387 | planned_sql = await self._rewriter.rewrite(manifest_str, sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/opentelemetry/util/_decorator.py", line 71, in async_wrapper
ibis-server | 2025-06-03 14:45:02.387 | return await func(*args, **kwargs) # type: ignore
ibis-server | 2025-06-03 14:45:02.387 | File "/app/mdl/rewriter.py", line 121, in rewrite
ibis-server | 2025-06-03 14:45:02.387 | raise RewriteError(str(e))
ibis-server | 2025-06-03 14:45:02.387 | app.mdl.rewriter.RewriteError: DataFusion error: Error during planning: Invalid function 'format'.
ibis-server | 2025-06-03 14:45:02.387 | Did you mean 'concat'?
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | During handling of the above exception, another exception occurred:
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | Traceback (most recent call last):
ibis-server | 2025-06-03 14:45:02.387 | File "/app/model/connector.py", line 61, in dry_run
ibis-server | 2025-06-03 14:45:02.387 | self._connector.dry_run(sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/model/connector.py", line 86, in dry_run
ibis-server | 2025-06-03 14:45:02.387 | super().dry_run(sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/usr/local/lib/python3.11/contextlib.py", line 81, in inner
ibis-server | 2025-06-03 14:45:02.387 | return func(*args, **kwds)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/model/connector.py", line 77, in dry_run
ibis-server | 2025-06-03 14:45:02.387 | self.connection.sql(sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/ibis/backends/sql/__init__.py", line 179, in sql
ibis-server | 2025-06-03 14:45:02.387 | schema = self._get_schema_using_query(query)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/ibis/backends/mssql/__init__.py", line 358, in _get_schema_using_query
ibis-server | 2025-06-03 14:45:02.387 | raise com.IbisInputError(f".sql failed with message: {err_msg}")
ibis-server | 2025-06-03 14:45:02.387 | ibis.common.exceptions.IbisInputError: .sql failed with message: “,”附近有语法错误。
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | During handling of the above exception, another exception occurred:
ibis-server | 2025-06-03 14:45:02.387 |
ibis-server | 2025-06-03 14:45:02.387 | Traceback (most recent call last):
ibis-server | 2025-06-03 14:45:02.387 | > File "/app/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
ibis-server | 2025-06-03 14:45:02.387 | await app(scope, receive, sender)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
ibis-server | 2025-06-03 14:45:02.387 | response = await f(request)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
ibis-server | 2025-06-03 14:45:02.387 | raw_response = await run_endpoint_function(
ibis-server | 2025-06-03 14:45:02.387 | File "/app/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
ibis-server | 2025-06-03 14:45:02.387 | return await dependant.call(**values)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/routers/v3/connector.py", line 155, in query
ibis-server | 2025-06-03 14:45:02.387 | return await v2.connector.query(
ibis-server | 2025-06-03 14:45:02.387 | File "/app/routers/v2/connector.py", line 89, in query
ibis-server | 2025-06-03 14:45:02.387 | connector.dry_run(rewritten_sql)
ibis-server | 2025-06-03 14:45:02.387 | File "/app/model/connector.py", line 63, in dry_run
ibis-server | 2025-06-03 14:45:02.387 | raise QueryDryRunError(f"Exception: {type(e)}, message: {e!s}")
ibis-server | 2025-06-03 14:45:02.387 | app.model.connector.QueryDryRunError: Exception: <class 'ibis.common.exceptions.IbisInputError'>, message: .sql failed with message: “,”附近有语法错误。
ibis-server | 2025-06-03 14:45:02.388 | 2025-06-03 06:45:02.386 | [7e9321a2-1c10-4677-a8d3-f4d44395cb4e] | INFO | __init__.dispatch:29 - Request ended
ibis-server | 2025-06-03 14:45:02.388 | INFO 172.20.0.6:39926 - "POST /v3/connector/mssql/query?dryRun=true HTTP/1.1" 422
wren-ui | 2025-06-03 14:45:02.393 | [2025-06-03T06:45:02.393] [DEBUG] IbisAdaptor - Dry run error: Exception: <class 'ibis.common.exceptions.IbisInputError'>, message: .sql failed with message: “,”附近有语法错误。
wren-ui | 2025-06-03 14:45:02.394 | [2025-06-03T06:45:02.394] [ERROR] APOLLO - == original error ==
wren-ui | 2025-06-03 14:45:02.394 | [2025-06-03T06:45:02.394] [ERROR] APOLLO - AxiosError: Request failed with status code 422
wren-ui | 2025-06-03 14:45:02.394 | at settle (file:///app/node_modules/axios/lib/core/settle.js:19:12)
wren-ui | 2025-06-03 14:45:02.394 | at IncomingMessage.handleStreamEnd (file:///app/node_modules/axios/lib/adapters/http.js:599:11)
wren-ui | 2025-06-03 14:45:02.394 | at IncomingMessage.emit (node:events:529:35)
wren-ui | 2025-06-03 14:45:02.394 | at endReadableNT (node:internal/streams/readable:1400:12)
wren-ui | 2025-06-03 14:45:02.394 | at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
wren-ui | 2025-06-03 14:45:02.394 | at Axios.request (file:///app/node_modules/axios/lib/core/Axios.js:45:41)
wren-ui | 2025-06-03 14:45:02.394 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
wren-ui | 2025-06-03 14:45:02.394 | at async f.dryRun (/app/.next/server/chunks/980.js:1:2724)
wren-ui | 2025-06-03 14:45:02.394 | at async c.ibisDryRun (/app/.next/server/chunks/980.js:8:43759)
wren-ui | 2025-06-03 14:45:02.394 | at async c.preview (/app/.next/server/chunks/980.js:8:43184)
wren-ui | 2025-06-03 14:45:02.394 | at async N.previewSql (/app/.next/server/pages/api/graphql.js:1:85134)
wren-ai-service | 2025-06-03 14:45:02.395 | E0603 06:45:02.394 8 wren-ai-service:69] Error executing SQL: Exception: <class 'ibis.common.exceptions.IbisInputError'>, message: .sql failed with message: “,”附近有语法错误。
wren-ai-service | 2025-06-03 14:45:02.395 | E0603 06:45:02.394 8 wren-ai-service:69] Error executing SQL: Exception: <class 'ibis.common.exceptions.IbisInputError'>, message: .sql failed with message: “,”附近有语法错误。
wren-ai-service | 2025-06-03 14:45:02.436 | I0603 06:45:02.436 8 wren-ai-service:133] SQLCorrection pipeline is running...
wren-ai-service | 2025-06-03 14:45:02.436 | I0603 06:45:02.436 8 wren-ai-service:133] SQLCorrection pipeline is running...
wren-ui | 2025-06-03 14:45:03.041 | [2025-06-03T06:45:03.041] [INFO] AskingTaskTracker - Polling for updates for task d7893822-e003-4a95-beca-c6a5d253f320
wren-ai-service | 2025-06-03 14:45:03.043 | INFO: 172.20.0.6:34712 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ai-service | 2025-06-03 14:45:03.043 | INFO: 172.20.0.6:34712 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ui | 2025-06-03 14:45:03.044 | [2025-06-03T06:45:03.044] [INFO] AskingTaskTracker - Updating task d7893822-e003-4a95-beca-c6a5d253f320 in database
wren-ai-service | 2025-06-03 14:45:03.647 | WARNING: Invalid HTTP request received.
wren-ai-service | 2025-06-03 14:45:03.647 | WARNING: Invalid HTTP request received.
wren-ui | 2025-06-03 14:45:04.041 | [2025-06-03T06:45:04.040] [INFO] AskingTaskTracker - Polling for updates for task d7893822-e003-4a95-beca-c6a5d253f320
wren-ai-service | 2025-06-03 14:45:04.043 | INFO: 172.20.0.6:34714 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ai-service | 2025-06-03 14:45:04.043 | INFO: 172.20.0.6:34714 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ai-service | 2025-06-03 14:45:04.915 | WARNING: Invalid HTTP request received.
wren-ai-service | 2025-06-03 14:45:04.915 | WARNING: Invalid HTTP request received.
wren-ui | 2025-06-03 14:45:05.042 | [2025-06-03T06:45:05.041] [INFO] AskingTaskTracker - Polling for updates for task d7893822-e003-4a95-beca-c6a5d253f320
wren-ai-service | 2025-06-03 14:45:05.044 | INFO: 172.20.0.6:34724 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ai-service | 2025-06-03 14:45:05.044 | INFO: 172.20.0.6:34724 - "GET /v1/asks/d7893822-e003-4a95-beca-c6a5d253f320/result HTTP/1.1" 200 OK
wren-ui | 2025-06-03 14:45:06.041 | [2025-06-03T06:45:06.041] [INFO] AskingTaskTracker - Polling for updates for task d7893822-e003-4a95-beca-c6a5d253f320
(The SQL Server message “,”附近有语法错误。 translates to "Incorrect syntax near ','.") Although the SQLCorrection pipeline starts, no failure state ever reaches the frontend: the UI remains stuck on the "Generating" screen indefinitely.
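For context, the chain in the trace above is: the generated SQL calls format(), which DataFusion cannot plan (it suggests concat); the rewrite failure then cascades into an MSSQL dry-run syntax error, and ibis-server answers the dry run with HTTP 422. A rough illustration (a purely hypothetical helper, not WrenAI code) of screening generated SQL for functions the planning engine is known not to support, so the problem could be reported up front:

```python
import re

# Hypothetical pre-check: flag function calls in generated SQL that the
# planning engine is known not to support, so the failure can be surfaced
# to the user instead of appearing later as an opaque dry-run error.
UNSUPPORTED = {"format"}  # e.g. DataFusion rejects format() and suggests concat

def unsupported_functions(sql: str, unsupported=UNSUPPORTED):
    # naive scan: any identifier immediately followed by "(" counts as a call
    calls = {m.group(1).lower() for m in re.finditer(r"\b([A-Za-z_]\w*)\s*\(", sql)}
    return sorted(calls & set(unsupported))

print(unsupported_functions("SELECT format(order_date, 'yyyy-MM') FROM orders"))
# → ['format']
```

This is only a sketch; a real implementation would parse the SQL per dialect rather than regex-match it.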
Expected behavior
The system should alert the user when SQL generation fails and allow them to edit the SQL or adjust prior steps.
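The hang suggests the frontend polling loop has no terminal branch for this failure. A minimal sketch (hypothetical names and statuses; the actual WrenAI task API may differ) of a poller that stops and surfaces the error instead of polling forever:

```python
import time

def poll_task(fetch_result, interval_s=1.0, timeout_s=60.0):
    """fetch_result() -> dict like {"status": ..., "error": ..., "sql": ...}."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_result()
        status = result.get("status")
        if status == "finished":
            return result  # success: hand the SQL to the UI
        if status in ("failed", "stopped"):
            # terminal failure: alert the user so they can edit the SQL or retry
            raise RuntimeError(f"SQL generation failed: {result.get('error')}")
        time.sleep(interval_s)  # still generating / correcting
    raise TimeoutError("SQL generation did not finish before the deadline")
```

Even a coarse overall timeout like this would turn the indefinite "Generating" screen into an actionable error.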
Desktop (please complete the following information):
- OS: Windows 11
- Browser: Edge
Wren AI Information
- WREN_PRODUCT_VERSION=0.22.2
- WREN_ENGINE_VERSION=0.15.13
- WREN_AI_SERVICE_VERSION=0.22.4
- IBIS_SERVER_VERSION=0.15.13
- WREN_UI_VERSION=0.27.4
- WREN_BOOTSTRAP_VERSION=0.1.5
- Please share config.yaml with us; it should be located at ~/.wrenai/config.yaml.
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 4 steps basically:
# 1. you need to use your own llm and embedding models
# 2. fill in the embedding model dimension in the document_store section
# 3. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 4. you need to fill in correct llm and embedding models in the pipe definitions
type: llm
provider: litellm_llm
models:
  # put DEEPSEEK_API_KEY=<your_api_key> in ~/.wrenai/.env
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-reasoner
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-chat
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-coder
    alias: default
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: json_object
---
type: embedder
provider: litellm_embedder
models:
  # define OPENAI_API_KEY=<api_key> in ~/.wrenai/.env if you are using openai embedding model
  # please refer to LiteLLM documentation for more details: https://docs.litellm.ai/docs/providers
  - model: ollama/nomic-embed-text:latest # put your embedding model name here, if it is not openai embedding model, should be <provider>/<model_name>
    alias: default
    api_base: http://host.docker.internal:11434 # change this according to your embedding model
    timeout: 120
---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
---
type: engine
provider: wren_ibis
endpoint: http://wren-ibis:8000
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768 # put your embedding model dimension here
timeout: 120
recreate_index: true
---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name>, such as litellm_llm.gpt-4o-2024-08-06
# the pipes may not be the latest version; please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: sql_answer
    llm: litellm_llm.deepseek/deepseek-chat
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.deepseek/deepseek-chat
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.deepseek/deepseek-reasoner
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.deepseek/deepseek-reasoner
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default
---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 30
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
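Related to the fix: the settings above already cap correction attempts (max_sql_correction_retries: 3), so once the SQLCorrection pipeline exhausts its retries the task could move to an explicit failed state that the UI can display. A sketch of that flow (hypothetical function names, not WrenAI's actual code):

```python
# Hypothetical sketch of a generate/dry-run/correct loop that ends in an
# explicit "failed" state after a bounded number of retries, mirroring the
# max_sql_correction_retries setting above.
def generate_with_correction(generate, correct, dry_run, max_retries=3):
    sql = generate()
    for attempt in range(max_retries + 1):
        try:
            dry_run(sql)  # validate against the target engine
            return {"status": "finished", "sql": sql}
        except Exception as e:
            if attempt == max_retries:
                return {"status": "failed", "error": str(e)}
            sql = correct(sql, str(e))  # ask the LLM to fix the SQL, then retry
```

Whatever shape the real fix takes, the key point is that the terminal status must propagate to the /v1/asks/.../result endpoint the UI polls, so the frontend can stop showing "Generating".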