
Runtime exception when using the qwen3 LLM with the qwen text-embedding-v3 embedder

Open we1s3 opened this issue 4 months ago • 21 comments

Describe the bug A runtime exception occurs when using the qwen3 LLM together with the qwen text-embedding-v3 embedder: after entering a question and clicking Send, the client shows "Failed to create asking task."

config.yaml

type: llm
provider: litellm_llm
models:
  # Tongyi Qianwen Qwen-Max - thinking mode disabled
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    model: dashscope/qwen-plus
    alias: default
    timeout: 900
    kwargs:
      n: 1
      temperature: 0.1
      top_p: 0.8
      #allowed_openai_params: ["extra_headers"]
      #extra_headers:
        #X-DashScope-Async: "false"
  
  # Tongyi Qianwen Qwen-Plus - fast mode
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    model: dashscope/qwen-plus
    alias: qwen-fast
    timeout: 900
    kwargs:
      n: 1
      temperature: 0.1
      top_p: 0.8
      #allowed_openai_params: ["extra_headers"]
      #extra_headers:
        #X-DashScope-Async: "false"
  
  # Tongyi Qianwen Qwen-Max - thinking mode enabled for complex tasks
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    model: dashscope/qwen-plus
    alias: qwen-thinking
    timeout: 1200  # thinking mode needs a longer timeout
    kwargs:
      n: 1
      temperature: 0.3
      top_p: 0.9
      #allowed_openai_params: ["extra_headers"]
      #extra_headers:
        #X-DashScope-Async: "false"
  
  # Tongyi Qianwen Qwen-Max - JSON response mode
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    model: dashscope/qwen-plus
    alias: qwen-json
    timeout: 900
    kwargs:
      n: 1
      temperature: 0.05
      top_p: 0.7
      response_format:
        type: json_object
      #allowed_openai_params: ["extra_headers"]
      #extra_headers:
        #X-DashScope-Async: "false"

---
type: embedder
provider: litellm_embedder
models:
  - model: dashscope/text-embedding-v3
    alias: default
    api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    timeout: 120
    kwargs:
      #allowed_openai_params: ["extra_headers"]
      #extra_headers:
        #X-DashScope-Async: "false"

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://ibis-server:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 1024
timeout: 120
recreate_index: true

---
# Replace the original models with Alibaba Cloud models
# Format: <provider>.<model_alias>
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_answer
    llm: litellm_llm.qwen-fast
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.qwen-fast
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.qwen-thinking
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.qwen-thinking
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default

---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
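
One note on the embedder section above: the `dashscope/` prefix is not a mapped LiteLLM provider for embedding calls (this is what surfaces later in the logs as `LiteLLMUnknownProvider`). Since DashScope exposes an OpenAI-compatible endpoint, a commonly suggested workaround (untested here, and exact key names may differ by WrenAI version) is to route the embedder through LiteLLM's `openai/` provider against the same `api_base`:

```yaml
# Hypothetical embedder config: use the openai/ prefix so LiteLLM routes the
# request as a generic OpenAI-compatible call to DashScope's endpoint,
# instead of the unmapped dashscope/ provider prefix.
type: embedder
provider: litellm_embedder
models:
  - model: openai/text-embedding-v3
    alias: default
    api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    timeout: 120
```

The API key would still need to be supplied however your deployment expects it (e.g. an environment variable the litellm embedder reads); that part is deployment-specific.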

Wren AI Information

  • Version: main latest

Relevant log output

  • [wren-ai-service:0.27.1]

```
> embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error<
> Node inputs:
{'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...',
 'histories': [],
 'query': '不同采购订单类型的订单金额分布情况如何?'}

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding
    return await embedder.run(query)
  File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
    ret = await target(*args, **kwargs)
  File "/src/providers/embedder/litellm.py", line 62, in run
    response = await aembedding(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
    result = await original_function(*args, **kwargs)
```

we1s3 · Aug 20 '25 10:08

@we1s3 could you attach the complete wren-ai-service container log file? Thanks

cyyeh · Aug 20 '25 10:08

Looks like what you're running into is very close to Problem #7 (provider/embedding mismatch in config.yaml). When the embedding model (like qwen text-embedding-v3) and the LLM provider settings don't line up exactly, the runtime throws that "Failed to create asking task." error.

If you want, I can share the detailed checklist that maps exactly which mismatches usually trigger this; just let me know. 👍

onestardao · Aug 20 '25 12:08

> @we1s3 could you attach the complete wren-ai-service container log file? thanks

Thank you for your reply. Perhaps this error is related to a mismatched vector model?

```
2025-08-20 17:39:41.577 | I0820 09:39:41.577 8 wren-ai-service:32] Project ID: None Converting SQL pairs to documents...
2025-08-20 17:39:41.578 | INFO: 172.18.0.6:52172 - "GET /v1/semantics-preparations/4ab8f9f2e44118b50727fbb8623936086abe9f9c/status HTTP/1.1" 200 OK
2025-08-20 17:39:41.580 | I0820 09:39:41.580 8 wren-ai-service:32] Project ID: None, Cleaning documents in project_meta
2025-08-20 17:39:41.582 | I0820 09:39:41.582 8 wren-ai-service:32] Project ID: None, Cleaning documents in view_questions
2025-08-20 17:39:42.813 | W0820 09:39:42.813 8 wren-ai-service:291] Calling QdrantDocumentStore.write_documents() with empty list
2025-08-20 17:39:42.815 | INFO: 172.18.0.6:46798 - "GET /v1/semantics-preparations/4ab8f9f2e44118b50727fbb8623936086abe9f9c/status HTTP/1.1" 200 OK

********************************************************************************
> embedding [src.pipelines.indexing.table_description.embedding()] encountered an error<
> Node inputs:
{'chunk': "<Task finished name='Task-25' coro=<AsyncGraphAdap...",
 'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
********************************************************************************
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/indexing/table_description.py", line 97, in embedding
    return await embedder.run(documents=chunk["documents"])
  File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
    ret = await target(*args, **kwargs)
  File "/src/providers/embedder/litellm.py", line 154, in run
    embeddings, meta = await self._embed_batch(
  File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
    responses = await asyncio.gather(
  File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
    return await aembedding(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding
    init_response = await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper
    result = original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding
    raise LiteLLMUnknownProvider(
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers
-------------------------------------------------------------------
Oh no an error! Need help with Hamilton?
Join our slack and ask for help!
https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
-------------------------------------------------------------------

E0820 09:39:44.279 8 wren-ai-service:100] Failed to prepare semantics: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
  File "/src/web/v1/services/semantics_preparation.py", line 92, in prepare_semantics
    await asyncio.gather(*tasks)
  File "/src/pipelines/indexing/table_description.py", line 153, in run
    return await self._pipe.execute(
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 375, in execute
    raise e
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 366, in execute
    outputs = await self.raw_execute(_final_vars, overrides, display_graph, inputs=inputs)
  [... hamilton/langfuse frames, then the same embedder traceback as above ...]
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers

********************************************************************************
> embedding [src.pipelines.indexing.db_schema.embedding()] encountered an error<
> Node inputs:
{'chunk': "<Task finished name='Task-11' coro=<AsyncGraphAdap...",
 'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
********************************************************************************
  [... same traceback as above, entered via /src/pipelines/indexing/db_schema.py, line 313, in embedding ...]
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers

2025-08-20 17:39:44.820 | INFO: 172.18.0.6:46804 - "GET /v1/semantics-preparations/4ab8f9f2e44118b50727fbb8623936086abe9f9c/status HTTP/1.1" 200 OK
2025-08-20 17:39:44.837 | INFO: 172.18.0.6:46814 - "POST /v1/question-recommendations HTTP/1.1" 200 OK
2025-08-20 17:39:44.839 | I0820 09:39:44.838 8 wren-ai-service:187] Request 86758c6c-1c38-423b-9a15-880991b7a008: Generate Question Recommendation pipeline is running...
2025-08-20 17:39:44.843 | I0820 09:39:44.842 8 wren-ai-service:262] Question Recommendation pipeline is running...
  [... repeated GET /v1/question-recommendations polling lines ...]
2025-08-20 17:39:53.133 | I0820 09:39:53.133 8 wren-ai-service:507] Ask Retrieval pipeline is running...
2025-08-20 17:39:53.134 | I0820 09:39:53.134 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running...
2025-08-20 17:39:53.134 | I0820 09:39:53.134 8 wren-ai-service:218] Instructions Retrieval pipeline is running...
  [... the three lines above repeat ~9 more times ...]

********************************************************************************
> embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error<
> Node inputs:
{'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...',
 'histories': [],
 'query': "'What is the total order value of outsourcing orde..."}
********************************************************************************
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
```
| raise e 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:53.411 | result = await func(*args, **kwargs) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:53.411 | return await embedder.run(query) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry 2025-08-20 17:39:53.411 | ret = await target(*args, **kwargs) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:53.411 | response = await aembedding( 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:53.411 | raise e 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:53.411 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:53.411 | raise exception_type( 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:53.411 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:53.411 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:53.411 | result = original_function(*args, **kwargs) 2025-08-20 17:39:53.411 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:53.411 | raise exception_type( 2025-08-20 17:39:53.411 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:53.411 | raise LiteLLMUnknownProvider( 2025-08-20 17:39:53.411 | litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:53.411 | ------------------------------------------------------------------- 2025-08-20 17:39:53.411 | Oh no an error! Need help with Hamilton? 2025-08-20 17:39:53.411 | Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g 2025-08-20 17:39:53.411 | ------------------------------------------------------------------- 2025-08-20 17:39:53.411 | 2025-08-20 17:39:53.412 | E0820 09:39:53.411 8 wren-ai-service:156] Request 86758c6c-1c38-423b-9a15-880991b7a008: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. 
The identical traceback then repeats six more times between 17:39:53.584 and 17:39:55.181, once for each of the remaining recommended questions, always ending in the same error:

```
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers
```
Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:55.372 | 2025-08-20 17:39:55.372 | ******************************************************************************** 2025-08-20 17:39:55.372 | > embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error< 2025-08-20 17:39:55.372 | > Node inputs: 2025-08-20 17:39:55.372 | {'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...', 2025-08-20 17:39:55.372 | 'histories': [], 2025-08-20 17:39:55.372 | 'query': "'Which types of stock records (inbound, outbound, ..."} 2025-08-20 17:39:55.372 | ******************************************************************************** 2025-08-20 17:39:55.372 | Traceback (most recent call last): 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn 2025-08-20 17:39:55.372 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper 2025-08-20 17:39:55.372 | self._handle_exception(observation, e) 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception 2025-08-20 17:39:55.372 | raise e 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:55.372 | result = await func(*args, **kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:55.372 | return await embedder.run(query) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 
151, in retry 2025-08-20 17:39:55.372 | ret = await target(*args, **kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:55.372 | response = await aembedding( 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:55.372 | raise e 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:55.372 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:55.372 | raise exception_type( 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:55.372 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:55.372 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:55.372 | result = original_function(*args, **kwargs) 2025-08-20 17:39:55.372 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:55.372 | raise exception_type( 2025-08-20 17:39:55.372 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:55.372 | raise LiteLLMUnknownProvider( 
2025-08-20 17:39:55.372 | litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:55.372 | ------------------------------------------------------------------- 2025-08-20 17:39:55.372 | Oh no an error! Need help with Hamilton? 2025-08-20 17:39:55.372 | Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g 2025-08-20 17:39:55.372 | ------------------------------------------------------------------- 2025-08-20 17:39:55.372 | 2025-08-20 17:39:55.373 | E0820 09:39:55.373 8 wren-ai-service:156] Request 86758c6c-1c38-423b-9a15-880991b7a008: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:55.522 | INFO: 172.18.0.6:44192 - "GET /v1/question-recommendations/86758c6c-1c38-423b-9a15-880991b7a008 HTTP/1.1" 200 OK 2025-08-20 17:39:55.770 | 2025-08-20 17:39:55.770 | ******************************************************************************** 2025-08-20 17:39:55.770 | > embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error< 2025-08-20 17:39:55.770 | > Node inputs: 2025-08-20 17:39:55.770 | {'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...', 2025-08-20 17:39:55.770 | 'histories': [], 2025-08-20 17:39:55.770 | 'query': "'How does the total number of stock entries compar..."} 2025-08-20 17:39:55.770 | ******************************************************************************** 2025-08-20 17:39:55.770 | Traceback (most recent call last): 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, 
in new_fn 2025-08-20 17:39:55.770 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper 2025-08-20 17:39:55.770 | self._handle_exception(observation, e) 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception 2025-08-20 17:39:55.770 | raise e 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:55.770 | result = await func(*args, **kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:55.770 | return await embedder.run(query) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry 2025-08-20 17:39:55.770 | ret = await target(*args, **kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:55.770 | response = await aembedding( 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:55.770 | raise e 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:55.770 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:55.770 | 
raise exception_type( 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:55.770 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:55.770 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:55.770 | result = original_function(*args, **kwargs) 2025-08-20 17:39:55.770 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:55.770 | raise exception_type( 2025-08-20 17:39:55.770 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:55.770 | raise LiteLLMUnknownProvider( 2025-08-20 17:39:55.770 | litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:55.770 | ------------------------------------------------------------------- 2025-08-20 17:39:55.770 | Oh no an error! Need help with Hamilton? 2025-08-20 17:39:55.770 | Join our slack and ask for help! 
https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g 2025-08-20 17:39:55.770 | ------------------------------------------------------------------- 2025-08-20 17:39:55.770 | 2025-08-20 17:39:55.771 | E0820 09:39:55.770 8 wren-ai-service:156] Request 86758c6c-1c38-423b-9a15-880991b7a008: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:56.521 | INFO: 172.18.0.6:44198 - "GET /v1/question-recommendations/86758c6c-1c38-423b-9a15-880991b7a008 HTTP/1.1" 200 OK 2025-08-20 17:39:56.884 | I0820 09:39:56.884 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.884 | I0820 09:39:56.884 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.884 | I0820 09:39:56.884 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.885 | I0820 09:39:56.885 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.885 | I0820 09:39:56.885 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.885 | I0820 09:39:56.885 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.886 | I0820 09:39:56.886 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.886 | I0820 09:39:56.886 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.886 | I0820 09:39:56.886 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.886 | I0820 09:39:56.886 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.887 | I0820 09:39:56.887 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 
2025-08-20 17:39:56.887 | I0820 09:39:56.887 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.887 | I0820 09:39:56.887 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.888 | I0820 09:39:56.887 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.888 | I0820 09:39:56.888 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.888 | I0820 09:39:56.888 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.888 | I0820 09:39:56.888 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.888 | I0820 09:39:56.888 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.889 | I0820 09:39:56.888 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.889 | I0820 09:39:56.889 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.889 | I0820 09:39:56.889 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.889 | I0820 09:39:56.889 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.889 | I0820 09:39:56.889 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.890 | I0820 09:39:56.890 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 2025-08-20 17:39:56.890 | I0820 09:39:56.890 8 wren-ai-service:507] Ask Retrieval pipeline is running... 2025-08-20 17:39:56.891 | I0820 09:39:56.890 8 wren-ai-service:147] SqlPairs Retrieval pipeline is running... 2025-08-20 17:39:56.891 | I0820 09:39:56.891 8 wren-ai-service:218] Instructions Retrieval pipeline is running... 
2025-08-20 17:39:57.396 | 2025-08-20 17:39:57.396 | ******************************************************************************** 2025-08-20 17:39:57.396 | > embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error< 2025-08-20 17:39:57.396 | > Node inputs: 2025-08-20 17:39:57.396 | {'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...', 2025-08-20 17:39:57.396 | 'histories': [], 2025-08-20 17:39:57.396 | 'query': '根据采购金额,哪些客户群体是主要的采购来源?'} 2025-08-20 17:39:57.396 | ******************************************************************************** 2025-08-20 17:39:57.396 | Traceback (most recent call last): 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn 2025-08-20 17:39:57.396 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper 2025-08-20 17:39:57.396 | self._handle_exception(observation, e) 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception 2025-08-20 17:39:57.396 | raise e 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:57.396 | result = await func(*args, **kwargs) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:57.396 | return await embedder.run(query) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry 2025-08-20 17:39:57.396 | ret = await target(*args, **kwargs) 2025-08-20 17:39:57.396 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:57.396 | response = await aembedding( 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:57.396 | raise e 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:57.396 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:57.396 | raise exception_type( 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:57.396 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:57.396 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:57.396 | result = original_function(*args, **kwargs) 2025-08-20 17:39:57.396 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:57.396 | raise exception_type( 2025-08-20 17:39:57.396 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:57.396 | raise LiteLLMUnknownProvider( 2025-08-20 17:39:57.396 | litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM 
provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:57.396 | ------------------------------------------------------------------- 2025-08-20 17:39:57.396 | Oh no an error! Need help with Hamilton? 2025-08-20 17:39:57.396 | Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g 2025-08-20 17:39:57.396 | ------------------------------------------------------------------- 2025-08-20 17:39:57.396 | 2025-08-20 17:39:57.405 | 2025-08-20 17:39:57.405 | ******************************************************************************** 2025-08-20 17:39:57.405 | > embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error< 2025-08-20 17:39:57.405 | > Node inputs: 2025-08-20 17:39:57.405 | {'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...', 2025-08-20 17:39:57.405 | 'histories': [], 2025-08-20 17:39:57.405 | 'query': '不同订单状态(执行中、已完结)下的平均付款周期有何差异?'} 2025-08-20 17:39:57.405 | ******************************************************************************** 2025-08-20 17:39:57.405 | Traceback (most recent call last): 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn 2025-08-20 17:39:57.405 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper 2025-08-20 17:39:57.405 | self._handle_exception(observation, e) 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception 2025-08-20 17:39:57.405 | raise e 2025-08-20 17:39:57.405 | File 
"/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:57.405 | result = await func(*args, **kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:57.405 | return await embedder.run(query) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry 2025-08-20 17:39:57.405 | ret = await target(*args, **kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:57.405 | response = await aembedding( 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:57.405 | raise e 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:57.405 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:57.405 | raise exception_type( 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:57.405 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:57.405 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File 
"/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:57.405 | result = original_function(*args, **kwargs) 2025-08-20 17:39:57.405 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:57.405 | raise exception_type( 2025-08-20 17:39:57.405 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:57.405 | raise LiteLLMUnknownProvider( 2025-08-20 17:39:57.405 | litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:57.405 | E0820 09:39:57.405 8 wren-ai-service:156] Request b9405305-53c4-4ca3-9c01-f7b916718f42: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:57.405 | ------------------------------------------------------------------- 2025-08-20 17:39:57.405 | Oh no an error! Need help with Hamilton? 2025-08-20 17:39:57.405 | Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g 2025-08-20 17:39:57.405 | ------------------------------------------------------------------- 2025-08-20 17:39:57.405 | 2025-08-20 17:39:57.406 | E0820 09:39:57.406 8 wren-ai-service:156] Request b9405305-53c4-4ca3-9c01-f7b916718f42: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. 
Check supported provider and route: https://docs.litellm.ai/docs/providers 2025-08-20 17:39:57.996 | 2025-08-20 17:39:57.996 | ******************************************************************************** 2025-08-20 17:39:57.996 | > embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error< 2025-08-20 17:39:57.996 | > Node inputs: 2025-08-20 17:39:57.996 | {'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...', 2025-08-20 17:39:57.996 | 'histories': [], 2025-08-20 17:39:57.996 | 'query': '哪些供应商的订单完成率(已入库订单比例)最高?'} 2025-08-20 17:39:57.996 | ******************************************************************************** 2025-08-20 17:39:57.996 | Traceback (most recent call last): 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn 2025-08-20 17:39:57.996 | await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper 2025-08-20 17:39:57.996 | self._handle_exception(observation, e) 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception 2025-08-20 17:39:57.996 | raise e 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper 2025-08-20 17:39:57.996 | result = await func(*args, **kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding 2025-08-20 17:39:57.996 | return await embedder.run(query) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry 2025-08-20 
17:39:57.996 | ret = await target(*args, **kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/src/providers/embedder/litellm.py", line 62, in run 2025-08-20 17:39:57.996 | response = await aembedding( 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async 2025-08-20 17:39:57.996 | raise e 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async 2025-08-20 17:39:57.996 | result = await original_function(*args, **kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding 2025-08-20 17:39:57.996 | raise exception_type( 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding 2025-08-20 17:39:57.996 | init_response = await loop.run_in_executor(None, func_with_context) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run 2025-08-20 17:39:57.996 | result = self.fn(*self.args, **self.kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper 2025-08-20 17:39:57.996 | result = original_function(*args, **kwargs) 2025-08-20 17:39:57.996 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding 2025-08-20 17:39:57.996 | raise exception_type( 2025-08-20 17:39:57.996 | File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding 2025-08-20 17:39:57.996 | raise LiteLLMUnknownProvider( 2025-08-20 17:39:57.996 | 
```
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers
-------------------------------------------------------------------
Oh no an error! Need help with Hamilton?
Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
-------------------------------------------------------------------

E0820 09:39:57.996 8 wren-ai-service:156] Request b9405305-53c4-4ca3-9c01-f7b916718f42: Error validating question: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers

********************************************************************************
> embedding [src.pipelines.retrieval.db_schema_retrieval.embedding()] encountered an error<
> Node inputs:
{'embedder': '<src.providers.embedder.litellm.AsyncTextEmbedder ...',
 'histories': [],
 'query': '2023年与2024年相比,采购订单总金额的增长率是多少?'}
********************************************************************************
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
    result = await func(*args, **kwargs)
  File "/src/pipelines/retrieval/db_schema_retrieval.py", line 134, in embedding
    return await embedder.run(query)
  File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
    ret = await target(*args, **kwargs)
  File "/src/providers/embedder/litellm.py", line 62, in run
    response = await aembedding(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3554, in aembedding
    init_response = await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1060, in wrapper
    result = original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4403, in embedding
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 4380, in embedding
    raise LiteLLMUnknownProvider(
litellm.exceptions.LiteLLMUnknownProvider: litellm.BadRequestError: Unmapped LLM provider for this endpoint. You passed model=text-embedding-v3, custom_llm_provider=dashscope. Check supported provider and route: https://docs.litellm.ai/docs/providers
```

The identical traceback then repeats for each of the other recommended questions ("每月入库、出库和盘点记录的数量趋势如何变化?", "哪些地区的供应商在付款状态上表现最佳?", "外协订单与采购订单在退货金额上的差异有多大?", "每个供应商在过去一年中平均交货时间是多少天?", "不同采购订单类型的订单金额分布情况如何?"). The indexing step itself completes:

```
2025-08-20 17:55:56.040 | Project ID: None, Chunking views into documents: 0it [00:00, ?it/s]
2025-08-20 17:55:56.040 | Project ID: None, Chunking table descriptions into documents: 100%|██████████| 8/8 [00:00<00:00, 37449.14it/s]
2025-08-20 17:55:56.040 | Project ID: None, Chunking DDL commands into documents: 100%|██████████| 16/16 [00:00<00:00, 447392.43it/s]
2025-08-20 17:55:56.040 | 0%|          | 0/1 [00:00<?, ?it/s]W0820 09:39:42.820 8 wren-ai-service:291] Calling QdrantDocumentStore.write_documents() with empty list
2025-08-20 17:55:56.040 | 100it [00:00, 15348.57it/s]
```

we1s3 avatar Aug 21 '25 01:08 we1s3

Looks like what you’re running into is very close to Problem #7 (provider/embedding mismatch in config.yaml). Basically, when the embedding model (like qwen text-embedding-v3) and the LLM provider settings don’t align 100%, the runtime throws that “Failed to create asking task.” error.

If you want, I can share the detailed checklist that maps exactly which mismatches usually trigger this; just let me know.

Thank you very much for sharing. If the list is large, please send it to my email [email protected]

we1s3 avatar Aug 21 '25 01:08 we1s3

Looks like what you’re running into is very close to Problem #7 (provider/embedding mismatch in config.yaml). Basically, when the embedding model (like qwen text-embedding-v3) and the LLM provider settings don’t align 100%, the runtime throws that “Failed to create asking task.” error.

If you want, I can share the detailed checklist that maps exactly which mismatches usually trigger this; just let me know. 👍

Sorry, I don't understand the relationship between this and issue https://github.com/Canner/WrenAI/issues/7

we1s3 avatar Aug 21 '25 01:08 we1s3

Sorry for the confusion. When I wrote # earlier, I meant “problem number”, and that can look like an issue link on GitHub. From now on I will use the format No.11 to avoid accidental linking.

Based on what you described, this looks like Problem No.11, memory persistence mismatch. It usually means the memory was written/serialized in one shape and read back in another, so the UI shows empty memory even though it was initialized. This is a semantic firewall pattern, not an infra change; often no server rebuild is needed.

If you want the full checklist I keep for reproducible fixes, here it is: https://github.com/onestardao/WFGY/blob/main/ProblemMap/README.md

If you want help right now, paste the minimal logs after “server start” and one sample session request. I can point to the exact checks to run: session binding, storage location, role token alignment, and simple normalization steps.

onestardao avatar Aug 21 '25 03:08 onestardao

Successfully integrated WrenAI with the Alibaba Cloud DashScope service, using the latest AI models:

LLM model: qwen-plus, a strong large language model. Embedding model: text-embedding-v4, the latest embedding model. Details: https://github.com/lovrabet-ai/wrenai-qwen/blob/main/wrenai-dashscope-integration-guide.md

heiniuhaha avatar Sep 03 '25 15:09 heiniuhaha

Change the provider prefix of every model from dashscope to openai. (I don't know why either; the litellm docs do say dashscope.) Example: model: openai/qwen-plus

rainymoodone avatar Sep 24 '25 01:09 rainymoodone

Change the provider prefix of every model from dashscope to openai (though I don't know why either, since the litellm docs say dashscope). For example: model: openai/qwen-plus

type: llm
provider: litellm_llm
models:
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    model: openai/qwen-max
    alias: default
    timeout: 120
    kwargs:
      max_tokens: 4096
      n: 1
      seed: 0
      temperature: 0

---
type: embedder
provider: litellm_embedder
models:
  - model: openai/text-embedding-v4
    alias: default
    api_base: https://dashscope.aliyuncs.com/compatible-mode/v1 # change this according to your embedding model
    timeout: 120

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://ibis-server:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 2048 # put your embedding model dimension here
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_answer
    llm: litellm_llm.default
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.default
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.default
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.default
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default
  - name: sql_diagnosis
    llm: litellm_llm.default

---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10

Could someone help me figure out why mine doesn't work?

COCO-hy avatar Sep 24 '25 03:09 COCO-hy

@COCO-hy Is there any error message? It's probably not the same problem as mine.

rainymoodone avatar Sep 24 '25 04:09 rainymoodone

@COCO-hy Is there any error message? It's probably not the same problem as mine.

The client shows "Failed to create asking task.", the same problem as the OP. Is it the same issue?

COCO-hy avatar Sep 24 '25 04:09 COCO-hy

The client shows "Failed to create asking task.", the same problem as the OP. Is it the same issue?

You can check the specific error in Docker. Also, is OPENAI_API_KEY set in your .env?
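For reference, this is roughly how to check (the container name below is an example and depends on your compose project, so list the names first):

```shell
# List running containers to find the WrenAI AI-service container name
docker ps --format '{{.Names}}'

# Tail the last 200 lines of its logs and follow new output;
# replace the name with whatever the previous command printed
docker logs -f --tail 200 wrenai-wren-ai-service-1
```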

rainymoodone avatar Sep 24 '25 05:09 rainymoodone

The client shows "Failed to create asking task.", the same problem as the OP. Is it the same issue?

You can check the specific error in Docker. Also, is OPENAI_API_KEY set in your .env?

Yes, the key is set in .env. How do I view the error logs in Docker?

COCO-hy avatar Sep 24 '25 05:09 COCO-hy

The client shows "Failed to create asking task.", the same problem as the OP. Is it the same issue?

You can check the specific error in Docker. Also, is OPENAI_API_KEY set in your .env?

(screenshot attached)

COCO-hy avatar Sep 24 '25 05:09 COCO-hy

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

COCO-hy avatar Sep 24 '25 06:09 COCO-hy

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

kevin10355325345 avatar Oct 27 '25 12:10 kevin10355325345

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

config.yaml: this is my config. For env, just set the openai key and you're done.

COCO-hy avatar Oct 28 '25 01:10 COCO-hy

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

config.yaml: this is my config. For env, just set the openai key and you're done.

About the openai key: in principle you're not using OpenAI here, so is the openai key value actually your Alibaba Cloud DASHSCOPE_API_KEY? On my side the Alibaba Cloud config doesn't work; DeepSeek plus an Ollama embedding model deployed locally in Docker does run. I'd still like to learn the Alibaba Cloud setup, since for long-term production we'll probably use Alibaba Cloud's Bailian.

kevin10355325345 avatar Oct 28 '25 09:10 kevin10355325345

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

config.yaml: this is my config. For env, just set the openai key and you're done.

About the openai key: in principle you're not using OpenAI here, so is the openai key value actually your Alibaba Cloud DASHSCOPE_API_KEY? On my side the Alibaba Cloud config doesn't work; DeepSeek plus an Ollama embedding model deployed locally in Docker does run. I'd still like to learn the Alibaba Cloud setup, since for long-term production we'll probably use Alibaba Cloud's Bailian.

As long as you follow the OpenAI format, just rename your DASHSCOPE_API_KEY to the openai key and use the Bailian endpoint as the base_url.
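As a concrete sketch of that advice (the key value below is a placeholder, not a real key), the .env fragment would look like:

```shell
# .env -- hedged sketch: LiteLLM's openai/... model prefix reads the key
# from OPENAI_API_KEY, so the DashScope key goes under that variable name.
OPENAI_API_KEY=sk-your-dashscope-key-here  # placeholder value
```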

COCO-hy avatar Oct 28 '25 09:10 COCO-hy

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

config.yaml: this is my config. For env, just set the openai key and you're done.

About the openai key: in principle you're not using OpenAI here, so is the openai key value actually your Alibaba Cloud DASHSCOPE_API_KEY? On my side the Alibaba Cloud config doesn't work; DeepSeek plus an Ollama embedding model deployed locally in Docker does run. I'd still like to learn the Alibaba Cloud setup, since for long-term production we'll probably use Alibaba Cloud's Bailian.

As long as you follow the OpenAI format, just rename your DASHSCOPE_API_KEY to the openai key and use the Bailian endpoint as the base_url.

Thank you very much!

kevin10355325345 avatar Oct 28 '25 09:10 kevin10355325345

@rainymoodone Solved. Just swap config.yaml for the default one and make changes on top of it.

Could you share your config and env files for reference?

config.yaml: this is my config. For env, just set the openai key and you're done.

About the openai key: in principle you're not using OpenAI here, so is the openai key value actually your Alibaba Cloud DASHSCOPE_API_KEY? On my side the Alibaba Cloud config doesn't work; DeepSeek plus an Ollama embedding model deployed locally in Docker does run. I'd still like to learn the Alibaba Cloud setup, since for long-term production we'll probably use Alibaba Cloud's Bailian.

As long as you follow the OpenAI format, just rename your DASHSCOPE_API_KEY to the openai key and use the Bailian endpoint as the base_url.

Following your approach, I successfully connected qwen, but a new error appeared: BadRequestError: OpenAIException - Error code: 400 {'error': {'code': 'InvalidParameter', 'message': 'Value error, batch size is invalid, it should not be larger than 10.: input.contents'}}

This means the batch size passed in exceeds the allowed maximum of 10: qwen's embedding endpoint accepts at most 10 inputs per request.

I changed several settings (including column_indexing_batch_size) and none of them took effect.
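For anyone debugging this: the error means the inputs must be chunked client-side into groups of at most 10 before calling the embedding endpoint. A minimal sketch of that chunking logic (not WrenAI's actual code; `embed_batch` is a hypothetical stand-in for the real API call):

```python
# Sketch: DashScope's embedding endpoint rejects batches larger than 10
# inputs, so split the input list into chunks of <= 10 before each call.

def chunked(items, size=10):
    """Yield successive sub-lists of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_all(texts, embed_batch, max_batch=10):
    """Embed `texts` by calling embed_batch() once per chunk of inputs."""
    vectors = []
    for batch in chunked(texts, max_batch):
        vectors.extend(embed_batch(batch))
    return vectors

# Demo with a fake embedding call standing in for the real endpoint:
texts = [f"doc {i}" for i in range(25)]
batch_sizes = []

def fake_embed(batch):
    batch_sizes.append(len(batch))
    return [[0.0] * 4 for _ in batch]  # pretend 4-dimensional vectors

vectors = embed_all(texts, fake_embed)
print(batch_sizes)   # [10, 10, 5]
print(len(vectors))  # 25
```

If changing `column_indexing_batch_size` has no effect, the oversized request is likely coming from a code path that does not honor that setting, so the fix probably belongs in the service's embedding call itself.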

null-ed avatar Nov 10 '25 03:11 null-ed