[Bug] Model replies to greetings but fails querying Weaviate collections (logs include both gpt‑oss:20b and mistral:7b runs)
Elysia version
0.2.7
Installation method
pip install elysia-ai (via requirements)
Is the issue occurring in the Elysia package or the Elysia web app?
Elysia web app (using the app via elysia start)
What happened?
Description
The model responds correctly to simple prompts like "hi", but fails when trying to query a Weaviate collection. Two logs are attached, covering sessions with different models: mistral:7b and gpt-oss:20b.
Environment
- All components run locally: Elysia, LiteLLM, Ollama, Weaviate
- Python 3.12
- Weaviate endpoints: http://weaviate:8080, gRPC port 50051
- Models tested: mistral:7b, gpt-oss:20b
Environment variables:
MODEL_API_BASE=http://ollama:11434
BASE_PROVIDER=ollama
COMPLEX_PROVIDER=ollama
BASE_MODEL=mistral:7b
COMPLEX_MODEL=mistral:7b
WEAVIATE_URL=http://weaviate:8080
WEAVIATE_IS_LOCAL=true
LOCAL_WEAVIATE_PORT=8080
LOCAL_WEAVIATE_GRPC_PORT=50051
WCD_URL=
WCD_API_KEY=
Errors / Symptoms
- litellm.APIConnectionError → JSONDecodeError: Expecting value: line 1 column 1 (char 0)
- Exception: Model picked an action query_postprocessing that is not in the available tools: ['text_response', 'aggregate', 'query']
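For context, the first error is the message Python's own `json` module raises when asked to parse an empty string, which suggests the local endpoint returned an empty or non-JSON body that litellm then failed to decode. A minimal sketch reproducing the exact message (this simulates the symptom only; it is not Elysia's or litellm's actual code path):

```python
import json

# litellm parses the provider's HTTP response body as JSON; an empty or
# non-JSON body from the local Ollama endpoint surfaces as exactly this
# JSONDecodeError message.
try:
    json.loads("")  # simulate an empty response body
except json.JSONDecodeError as err:
    message = str(err)

print(message)  # Expecting value: line 1 column 1 (char 0)
```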
Expected Behavior
Queries to Weaviate (“Question” collection) should return valid results, allowing Elysia to generate concise summaries.
Attachments
elysia_gpt-oss_20b_error.txt
elysia_mistral_7b_error.txt
Steps to reproduce
No response
Additional context
No response
Hey @gdapian thanks for the detailed report.
This looks like a local-model reliability issue, unfortunately; if you repeated this query, say, 5 times, it likely would not error on every single run.
The root cause is the complexity of the Elysia agents, especially the Query tool, which produces multiple structured outputs in a single LLM call; smaller local models often fail to generate these reliably.
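To illustrate the second error in the report: the decision agent asks the model to pick a tool and then validates the pick against the tools actually available, so a small model hallucinating a plausible-but-nonexistent tool name fails that check. This is a simplified sketch of that kind of validation, not Elysia's actual code; `AVAILABLE_TOOLS` and `pick_action` are illustrative names.

```python
import json

# The tool names from the error message in the report.
AVAILABLE_TOOLS = ["text_response", "aggregate", "query"]

def pick_action(raw_llm_output: str) -> str:
    """Parse the model's structured decision and validate it against
    the tools that are actually available (a simplified sketch of the
    kind of check the decision agent performs)."""
    action = json.loads(raw_llm_output)["action"]
    if action not in AVAILABLE_TOOLS:
        raise ValueError(
            f"Model picked an action {action} that is not in the "
            f"available tools: {AVAILABLE_TOOLS}"
        )
    return action

pick_action('{"action": "query"}')  # valid pick, returns "query"
# A small local model may instead emit a hallucinated tool name such as
# "query_postprocessing", which raises the ValueError seen in the logs.
```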
That said, this is a known issue: simplifying model outputs and reducing the context window size for local models is planned for a future version of Elysia. Stay tuned!
In the meantime, Elysia was built with larger API-based LLMs in mind; if you have the capacity to try one, let me know whether that resolves the issue.
Thanks @dannyjameswilliams for your reply. I can confirm that everything works with the Gemini 2.5 Flash API.