Reasoning model configuration issue: unstable behavior due to missing config options.
Could you clarify please?
Context: I'm building a RAG system for laws.
Issue: the query tool returns different chunks/laws each time because the reasoning model is inconsistent (different keywords -> different chunks).
Example:
Run 1: user question: abc; reasoning -> keywords for BM25 search: a b c
Run 2: user question (the same): abc; reasoning -> (different) keywords: ab abc
I'm still not completely understanding, you're saying that the same user input results in different queries from the model and therefore different results?
What does this have to do with the reasoning model? I'd say it's less about reasoning and more about the LLM's 'decision' as to how to search the data.
If you frame your input more specifically, like:
"Use BM25 search with the keyword abc on my data"
then it should be more consistent?
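To illustrate why pinning the keyword helps: BM25 itself is a pure function of the query terms and the corpus, so once the terms are fixed the ranking is fully deterministic; the only source of variance is the LLM choosing the terms. A minimal sketch of the standard Okapi BM25 scoring over a toy corpus (the corpus and queries here are made up for illustration):

```python
import math
from collections import Counter

def bm25_scores(query_terms, corpus, k1=1.5, b=0.75):
    """Score each document in `corpus` (lists of tokens) against `query_terms`
    with Okapi BM25. Pure function: the same query always gives the same ranking."""
    n = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n
    # Document frequency of each term across the corpus.
    df = Counter()
    for doc in corpus:
        for term in set(doc):
            df[term] += 1
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        dl = len(doc)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * dl / avgdl))
        scores.append(score)
    return scores

# Toy "law" snippets, purely illustrative.
corpus = [
    "tax law applies to income".split(),
    "contract law governs agreements".split(),
    "income tax rates are progressive".split(),
]
# A fixed keyword set produces identical scores on every run.
print(bm25_scores(["income", "tax"], corpus))
```

So if the same keywords reached the search step each time, the results would be stable; the instability is entirely upstream of BM25.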
Ah yes, the LLM's decision about which 'keyword' to use is the problem; each time it decides to search with a different keyword, so the results are unstable. I can't frame an exact keyword in my input because of the variety of laws relevant to the user's question. Thanks for your time!
Hmm yeah, well this is just a byproduct of how agentic search will always work, unfortunately. Since an LLM is non-deterministic, there's always going to be some randomness. Otherwise, you may as well be implementing a regular BM25 search without an agentic step beforehand. This is completely fine of course, because agentic RAG is not suitable for 100% of use cases.
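That said, two mitigations often reduce (without fully eliminating) this variance, sketched below under assumptions: cache the generated keywords per normalized question so repeated identical questions reuse the same search terms, and canonicalize the LLM's output so near-identical generations converge to the same term set. Running the keyword-generation call at temperature 0 helps for the same reason. Here `call_llm` is a hypothetical stand-in for whatever model call actually produces the keywords, not a real Elysia API:

```python
from functools import lru_cache

def call_llm(question: str) -> str:
    # Hypothetical stand-in for the real (non-deterministic) model call
    # that generates BM25 keywords from a user question.
    return "c b a"

def normalize(question: str) -> str:
    # Collapse case and whitespace variation so identical questions
    # share one cache key.
    return " ".join(question.lower().split())

@lru_cache(maxsize=1024)
def keywords_for(question_key: str) -> tuple:
    # The model runs at most once per distinct normalized question;
    # asking the same thing again reuses the cached keywords.
    raw = call_llm(question_key)
    # Canonicalize the output (lowercase, dedupe, sort) so slightly
    # different generations map to the same search terms.
    return tuple(sorted(set(raw.lower().split())))
```

This doesn't make the model deterministic on genuinely new questions, but it does guarantee that the same question asked twice hits the index with the same keywords.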
Is your issue something that you think can be fixed with a different implementation? Or do you think this is something related to flaws with using LLMs?
In the latter case, I think there's little we can do within Elysia to fix this, unfortunately.