Add an anti-hallucination node in agentic workflow
Privileged issue
- [X] I'm @ramiawar or he asked me directly to create an issue here.
Issue Content
Sometimes the LLM hallucinates data in its response, even when data security is enabled.
This is problematic because:
- It confuses users and makes them think security is compromised when it is not
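One way such an anti-hallucination node could work is to cross-check concrete values in the LLM's response against the retrieved context before the answer reaches the user. Below is a minimal sketch of that idea; the function name `verify_response` and the numeric-only matching are assumptions for illustration, not the project's actual design (a real node would likely also check entities, use entailment models, or route flagged answers back for regeneration):

```python
import re

def verify_response(response: str, context: str) -> list[str]:
    """Hypothetical verification node: return numeric values that appear
    in the response but not in the retrieved context. Such values were
    not grounded in the data and are candidate hallucinations."""
    response_values = set(re.findall(r"\d+(?:\.\d+)?", response))
    context_values = set(re.findall(r"\d+(?:\.\d+)?", context))
    return sorted(response_values - context_values)

context = "Q3 revenue was 1200, up from 950 in Q2."
response = "Revenue grew from 950 to 1200, a 26.3% increase."
print(verify_response(response, context))  # ['26.3'] — derived value, flagged for review
```

Note that legitimately derived values (like the 26.3% growth above) also get flagged, so in practice the node would either allow the agent to justify flagged values or mark them as computed rather than blocking the answer outright.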
Hey! Did you solve this at all?
Here is what we built; it will be open-sourced in early January 2025:
https://provably.ai/blog/introducing-proving-a-technique-to-rapidly-verify-and-trust-ai-answers
You are welcome to try it; I'd love your feedback. Just fill out the form or email me at shyam at provably dot ai.