Repositories tagged with the "hallucinations" topic
awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Updated daily.
T-RAGS
Trustworthy Retrieval Augmented Generation (RAG) with Safeguards
UHGEval
[ACL 2024] Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
awesome-hallucination-detection
List of papers on hallucination detection in LLMs.
llama2_aided_tesseract
Enhance Tesseract OCR output for scanned PDFs by applying Large Language Model (LLM) corrections, complete with options for text validation and hallucination filtering.
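The hallucination-filtering step this kind of OCR-correction pipeline describes can be sketched with a simple similarity gate: accept an LLM-proposed correction only when it stays close to the raw OCR text, and fall back to the original otherwise. This is a minimal sketch of the general idea, not the repo's actual implementation; the function name and the 0.6 threshold are assumptions.

```python
from difflib import SequenceMatcher

def filter_correction(ocr_text: str, corrected: str, threshold: float = 0.6) -> str:
    """Reject LLM corrections that drift too far from the OCR source.

    A correction that rewrites most of the text is more likely a
    hallucination than a fix, so we keep the original in that case.
    (Threshold is an illustrative assumption, not taken from the repo.)
    """
    ratio = SequenceMatcher(None, ocr_text, corrected).ratio()
    return corrected if ratio >= threshold else ocr_text

# A small typo fix passes the gate; a wholesale rewrite is rejected.
print(filter_correction("Teh cat sat on the mat", "The cat sat on the mat"))
print(filter_correction("Invoice total: $42.00",
                        "The patient shows symptoms of dizziness"))
```

Character-level `SequenceMatcher` is crude; a production pipeline might compare at the word level or use an embedding distance instead.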
Cognitive-Mirage-Hallucinations-in-LLMs
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
DCR-consistency
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
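SAC3 itself cross-checks answers across semantically equivalent question perturbations and across models; as a rough intuition for consistency-based detection, here is a much simpler self-consistency sketch: sample several answers, measure their pairwise agreement, and flag low agreement as a likely hallucination. All names and the 0.5 threshold are illustrative assumptions, not SAC3's method.

```python
from itertools import combinations

def token_jaccard(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two answers."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise agreement across sampled answers."""
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    return sum(token_jaccard(a, b) for a, b in pairs) / len(pairs)

def flag_hallucination(answers: list[str], threshold: float = 0.5) -> bool:
    """Low agreement between samples suggests the model is guessing."""
    return consistency_score(answers) < threshold

# Consistent samples pass; mutually contradictory ones are flagged.
print(flag_hallucination(["paris is the capital",
                          "paris is the capital of france",
                          "the capital is paris"]))   # consistent
print(flag_hallucination(["paris", "london", "berlin"]))  # inconsistent
```

Token overlap is a weak proxy for semantic agreement; the black-box setting SAC3 targets is exactly why it adds question perturbation and cross-model checks on top of plain self-consistency.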
Hallucination-Attack
An attack that induces hallucinations in LLMs
dtt-multi-branch
Code for "Controlling Hallucinations at Word Level in Data-to-Text Generation" (C. Rebuffel, M. Roberti, L. Soulier, G. Scoutheeten, R. Cancelliere, P. Gallinari)