Hallucination topic
NeurHal
Visual Correspondence Hallucination: Towards Geometric Reasoning (Under Review)
LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
HallusionBench
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
FactCHD
[IJCAI'24] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
FaceAttr
[CVPR'18] Face Super-Resolution with Supplementary Attributes
awesome-Large-MultiModal-Hallucination
😎 An up-to-date, curated list of awesome LMM hallucination papers, methods & resources.
Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of resources on uncertainty, reliability, and robustness in Large Language Models
TruthX
Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
ICD
Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
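
The ICD entry above names an induce-then-contrast idea: a deliberately hallucination-prone copy of the model is used to penalize the tokens it prefers during decoding. Below is a minimal sketch of that contrast step only, assuming PyTorch and toy next-token logits; the function name, the `alpha` weight, and the example tensors are illustrative stand-ins, not the repository's actual API.

```python
# Illustrative sketch of contrastive decoding against an "induced hallucination"
# model: boost the base model's scores and subtract the scores of the
# hallucination-prone model. `alpha` (hypothetical) controls the penalty strength.
import torch

def contrastive_logits(base_logits: torch.Tensor,
                       hallucinating_logits: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Amplify the base model and penalize tokens the induced model favors."""
    return (1.0 + alpha) * base_logits - alpha * hallucinating_logits

# Toy next-token log-probabilities over a 5-token vocabulary.
base = torch.log_softmax(torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0]), dim=-1)
hallu = torch.log_softmax(torch.tensor([0.5, 2.5, 0.5, 0.1, -1.0]), dim=-1)

adjusted = contrastive_logits(base, hallu, alpha=1.0)
print(adjusted.argmax().item())  # -> 0: token 1, preferred by the induced model, is demoted
```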