mPLUG-HalOwl
mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation
Great work! Will you release the hallucination caption dataset used in HACL? Thanks.
Can you release the trained LLaVA-1.5-HACL model?
We have recently been exploring methods to reduce the modality gap in large multimodal models, but have been struggling to determine the most effective way to validate their efficacy...
Your work is really fascinating! Could you please provide the specific version of LLaVA (not 1.5) used in the main experiments? Thank you!
In training stage 2 of HACL, what does the 'sft_from_pretrain.json' training data look like? It seems it hasn't been released. Can you release this file or provide an example? `--contrastive_data_path ./data/LLaVA-Pretrain/sft_from_pretrain.json \`
Hello, that's great work! May I ask if you can open-source the HACL-trained LLaVA-1.5 model mentioned in your paper? It would be very helpful for my research.