MultimodalExplanations
Code release for Park et al., "Multimodal Explanations: Justifying Decisions and Pointing to the Evidence," CVPR 2018.
Hi. I've been trying to train my own models for VQA-X, but when I try to generate explanations with models I trained using train.py, my VQA answers and explanations are...
Hi Seth, I am curious about what information the visual directory contains. This would be helpful to know, as I want to train this model on a completely different dataset....
Hi, I cannot seem to find the feature directory or any **softattention** file. Could you please tell us where we can find these? Thanks, Oana
Hi, Could you please explain what the files v2_mscoco_train2014_annotations.json and v2_mscoco_val2014_annotations.json in Annotations/, and the files v2_OpenEnded_mscoco_train2014_questions.json and v2_OpenEnded_mscoco_val2014_questions.json in Questions/, are meant to be? I couldn't find...
Hi, For VQA-X, I was able to generate explanations using the pre-trained model, but I am a bit confused about why it is necessary to pass `--exp_file` with the explanation...