CSS-VQA
Counterfactual Samples Synthesizing for Robust VQA
I changed the import to `import _pickle as cPickle`. During data preprocessing I then hit an error at `vals = list(map(float, vals[1:]))`, and when running main a new utf-8 encoding problem appeared. After switching to `fe = torch.load('data/rcnn_feature/'+str(img_id)+'.pth', encoding="ISO-8859-1")['image_feature']` and using `nn.Dropout(dropout, inplace=False)` in the classifier, it ran through. My best score is 57.98, about 1 point below the best value in the paper. Is that normal?
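For context on the encoding workaround above: pickles written by Python 2 store raw byte strings, and loading them in Python 3 fails under the default ASCII codec unless an 8-bit encoding such as ISO-8859-1 is supplied. A minimal, torch-free sketch (the byte string below is a hand-crafted Python 2 protocol-2 pickle for illustration, not a file from this repo; `torch.load(..., encoding=...)` forwards the same keyword to its internal unpickler):

```python
import pickle

# Hand-crafted protocol-2 pickle of the Python 2 byte string '\xe9',
# as Python 2's pickler would emit it: SHORT_BINSTRING + BINPUT + STOP.
PY2_PICKLE = b'\x80\x02U\x01\xe9q\x00.'

# The default decoding (ASCII) cannot represent the 0xe9 byte.
try:
    pickle.loads(PY2_PICKLE)
except UnicodeDecodeError as e:
    print('default load failed:', e.reason)

# An 8-bit codec maps every byte value to a character, so the load succeeds.
value = pickle.loads(PY2_PICKLE, encoding='ISO-8859-1')
print(value)  # 'é'
```

This is why passing `encoding="ISO-8859-1"` is a common fix when consuming Python 2 era `.pth`/pickle files from Python 3.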
What hyperparameters produced the model's best result? I can't reproduce similar results. Thank you very much for your work!
When I tried to run `CUDA_VISIBLE_DEVICES=0 python main.py --dataset cpv2 --mode q_v_debias --debias learned_mixin --topq 1 --topv -1 --qvp 5 --output [] --seed 0`, it raised an error. @yanxinzju
Thank you very much for your excellent work. I would like to ask you about the parameter entry["bias"] when processing the dataset. What does it refer to and how did...
I want to know: if I want to test this code on another picture, how can I get its features? Can you share the feature-extraction code?
Hi, nice work and I am interested in it. In train.py, there are a few lines of code I do not understand: ``` m1 = copy.deepcopy(sen_mask) ##[0,0,0...0,1,1,1,1] m1.scatter_(1, w_ind, 0)...
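For context on what that line does: `m1.scatter_(1, w_ind, 0)` writes 0 into `m1` along dimension 1 at the column indices listed in `w_ind`, i.e. it clears the sentence mask at the selected word positions. A pure-Python sketch of the same per-row semantics (torch-free; the mask and indices below are hypothetical):

```python
def scatter_zero_dim1(mask, index):
    """Mimic torch's mask.scatter_(1, index, 0): for every row r and
    every column c listed in index[r], set mask[r][c] = 0 in place."""
    for r, cols in enumerate(index):
        for c in cols:
            mask[r][c] = 0
    return mask

# Hypothetical sentence mask (1 = real token) and per-row word indices.
sen_mask = [[1, 1, 1, 1],
            [1, 1, 1, 0]]
w_ind = [[0, 2],   # zero columns 0 and 2 of row 0
         [1]]      # zero column 1 of row 1
print(scatter_zero_dim1(sen_mask, w_ind))
# [[0, 1, 0, 1], [1, 0, 1, 0]]
```

In the training code this masks out the top-scoring words so the counterfactual sample no longer attends to them.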
Hi, can you share the code that uses spaCy to extract the nouns from the question? Thank you so much.
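Since that snippet is not in the repo, here is a hedged sketch of the usual approach: run the question through a spaCy pipeline and keep tokens whose `pos_` is `NOUN` or `PROPN`. The filtering step is shown below over (text, pos) pairs so it runs without spaCy installed; the actual tagging call is indicated in the comment, and the tagged example is hypothetical:

```python
def extract_nouns(tagged_tokens):
    """Keep tokens tagged as nouns. With spaCy the pairs would come from
    [(t.text, t.pos_) for t in nlp(question)], e.g. with the
    'en_core_web_sm' model loaded via spacy.load()."""
    return [text for text, pos in tagged_tokens if pos in ('NOUN', 'PROPN')]

# Hypothetical tagging of "What color is the man's shirt?"
tagged = [('What', 'PRON'), ('color', 'NOUN'), ('is', 'AUX'),
          ('the', 'DET'), ('man', 'NOUN'), ("'s", 'PART'),
          ('shirt', 'NOUN'), ('?', 'PUNCT')]
print(extract_nouns(tagged))  # ['color', 'man', 'shirt']
```

This is only an illustration of the filtering logic, not the authors' exact code.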