HieCoAttenVQA
I got this error when split=2, while split=1 works very well. The commands are:
python vqa_preprocess.py --download 1 --split 2
python prepro_vqa.py --input_train_json ../data/vqa_raw_train.json --input_test_json ../data/vqa_raw_test.json --num_ans 1000
the...
How to preprocess open-ended question and annotation files
This is maybe a trivial question, but I'm completely new to Torch; I tried searching on Google but had no luck. I'm working on an Ubuntu 14.04 machine with CUDA 7.0...
Regarding https://github.com/jiasenlu/HieCoAttenVQA/blob/master/eval.lua, is it possible to use it for VQA evaluation? Thank you.
When I train this model (split=1) on a GPU (M40), the training speed is so slow that I can hardly wait for the result. The training speed is about 3789 sec/600 iter, and the batch size...
Just came across your paper, and found that the formulation of co-attention is quite similar to transformers'. In particular, a few (but not all) of the major ingredients, i.e., the Q and V projections,...
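For readers weighing this comparison: below is a minimal NumPy sketch of the parallel co-attention described in the paper, where an affinity matrix C = tanh(Qᵀ·Wb·V) couples question and image features, and attention weights over each modality are derived from it. The weight names (Wb, Wq, Wv, whq, whv) and the softmax helper are illustrative stand-ins, not the repository's actual Lua/Torch code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallel_coattention(Q, V, Wb, Wq, Wv, whq, whv):
    """Parallel co-attention sketch (after Lu et al., 2016).

    Q: (d, T) question features; V: (d, N) image region features.
    Returns attended image/question vectors and the attention weights.
    """
    C = np.tanh(Q.T @ Wb @ V)               # (T, N) question-image affinity
    Hv = np.tanh(Wv @ V + (Wq @ Q) @ C)     # (k, N) image attention features
    Hq = np.tanh(Wq @ Q + (Wv @ V) @ C.T)   # (k, T) question attention features
    av = softmax(whv @ Hv)                  # (N,) attention over image regions
    aq = softmax(whq @ Hq)                  # (T,) attention over question words
    return V @ av, Q @ aq, av, aq           # attended vectors + weights

# Toy usage with random weights (dimensions are arbitrary).
d, k, T, N = 8, 6, 5, 10
rng = np.random.default_rng(0)
Q, V = rng.normal(size=(d, T)), rng.normal(size=(d, N))
Wb = rng.normal(size=(d, d))
Wq, Wv = rng.normal(size=(k, d)), rng.normal(size=(k, d))
whq, whv = rng.normal(size=k), rng.normal(size=k)
v_hat, q_hat, av, aq = parallel_coattention(Q, V, Wb, Wq, Wv, whq, whv)
```

The structural parallel to a transformer shows up in C: like QKᵀ in scaled dot-product attention, it is a bilinear score between two projected sequences. Here, though, it is used symmetrically (each modality attends to the other) and passed through tanh, rather than being scaled by √d and softmaxed directly.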