
17 HieCoAttenVQA issues

I get this error with split=2, while split=1 works fine. The commands are: python vqa_preprocess.py --download 1 --split 2 and then python prepro_vqa.py --input_train_json ../data/vqa_raw_train.json --input_test_json ../data/vqa_raw_test.json --num_ans 1000. The...
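For readability, here are the same two commands on separate lines, a sketch with the flags and paths exactly as quoted above (the working directory is not stated in the report; the relative ../data paths suggest a subdirectory of the repo, but that is an inference):

```bash
# Download/convert the raw VQA json for the chosen split (split 2 in the report)
python vqa_preprocess.py --download 1 --split 2

# Preprocess questions/answers, keeping the top 1000 answers (per --num_ans 1000)
python prepro_vqa.py --input_train_json ../data/vqa_raw_train.json \
                     --input_test_json ../data/vqa_raw_test.json \
                     --num_ans 1000
```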

How to preprocess the open-ended question and annotation files?

This is maybe a trivial question, but I'm completely new to Torch; I tried searching on Google but had no luck. I'm working on an Ubuntu 14.04 machine with CUDA 7.0...

Regarding https://github.com/jiasenlu/HieCoAttenVQA/blob/master/eval.lua, is it possible to use it for VQA evaluation? Thank you.

When I train this model (split=1) on a GPU (M40), training is so slow that I can hardly wait for the result. The training speed is about 3789 sec per 600 iterations (roughly 6.3 sec per iteration), and the batch size...

Just came across your paper and found that the formulation of co-attention is quite similar to transformers: ![image](https://user-images.githubusercontent.com/1575461/189848213-203d8f4c-2664-4c59-966c-86433376bc3f.png) Especially a few (but not all) of the major ingredients, i.e., the Q, V projections,...
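For context, a hedged side-by-side of the two formulations: the co-attention equations below are a sketch from memory of the parallel co-attention in the paper, using its symbols $W_b, W_v, W_q, w_{hv}, w_{hq}$, next to the standard transformer scaled dot-product attention; please check against the paper before relying on them.

```latex
% Parallel co-attention (HieCoAttenVQA paper, from memory):
% Q \in \mathbb{R}^{d \times T} are question features, V \in \mathbb{R}^{d \times N} are image features.
C   = \tanh\left( Q^{\top} W_b V \right)                      % affinity matrix, T x N
H^v = \tanh\left( W_v V + (W_q Q)\, C \right), \qquad
H^q = \tanh\left( W_q Q + (W_v V)\, C^{\top} \right)
a^v = \mathrm{softmax}\left( w_{hv}^{\top} H^v \right), \qquad
a^q = \mathrm{softmax}\left( w_{hq}^{\top} H^q \right)        % attention over image / question locations

% Standard transformer scaled dot-product attention, for comparison:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V
```

Both form a bilinear similarity between two sets of features and normalize it with a softmax to get attention weights; the visible differences are the tanh affinity with learned scoring vectors instead of the $1/\sqrt{d_k}$ scaling, and the absence of multi-head structure.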