visDial.pytorch

errors during evaluation

Open sizhangyu opened this issue 6 years ago • 5 comments

I got errors for all three evaluation scripts. I fixed some small issues myself, such as undefined variables and indentation errors, but I am still stuck on the errors below.

##################################################################
For eval_D.py:

$ python eval/eval_D.py --cuda --model_path ./save/HCIAE-D-MLE.pth --data_dir ./
Random Seed: 4090
=> loading checkpoint './save/HCIAE-D-MLE.pth'
./data/vdl_img_vgg.h5 ./data/visdial_data.h5 ./data/visdial_params.json
DataLoader loading: test
Loading image feature from ./data/vdl_img_vgg.h5
test number of data: 40504
Loading txt from ./data/visdial_data.h5
Vocab Size: 8964
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
Loading model Success!
Traceback (most recent call last):
  File "eval/eval_D.py", line 258, in <module>
    atten = eval()
  File "eval/eval_D.py", line 164, in eval
    ques_hidden = repackage_hidden(ques_hidden, batch_size)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in repackage_hidden
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in <genexpr>
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in repackage_hidden
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in <genexpr>
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in repackage_hidden
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in <genexpr>
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in repackage_hidden
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in <genexpr>
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/home/heming/Documents/2018summer/source_codes/visDial.pytorch/misc/utils.py", line 22, in repackage_hidden
    return tuple(repackage_hidden(v, batch_size) for v in h)
  File "/usr/local/lib/python2.7/dist-packages/torch/tensor.py", line 360, in __iter__
    raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor
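This failure looks like a PyTorch version mismatch: on PyTorch 0.4+, Variable and Tensor were merged, so a type check written against the old Variable class no longer matches and repackage_hidden keeps recursing into the tensor itself until it reaches a 0-d element. Below is a minimal sketch of a version-tolerant helper; it assumes the function only needs to detach the hidden state, while the real repackage_hidden in misc/utils.py may also resize it to batch_size, so adapt rather than copy.

```python
import torch

def repackage_hidden(h, batch_size):
    """Detach hidden states from their history (sketch, not the repo's exact logic)."""
    # batch_size is kept only to preserve the original call signature.
    # Stop the recursion as soon as we reach a tensor: on PyTorch >= 0.4 a
    # Variable *is* a Tensor, so iterating over it would descend element by
    # element until a 0-d tensor raises the TypeError seen above.
    if isinstance(h, torch.Tensor):
        return h.detach()
    # For LSTMs the hidden state is a tuple (h, c); recurse over it.
    return tuple(repackage_hidden(v, batch_size) for v in h)
```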

################################################################
Another run for eval_D.py:

8100/8101: mrr: 0.619977 R1: 0.478346 R5 0.791012 R10 0.880000 Mean 4.749852
Traceback (most recent call last):
  File "eval/eval_D.py", line 258, in <module>
    R1 = np.sum(np.array(rank_all)==1) / float(len(rank_all))
NameError: name 'rank_all' is not defined
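This second error looks like a plain scoping bug rather than a version issue: rank_all is apparently accumulated inside eval() but referenced at module level when the summary statistics are computed. One way to wire it through is sketched below; the function and variable names are assumptions, not verified against the repo.

```python
import numpy as np

def eval():
    rank_all = []
    # ... evaluation loop: append the rank of the ground-truth answer
    # for every question to rank_all ...
    return rank_all

# Make the accumulated ranks visible to the summary code at module level.
rank_all = eval()
if rank_all:
    R1 = np.sum(np.array(rank_all) == 1) / float(len(rank_all))
    print('R1: %f' % R1)
```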

#################################################################
For eval_G.py & eval_G_DIS.py, I get the same error:

$ python eval/eval_G_DIS.py --cuda --model_path ./save/HCIAE-G-DIS.pth --data_dir ./
Random Seed: 2276
=> loading checkpoint './save/HCIAE-G-DIS.pth'
DataLoader loading: test
Loading image feature from ./data/vdl_img_vgg.h5
test number of data: 40504
Loading txt from ./data/visdial_data.h5
Vocab Size: 8964
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
Traceback (most recent call last):
  File "eval/eval_G_DIS.py", line 98, in <module>
    netG = _netG(opt.model, n_words, opt.ninp, opt.nhid, opt.nlayers, opt.dropout)
TypeError: __init__() takes exactly 8 arguments (7 given)
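The TypeError here just means _netG's constructor expects one more argument (8 counting self) than the seven that eval_G_DIS.py passes. A quick, hedged way to find the missing one is to print the constructor's parameter list and compare it with how the training script builds the model; the misc.model import path below is an assumption, so adjust it to wherever _netG is actually defined.

```python
# Print _netG's constructor parameters so they can be matched, in order,
# against the call in eval_G_DIS.py line 98 (import path assumed).
import inspect
from misc.model import _netG

print(inspect.getargspec(_netG.__init__).args)
```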

sizhangyu avatar May 25 '18 00:05 sizhangyu

Did you solve the problem? I also ran into it when trying to run eval_D.py. Thanks.

zhegan27 avatar Jun 27 '18 22:06 zhegan27

Which problem do you mean? By the way, there are many more bugs than the ones I reported here. Since the author did not reply to me, I fixed all the bugs myself and got it running.

sizhangyu avatar Jun 27 '18 23:06 sizhangyu

Thanks for your response. I tried downgrading PyTorch to version 0.1.12, and the "TypeError: iteration over a 0-d tensor" problem went away, but other parts of the code still have bugs. Which PyTorch version are you using? Would you mind sending me your bug-free code? :) I am new to PyTorch. Thanks in advance.

zhegan27 avatar Jun 28 '18 00:06 zhegan27

@sizhangyu looking forward to your reply. We can also communicate offline. My email is [email protected]. Thank you!

zhegan27 avatar Jun 28 '18 19:06 zhegan27

> Which problem do you mean? By the way, there are many more bugs than the ones I reported here. Since the author did not reply to me, I fixed all the bugs myself and got it running.

When I run python3 eval/eval_G.py, I get an error like this: NameError: name 'mixture_of_softmaxes' is not defined. Could you please tell me how to solve this problem? Thank you very much! :) Could you share your email address, if you don't mind?

jiangshiling avatar Oct 15 '18 09:10 jiangshiling