a-PyTorch-Tutorial-to-Image-Captioning

seqs = torch.cuda.FloatTensor(torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1))# (s, step+1) IndexError: tensors used as indices must be long, byte or bool tensors

Open · KhaaQ opened this issue 5 years ago · 2 comments

While running eval.py, this error occurs. Can anyone help me change the tensor type? I have tried different tensor types, but the same error still occurs.

EVALUATING AT BEAM SIZE 1: 0%| | 0/25000 [00:00<?, ?it/s]
k previous words tensor([[9488]], device='cuda:0')
seq tensor([[9488]], device='cuda:0')
EVALUATING AT BEAM SIZE 1: 0%| | 0/25000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/Documents/Pytorch_IC_srgv_Nov5/a-PyTorch-Tutorial-to-Image-Captioning-master/eval.py", line 182, in <module>
    print("\nBLEU-4 score @ beam size of %d is %.4f." % (beam_size, evaluate(beam_size)))
  File "/home/Documents/Pytorch_IC_srgv_Nov5/a-PyTorch-Tutorial-to-Image-Captioning-master/eval.py", line 130, in evaluate

    seqs = torch.cuda.FloatTensor(torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1))  # (s, step+1)
IndexError: tensors used as indices must be long, byte or bool tensors
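For context, here is a minimal sketch of why the indexing fails (the vocabulary size and tensor values below are made up for illustration): in newer PyTorch versions, dividing an integer tensor with / performs true division and returns a float tensor, which can no longer be used as an index.

```python
import torch

vocab_size = 9490                           # hypothetical vocabulary size
seqs = torch.tensor([[9488]])               # (k, 1) partial sequences
top_k_words = torch.tensor([3, 7, 12])      # flat indices over k * vocab_size

prev_word_inds = top_k_words / vocab_size   # float tensor in newer PyTorch
next_word_inds = top_k_words % vocab_size   # still an integer tensor

try:
    seqs[prev_word_inds]                    # float indices are rejected
except IndexError as e:
    print(e)  # tensors used as indices must be long, byte or bool tensors

# Casting the index tensor to long restores the expected behaviour
seqs = torch.cat([seqs[prev_word_inds.long()], next_word_inds.unsqueeze(1)], dim=1)  # (s, step+1)
print(seqs.shape)                           # torch.Size([3, 2])
```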

KhaaQ · Dec 09 '20

Try adding .long() to a couple of tensors to convert them:

seqs = torch.cat([seqs[prev_word_inds.long()], next_word_inds.unsqueeze(1)], dim=1)  # (s, step+1)

This line, and a few others below it, need the same change; see the sketch below.
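For reference, here is a hedged sketch of how the cast fits into the beam-search loop of eval.py. The variable names follow the tutorial, but the dummy tensors, the hidden size, and the incomplete_inds values are made up for illustration, and the exact surrounding lines may differ in your copy of the code. Converting prev_word_inds and next_word_inds to long once, right after they are computed, keeps the later indexing lines unchanged:

```python
import torch

# Dummy stand-ins for the real decoder state (illustrative values only)
k, vocab_size, hidden_dim = 3, 9490, 8
top_k_words = torch.tensor([2, 9493, 18981])    # flat indices over k * vocab_size
seqs = torch.tensor([[11], [42], [7]])          # (k, step) partial sequences
h = torch.zeros(k, hidden_dim)                  # decoder hidden state
c = torch.zeros(k, hidden_dim)                  # decoder cell state

prev_word_inds = (top_k_words / vocab_size).long()  # which sequence each word extends: [0, 1, 2]
next_word_inds = (top_k_words % vocab_size).long()  # which word id to append

# The later indexing expressions then work without further changes:
seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1)  # (s, step+1)

# The same cast keeps the state-reordering lines working, e.g. when pruning
# completed beams (incomplete_inds is illustrative here):
incomplete_inds = [0, 2]
h = h[prev_word_inds[incomplete_inds]]
c = c[prev_word_inds[incomplete_inds]]
print(seqs.shape, h.shape, c.shape)  # torch.Size([3, 2]) torch.Size([2, 8]) torch.Size([2, 8])
```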

khaxis · Dec 11 '20

I had the same problem. I solved it after seeing your suggestion. Thank you.

BeBeYourLove · Mar 18 '22