Hello, I tried running the textual entailment code on a different dataset whose format matches the SNLI dataset. The final step, calling `train()`, fails with _IndexError: too many indices for tensor of dimension 1_, and I'm not sure where the problem is. Also, when I run `iter(train_iter).__next__()`, my output differs from yours:

[torchtext.data.batch.Batch of size 256]
[.premise]: [torch.cuda.LongTensor of size 44x256 (GPU 0)]
[.hypothesis]: [torch.cuda.LongTensor of size 25x256 (GPU 0)]
[.label]: [torch.cuda.LongTensor of size 256...
model = AttnClassifier(len(TEXT.vocab), embedding_dim, hidden_dim).to(device)
Could you tell me why this line raises an error? I'm on PyTorch 1.2:

File "E:\软件\python\lib\site-packages\torch\nn\modules\module.py", line 432, in to
    return self._apply(convert)
File "E:\软件\python\lib\site-packages\torch\nn\modules\module.py", line 208, in _apply
    module._apply(fn)
File "E:\软件\python\lib\site-packages\torch\nn\modules\rnn.py", line 124, in _apply
    self.flatten_parameters()
File...
Hi, I have a question about this code in self-attention.ipynb:
out = out[:, :, :self.hidden_dim] + out[:, :, self.hidden_dim:]
Why do you add the two hidden states of the bidirectional...
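For context on the question above: a bidirectional RNN in PyTorch returns an output whose last dimension concatenates the forward and backward hidden states, so slicing at `hidden_dim` and adding the halves merges the two directions while halving the feature size. A minimal sketch (the layer sizes here are made up for illustration, not taken from the notebook):

```python
import torch
import torch.nn as nn

seq_len, batch, input_dim, hidden_dim = 5, 3, 2, 4

# Bidirectional LSTM: output feature dim is 2 * hidden_dim
rnn = nn.LSTM(input_size=input_dim, hidden_size=hidden_dim, bidirectional=True)
x = torch.randn(seq_len, batch, input_dim)
out, _ = rnn(x)  # out: (seq_len, batch, 2 * hidden_dim)

# First half = forward direction, second half = backward direction.
# Summing them (instead of keeping the concatenation) lets the next
# layer work with hidden_dim features rather than 2 * hidden_dim.
summed = out[:, :, :hidden_dim] + out[:, :, hidden_dim:]
print(summed.shape)  # torch.Size([5, 3, 4])
```

Summing is a common alternative to concatenation when you want downstream layers (e.g. the attention scorer) to keep a fixed `hidden_dim` width.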