Multimodal_Classification_Co_Attention
I have a question about model.py.
When I run your code, I get the following error:
Multimodal_Classification_Co_Attention-master/model.py:
encoded_embeddings = torch.stack([a * b for a, b in zip(all_layer_embeddings, self.bert_weights)])
TypeError: only integer tensors of a single element can be converted to an index
But the types of `all_layer_embeddings` and `self.bert_weights` are `<class 'str'>` and `<class 'torch.nn.parameter.Parameter'>`, and their values are `"hidden_states"` and `Parameter containing: tensor([[ 0.5483], [ 0.3121], ...])`. Can you help me solve this problem? Thanks!
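One likely cause (an assumption on my part, since I haven't run the repo): newer versions of `transformers` return a dict-like `ModelOutput` from the BERT forward pass, and iterating over a dict-like object yields its *keys*. So if `all_layer_embeddings` is produced by iterating the output object itself, `zip` pairs the key string `"hidden_states"` with the first weight, which matches the printed values above. A plain-Python sketch of the mechanism (stand-in data, no torch needed):

```python
# Stand-in for a dict-like model output (e.g. transformers' ModelOutput).
outputs = {"hidden_states": [[0.1, 0.2], [0.3, 0.4]]}
weights = [[0.5483], [0.3121]]  # stand-in for self.bert_weights

# Iterating the dict yields its KEYS, not the hidden-state tensors,
# so the first "embedding" is literally the string "hidden_states".
pairs = list(zip(outputs, weights))
first_embedding, first_weight = pairs[0]

# The fix is to index the field explicitly
# (with transformers this would be outputs.hidden_states):
hidden_states = outputs["hidden_states"]
```

If this is the cause, replacing the iteration over the raw output with an explicit access to the `hidden_states` field should restore the intended list of per-layer tensors.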
Sure. I think the collate_fn takes care of this:
for idx, x in enumerate(zip(*res)):
    if isinstance(x[0], list):
        res_.append(torch.LongTensor(x))
    elif isinstance(x[0], str):
        res_.append(torch.LongTensor([int(values) for values in x]))
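For reference, here is a self-contained sketch of what that collate logic does, with `torch.LongTensor` replaced by a plain `int` conversion so it runs without torch installed (the sample data and the `collate` name are illustrative, not from the repo):

```python
def collate(res):
    """res is a list of samples; each sample is a tuple of fields.

    List fields (e.g. token-id sequences) are converted element-wise
    to ints, and string fields (e.g. labels read as text) are cast to
    int, mirroring the isinstance(x[0], str) branch in collate_fn.
    """
    res_ = []
    for idx, x in enumerate(zip(*res)):  # x groups one field across the batch
        if isinstance(x[0], list):
            res_.append([list(map(int, values)) for values in x])
        elif isinstance(x[0], str):
            res_.append([int(values) for values in x])
    return res_

# Two samples: (token ids, label-as-string)
batch = [([101, 2054, 102], "1"), ([101, 2129, 102], "0")]
fields = collate(batch)
```

So as long as the data reaches the model through this collate_fn, the string labels are turned into integer tensors before `model.py` sees them.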
Are you running the code end-to-end?