pytorch-transformers-classification
Where is the positional embedding in the BERT model inputs?
First, thanks for sharing the code, it's really helpful!
I have a question about using the pretrained BERT on my dataset for sentence classification. As I understand it, BERT's input representation should consist of the token embedding, segment embedding, and position embedding summed together. But I don't see the positional embedding anywhere in your code. In run_model:
```python
inputs = {'input_ids':      batch[0],
          'attention_mask': batch[1],
          'token_type_ids': batch[2] if args['model_type'] in ['bert', 'xlnet'] else None,  # XLM doesn't use segment_ids
          'labels':         batch[3]}
outputs = model(**inputs)
```
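For context on what I'd expect to happen: if I understand the library correctly, the model's embedding layer should be able to default the position ids to 0, 1, 2, ... when they aren't passed explicitly. A minimal NumPy sketch of that behavior (hypothetical table names and sizes, not the library's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and embedding tables standing in for the model's
# three lookup tables (token, segment, position).
vocab_size, num_segments, max_len, hidden = 100, 2, 16, 8
token_emb = rng.normal(size=(vocab_size, hidden))
segment_emb = rng.normal(size=(num_segments, hidden))
position_emb = rng.normal(size=(max_len, hidden))

def embed(input_ids, token_type_ids, position_ids=None):
    """Sum the three embeddings for one sequence.

    When position_ids is omitted, default it to 0..seq_len-1,
    which is what I assume the model does internally.
    """
    input_ids = np.asarray(input_ids)
    token_type_ids = np.asarray(token_type_ids)
    if position_ids is None:
        position_ids = np.arange(len(input_ids))
    return (token_emb[input_ids]
            + segment_emb[token_type_ids]
            + position_emb[position_ids])

# One 3-token sequence: output is (seq_len, hidden) = (3, 8).
out = embed([5, 7, 9], [0, 0, 1])
print(out.shape)  # → (3, 8)
```

Under that assumption, omitting the position input would still produce position-aware embeddings, which would explain why it never appears in the inputs dict above.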
Or I might have missed this detail. Could you please tell me whether you implemented this, and if so, where exactly?
Thanks again and looking forward to your reply!