
Questions about your code compared with the original code

Open · elk-april opened this issue 3 years ago · 0 comments

Hi, I noticed a difference between your code and the original. Your code:

x = self.positional_embedding(x)  # b,gh*gw+1,d 
x = self.transformer(x)  # b,gh*gw+1,d
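
For context, if I read the repo correctly, self.positional_embedding is a small module roughly along these lines (a sketch from memory, not a verbatim copy; names and shapes are illustrative):

import torch
import torch.nn as nn

class PositionalEmbedding1D(nn.Module):
    # Learned positional table covering the full sequence,
    # i.e. all gh*gw patch tokens plus the classification token.
    def __init__(self, seq_len, dim):
        super().__init__()
        self.pos_embedding = nn.Parameter(torch.zeros(1, seq_len, dim))

    def forward(self, x):
        # x: (b, seq_len, dim); the table broadcasts over the batch.
        return x + self.pos_embedding

# Example usage (197 = 14*14 patches + 1 class token, 768-dim embeddings):
pe = PositionalEmbedding1D(197, 768)
out = pe(torch.randn(2, 197, 768))  # shape: (2, 197, 768)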

The original Vision Transformer (from https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/vit.py):

x += self.pos_embedding[:, :(n + 1)]
x = self.dropout(x)
x = self.transformer(x)

Actually, there are two differences (both sketched below):

  1. you don't use dropout after the positional embedding
  2. the original positional embedding is not applied to the classification token
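
To make the comparison concrete, here is a minimal runnable sketch of the two styles; the shapes and the dropout probability are assumptions for illustration, not values taken from either repo:

import torch
import torch.nn as nn

b, n, d = 2, 196, 768                # batch, patches (gh*gw), embed dim
x = torch.randn(b, n + 1, d)         # class token already prepended

# Style 1 (this repo): add a learned table sized for the whole
# sequence, with no dropout before the transformer.
pos_table = nn.Parameter(torch.zeros(1, n + 1, d))
x1 = x + pos_table                   # goes straight to self.transformer

# Style 2 (lucidrains/vit-pytorch): slice the table to the current
# sequence length, then apply dropout before the transformer.
pos_table2 = nn.Parameter(torch.randn(1, n + 1, d))
dropout = nn.Dropout(p=0.1)          # p=0.1 is an assumed value
x2 = dropout(x + pos_table2[:, :(n + 1)])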

Could you please tell me the reasons for these changes? Looking forward to your reply, thanks very much.

elk-april · Apr 30 '21, 02:04