vit-pytorch
Fine-tuning without loading position embedding from pre-trained model
Hi!
When fine-tuning on my own dataset, can I load only the encoder-block parameters from the pre-trained ViT-B, without the position embedding?
Thanks!
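For anyone reading along: a minimal sketch of one way to do this with plain PyTorch, by filtering the pretrained `state_dict` down to the transformer-block keys and loading with `strict=False`. The `TinyViT` class and its attribute names (`pos_embedding`, `transformer`, `head`) are illustrative stand-ins here, not the exact module names in vit-pytorch or a real ViT-B checkpoint; adjust the key prefix to match your model's `state_dict` keys.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a ViT-style model; attribute names are
# hypothetical, not vit-pytorch's exact ones.
class TinyViT(nn.Module):
    def __init__(self, dim=16, num_patches=4, num_classes=10):
        super().__init__()
        self.pos_embedding = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

pretrained = TinyViT()            # stands in for the pretrained ViT-B
model = TinyViT(num_classes=5)    # new head for the target dataset

# Keep only the encoder-block weights; drop pos_embedding and the head.
state = {k: v for k, v in pretrained.state_dict().items()
         if k.startswith("transformer.")}

# strict=False leaves every unmatched parameter (pos_embedding, head.*)
# at its fresh initialization and reports it in missing_keys.
missing, unexpected = model.load_state_dict(state, strict=False)
```

With a real checkpoint you would replace `pretrained.state_dict()` with `torch.load(path)` (possibly unwrapping a `"state_dict"` or `"model"` key first) and pick the prefix that matches the encoder blocks in that file. Note that if your fine-tuning resolution matches pre-training, loading (or interpolating) the position embedding usually helps; skipping it mainly makes sense when the patch grid changes.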