
When running [B_16, B_32, L_32] models, an error occurs because of the position embedding.

Open sdh5349 opened this issue 3 years ago • 1 comment

Solved it by adding the `image_size` argument to the constructor: `ViT(image_size=image_size)`.
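To illustrate why the `image_size` argument matters: a ViT's position embedding has one entry per image patch (plus the class token), and its length is fixed when the model is constructed. The sketch below is plain arithmetic, not the library's code; the default patch size of 16 and default image size of 224 are assumptions based on the standard B_16 configuration.

```python
# Hedged sketch: token count as a function of input resolution.
# The position embedding must have exactly this many entries, so it is
# sized at construction time from image_size, not from the actual input.

def num_tokens(image_size: int, patch_size: int = 16) -> int:
    """Transformer sequence length: patches per side squared, plus one [CLS] token."""
    patches_per_side = image_size // patch_size
    return patches_per_side * patches_per_side + 1

print(num_tokens(224))  # 197 -> length of the default position embedding
print(num_tokens(384))  # 577 -> what a 384x384 input actually produces
```

A model built for 224×224 therefore carries a 197-entry position embedding, which cannot be added to the 577-token sequence a 384×384 input produces; passing `image_size` at construction sizes the embedding correctly.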

sdh5349 avatar May 06 '21 13:05 sdh5349

I also got the same error. Here is the error message:

```
Traceback (most recent call last):
  File "step1_save_feature/save_feature2h5py.py", line 54, in <module>
    original_stored_imgname, original_stored_feat = extract_features_torch(datadir, model, input_img_size=384)
  File "/home/namj/ShapeY/step1_save_feature/your_feature_extraction_code.py", line 42, in extract_features_torch
    output1 = model(img1.cuda())
  File "/home/namj/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/namj/.local/lib/python3.6/site-packages/shapey/utils/modelutils.py", line 10, in forward
    x = self.features(x)
  File "/home/namj/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/namj/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/namj/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/namj/.local/lib/python3.6/site-packages/pytorch_pretrained_vit/model.py", line 24, in forward
    return x + self.pos_embedding
RuntimeError: The size of tensor a (12) must match the size of tensor b (768) at non-singleton dimension 3
```

I had resized the input image to 384 × 384.
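Resizing the input image alone does not fix this, because the model's stored `pos_embedding` keeps the shape it was given at construction. A minimal sketch of the failing step (the function name and shapes here are illustrative, mimicking the `x + self.pos_embedding` line from the traceback, not the library's actual code):

```python
# Hedged sketch: the addition in the forward pass only works when the
# token sequence length matches the stored position-embedding length.

def pos_embedding_add_ok(seq_len: int, pos_embed_len: int) -> bool:
    """Mimics `x + self.pos_embedding`: the shapes must agree to broadcast."""
    return seq_len == pos_embed_len

# Model built with the default image_size (197 tokens), fed a 384x384
# image (577 tokens): the real model raises a RuntimeError here.
print(pos_embedding_add_ok(577, 197))  # False
# Model rebuilt with image_size=384 so the embedding is sized to match:
print(pos_embedding_add_ok(577, 577))  # True
```

In other words, the resolution has to be passed to the constructor (as in the fix above) so the embedding is created at the right length, rather than resizing inputs after the fact.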

njw0709 avatar Nov 04 '21 17:11 njw0709