vit-pytorch
Increase Performance
Hello @lucidrains, I am using ViT for a specific dataset. The image size is 320x320 and the number of classes is 2. I set the parameters below for my dataset and reached 64.5% test accuracy. Do you have any suggestions for the parameters? I get an average of 83% test accuracy with other models.
```python
from linformer import Linformer
from vit_pytorch.efficient import ViT

efficient_transformer = Linformer(
    dim=256,
    seq_len=1024 + 1,  # 32x32 patches + 1 CLS token
    depth=12,
    heads=8,
    k=64
)

model = ViT(
    dim=256,
    image_size=320,
    patch_size=10,
    num_classes=2,
    transformer=efficient_transformer,
    channels=3,
).to(device)
```
Try increasing your dimension to 512.
Also increase k to at least 256.
Thanks for your quick reply.
@lucidrains, can you share an example of how to use the distillation method in a notebook? I couldn't figure out how to use it.