ImageNet21K
Single label pretraining on in21k using ViT
Hi, I have seen that you have updated the single-label pretraining script on in21k. This is really great work. I have some questions about pretraining ViT:
- The default setting is for `tresnet_m`; do you have the configs for `vit-b-16`, or are they actually the same? (See the sketch after this list for how I am setting it up.)
- What is the accuracy on the validation set for single-label pretraining? In the table in your readme file, I see that with the semantic loss ViT reaches 77.6%, and further finetuning on in1k reaches 84.4%. But what about the single-label pretrained models?
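For context, this is roughly how I am instantiating ViT-B/16 on my side, a minimal sketch via `timm`; the model name `vit_base_patch16_224` and the class count of 11221 are my assumptions, not taken from your configs:

```python
# Minimal sketch: creating a ViT-B/16 for single-label in21k pretraining.
# Assumptions (not from the repo's configs): timm model name
# 'vit_base_patch16_224' and 11221 classes for the in21k-P label set.
import timm
import torch

model = timm.create_model(
    'vit_base_patch16_224',  # assumed ViT-B/16 variant
    pretrained=False,        # pretraining from scratch on in21k
    num_classes=11221,       # assumed in21k-P class count
)

# Quick sanity check of the output head
x = torch.randn(1, 3, 224, 224)
logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 11221])
```

If the `tresnet_m` hyperparameters (learning rate, weight decay, epochs) carry over unchanged to ViT, please let me know; otherwise a ViT-specific config would be very helpful.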
cheers,