
Single label pretraining on in21k using ViT

Open cissoidx opened this issue 3 years ago • 0 comments

Hi, I see that you have updated the single-label pretraining script on in21k. This is really great work. I have some questions about pretraining ViT:

  1. The default settings are for tresnet_m. Do you have the configs for vit-b-16, or are they actually the same?
  2. What is the validation-set accuracy for single-label pretraining? In the table in your README, I see that with semantic loss ViT reaches 77.6%, and further fine-tuning on in1k reaches 84.4%. But what about the single-label pretrained models?
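
To illustrate the kind of config difference question 1 is asking about: ViT backbones are usually not trained as straight drop-in replacements for CNNs like tresnet_m. Below is a minimal sketch of hypothetical hyperparameter dictionaries; every value is an assumption drawn from common ViT pretraining practice, not from this repo's actual scripts.

```python
# Hypothetical configs contrasting a CNN backbone with ViT-B/16.
# All values are illustrative assumptions, NOT taken from this repository.
tresnet_m_cfg = {
    "model_name": "tresnet_m",
    "optimizer": "adam",      # assumed default
    "lr": 3e-4,               # assumed
    "weight_decay": 1e-4,     # assumed
}

# ViT models are commonly pretrained with AdamW and heavier decoupled
# weight decay, so the config would likely differ at least here.
vit_b16_cfg = {
    **tresnet_m_cfg,
    "model_name": "vit_base_patch16_224",  # timm-style name, assumed
    "optimizer": "adamw",
    "weight_decay": 0.05,
}

print(vit_b16_cfg["optimizer"])  # → adamw
```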

cheers,

cissoidx avatar Aug 31 '21 08:08 cissoidx