ImageNet21K

Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)

19 ImageNet21K issues

Hi, I have seen that you have updated the single-label pretraining script on IN-21K. This is really great work. I have some questions about pretraining ViT: 1. The default setting...

What is the teacher model when using semantic softmax with KD? The figure in the paper is not clear on what the teacher is. Or is there no code...

Neither the paper nor the code mentions the dropout probability and SGD momentum value used when fine-tuning on ImageNet-1K. Is it the same as https://arxiv.org/pdf/2010.11929.pdf ?...

Good morning, thank you very much for your work. Can you share the hyperparameters/training procedure used to fine-tune ResNet50 on standard ImageNet? Thank you...

Typically images are normalized with 0.5 mean and 0.5 std, but I don't see anywhere in the code where this normalization occurs. Is that supposed...
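For context, the convention this question refers to maps pixel values from [0, 1] to roughly [-1, 1]. A minimal sketch of that arithmetic (this is only the common pattern; whether and where this repo's data loaders apply it must be checked in the code itself):

```python
# Per-channel normalization with mean 0.5 and std 0.5 maps pixel values
# from [0, 1] to [-1, 1]. Illustrative only -- not taken from this repo.
def normalize_05(pixels):
    """pixels: flat list of floats in [0, 1]; returns values in [-1, 1]."""
    mean, std = 0.5, 0.5
    return [(p - mean) / std for p in pixels]

print(normalize_05([0.0, 0.5, 1.0]))  # -> [-1.0, 0.0, 1.0]
```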

Thanks for your great work! I have a question about using your ResNet50 model as pretrained weights for Faster R-CNN in Detectron2: your 21K pretrained weights give 8 points lower...

Hi @mrT23 @panchonoy Thanks for your contribution, but I still think it is inconvenient for users. Could you please provide the `train.txt`, `val.txt` files like this ? (Constructed by image...

After 21K pre-training, does a label map exist for ImageNet-1K?

Hey, thanks for publishing the code and data for this project! I noticed that there is one class (n09450163, sun.n.01) which has no parent in the `child_2_parent` mapping. Should it...
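A quick way to surface classes like the one reported here is to scan for IDs that never appear as a key in the mapping. A hypothetical sketch, assuming `child_2_parent` is a plain dict of `{child_id: parent_id}` (the repo's actual file format may differ):

```python
# Find orphan nodes in a child -> parent mapping such as child_2_parent.
# Assumes a plain dict {child_id: parent_id}; adjust for the repo's format.
def find_orphans(child_2_parent, all_classes):
    """Return class IDs that never appear as a key, i.e. have no parent."""
    return sorted(c for c in all_classes if c not in child_2_parent)

mapping = {"n01": "root", "n02": "n01"}
classes = ["n01", "n02", "n09450163"]
print(find_orphans(mapping, classes))  # -> ['n09450163']
```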

Has anyone here had trouble reaching the reported accuracy for ViT-B? For some reason, the best accuracy I can get is 77% top-1 without KD, while in the paper they said...