Beit3 classification
Thanks for your inspiring and effective work. You achieved really great performance on ImageNet-1k classification. As the paper mentions, you treat classification as retrieval and perform intermediate retrieval on ImageNet-21k before fine-tuning. We are wondering how you handle polysemy and identical class names shared by different classes in ImageNet-21k.
Looking forward to your reply.
Hi @amandaluof, we use the following code to preprocess the class names:
```python
from nltk.corpus import wordnet as wn

# wordnet_id is the integer offset parsed from the ImageNet-21k synset id,
# e.g. "n02084071" -> 2084071
syn = wn.synset_from_pos_and_offset('n', wordnet_id)
# Join all lemma names of the synset into a single class name.
class_name = ", ".join(_.name() for _ in syn.lemmas())
```
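To illustrate why this helps with colliding class names, here is a minimal self-contained sketch that mimics the join above on two hypothetical synsets whose first lemma is the same surface word. The `Synset` stand-in and its lemma lists are assumptions for illustration; the real lemma lists come from `wn.synset_from_pos_and_offset`.

```python
from collections import namedtuple

# Minimal stand-in for an NLTK synset (hypothetical offsets and lemma lists).
Synset = namedtuple("Synset", ["offset", "lemma_names"])

# Two hypothetical ImageNet-21k classes whose first lemma collides ("crane").
synsets = [
    Synset(offset=1000001, lemma_names=["crane", "Grus"]),           # the bird (assumed)
    Synset(offset=1000002, lemma_names=["crane", "lifting_device"]), # the machine (assumed)
]

# Same join as the preprocessing snippet: concatenating every lemma name
# keeps colliding surface forms distinguishable as class names.
class_names = {s.offset: ", ".join(s.lemma_names) for s in synsets}
for offset, name in class_names.items():
    print(offset, "->", name)
```

Both classes would be named just "crane" if only the first lemma were used; joining all lemmas yields `crane, Grus` versus `crane, lifting_device`, giving the retrieval text encoder distinct inputs.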
The code and pre-trained models of BEiT-3 can be found at aka.ms/beit3.