TokenLabeling
PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers"
I am interested in whether there is any LV-ViT model setup you have tested on CIFAR-10. I would like to know the proper configuration of all the blocks when training from scratch, without pretrained weights...
If I want to train with 1p, what batch size should I allocate? Or is there a formula to compute it? Please advise.
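The excerpt does not answer this, but a widely used heuristic is the linear scaling rule: scale the learning rate in proportion to the total batch size. A minimal sketch, where every number is a hypothetical placeholder rather than a recommended LV-ViT setting:

```python
# Linear scaling rule (a common heuristic, not an official recipe from this repo):
#   lr = base_lr * total_batch_size / base_batch_size
base_lr = 1.6e-3             # hypothetical LR tuned at the reference batch size
base_batch_size = 1024       # hypothetical reference batch size (e.g. 8 GPUs x 128)
single_gpu_batch_size = 128  # whatever fits in one GPU's memory

lr = base_lr * single_gpu_batch_size / base_batch_size
print(f"scaled lr for batch size {single_gpu_batch_size}: {lr:.2e}")
```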
Hi, thanks for the wonderful work. Could you share the password to unzip the LV-ViT-S pretrained model? Thanks!
Hello! I'm interested in your token labeling technique, so I want to apply it to a CNN-based model, because ViT is very heavy to train. Can I get...
These small code adjustments aim to make the code more concise by employing optimized NumPy functions.
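As an illustration of that kind of change (the actual diff is not shown here), a per-pixel Python loop can often be replaced by a single vectorized call; the functions below are hypothetical, not taken from the PR:

```python
import numpy as np

def topk_per_pixel_loop(score, k=5):
    # Loop-based version: sort each pixel's class scores separately.
    _, H, W = score.shape
    out = np.empty((k, H, W), dtype=np.int64)
    for i in range(H):
        for j in range(W):
            out[:, i, j] = np.argsort(score[:, i, j])[::-1][:k]
    return out

def topk_per_pixel_vectorized(score, k=5):
    # Vectorized version: one argpartition over the class axis,
    # then sort only the k surviving candidates.
    idx = np.argpartition(score, -k, axis=0)[-k:]   # top-k indices, unordered
    vals = np.take_along_axis(score, idx, axis=0)
    order = np.argsort(-vals, axis=0)               # order candidates descending
    return np.take_along_axis(idx, order, axis=0)

# Quick check on random data (ties are vanishingly unlikely with random floats):
score = np.random.rand(100, 14, 14)
assert np.array_equal(topk_per_pixel_loop(score), topk_per_pixel_vectorized(score))
```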
The consequence of writing it this way: # rely more on target_cls even if target_cls is incorrect.
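For context, the comment concerns how heavily a mixed soft target weights the class-token label. A hedged sketch of that behavior follows; the weighting scheme and names are illustrative, not the repo's exact loss code:

```python
import torch
import torch.nn.functional as F

def mixed_soft_target_loss(logits, target_cls, target_aux, cls_weight=0.5):
    """Cross-entropy against a convex mix of two soft targets.

    The larger cls_weight is, the more the loss follows target_cls,
    including when target_cls itself is wrong, which is the concern
    raised in the comment above.
    """
    target = cls_weight * target_cls + (1.0 - cls_weight) * target_aux
    return (-target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```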
Hi, I am curious about a dimension inconsistency. (1) The shape of the "score_map" generated in [generate_label.py](https://github.com/zihangJiang/TokenLabeling/blob/5cc1461d0a07bc616f6b866313c2261dade44acc/generate_label.py#L271-L272) is [2, 5, H, W], but the dimension of score_maps seems...
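For reference, a [2, 5, H, W] map is consistent with stacking each pixel's top-5 scores and their class indices. A minimal sketch of how such a map could be constructed (assumed shapes, not the actual generate_label.py code):

```python
import torch

# Hypothetical dense class scores for one image: [num_classes, H, W].
score = torch.randn(1000, 14, 14)

# Per-pixel top-5 values and class indices, each of shape [5, H, W].
values, indices = score.topk(5, dim=0)

# Stack into a single [2, 5, H, W] map:
# channel 0 holds the scores, channel 1 the class indices.
score_map = torch.stack([values, indices.float()], dim=0)
print(score_map.shape)  # torch.Size([2, 5, 14, 14])
```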