ML_Decoder
Official PyTorch implementation of "ML-Decoder: Scalable and Versatile Classification Head" (2021)
Hi, thanks for your amazing work! I encountered something really weird. I downloaded `tresnet_l_stanford_card_96.41.pth` and tried to validate the result on the Stanford Cars dataset. It reached 99.69 for...
ML-Decoder's runtime is noticeably longer than a single transformer layer. I have only fixed this speed anomaly in my own multi-label classification project; the zero-shot part of the code is still unresolved.
When I try to train the model on MS-COCO 2014, I get this error that I can't fix: AttributeError: 'TransformerDecoderLayerOptimal' object has no attribute 'self_attn'. What should I do?
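A hedged sketch of why this error can occur: ML-Decoder removes self-attention from the decoder layer, so a layer built in that style never defines a `self_attn` attribute, and any code written against the standard `nn.TransformerDecoderLayer` interface that touches `layer.self_attn` will raise exactly this AttributeError. The class `DecoderLayerNoSelfAttn` below is a hypothetical minimal example, not the repo's actual `TransformerDecoderLayerOptimal`.

```python
import torch
import torch.nn as nn

d_model = 8

class DecoderLayerNoSelfAttn(nn.Module):
    """Hypothetical minimal decoder layer in the ML-Decoder style:
    cross-attention and feed-forward only, no self-attention."""

    def __init__(self, d_model: int, nhead: int):
        super().__init__()
        # Cross-attention (queries attend to image features) is kept...
        self.multihead_attn = nn.MultiheadAttention(d_model, nhead)
        self.linear1 = nn.Linear(d_model, d_model * 4)
        self.linear2 = nn.Linear(d_model * 4, d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # ...but self.self_attn is intentionally never created.

    def forward(self, tgt, memory):
        attn_out, _ = self.multihead_attn(tgt, memory, memory)
        tgt = self.norm1(tgt + attn_out)
        ff = self.linear2(torch.relu(self.linear1(tgt)))
        return self.norm2(tgt + ff)

layer = DecoderLayerNoSelfAttn(d_model=d_model, nhead=2)
tgt = torch.randn(3, 1, d_model)     # (num_queries, batch, dim)
memory = torch.randn(5, 1, d_model)  # (num_tokens, batch, dim)
out = layer(tgt, memory)
# Guard attribute access instead of assuming the full decoder interface:
has_sa = hasattr(layer, "self_attn")  # False for this layer
```

So the fix is usually to make the surrounding training code stop assuming `self_attn` exists (e.g. guard with `hasattr`), rather than to add self-attention back.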
I tried to run infer.py after installing all the dependencies. I used the model `tresnet_l_COCO__448_90_0.pth` with the default 448 image size on the sample image provided with the code....
Are pretrained ViTs available?
Dear Sir, I am working on transformer models for multi-label image classification, and your paper titled "ML-Decoder: Scalable and Versatile Classification Head" attracted my attention. However, I couldn't understand one...
I have a question regarding the evaluation of OpenImages. I noticed that in your [open-source annotation file](https://github.com/Alibaba-MIIL/PartialLabelingCSL/blob/main/OpenImages.md), some validation categories have only 1–2 samples, making it difficult to accurately...
Can you tell me the command to use the pretrained model for Stanford Cars?
In your paper, you mentioned that "In practice, the projection layer can transform the queries to any desired output, making the self-attention module redundant". But self-attention has softmax, which means...
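One way to see the redundancy claim, sketched below under the assumption that the queries are learned embeddings that are fixed at inference (as in ML-Decoder): if nothing image-dependent flows into self-attention, its output, softmax and all, is the same constant tensor on every forward pass, so learned queries or a projection layer can absorb it. This is an illustration, not the paper's proof.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_queries, nhead = 8, 4, 2

# Fixed, learned query embeddings (assumption: image-independent, as in
# ML-Decoder); shape (num_queries, batch, dim).
queries = nn.Parameter(torch.randn(n_queries, 1, d_model))
self_attn = nn.MultiheadAttention(d_model, nhead)  # dropout=0 by default

with torch.no_grad():
    # Self-attention applied to the constant queries, twice:
    out_a, _ = self_attn(queries, queries, queries)
    out_b, _ = self_attn(queries, queries, queries)
# out_a == out_b for every forward pass: the (nonlinear) softmax is
# applied to constants, so the whole block computes a fixed tensor that
# the learned queries themselves could represent directly.
```

The softmax is indeed nonlinear, but nonlinearity in a constant input still yields a constant output, which is why dropping the module saves compute without losing expressiveness here.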