
Results 63 comments of gaopengpjlab

Please refer to the following guidance for ConvMAE fine-tuning: https://github.com/Alpha-VL/ConvMAE/blob/main/FINETUNE.md

Please follow the ImageNet dataset format. Basically, you need to build two folders named dog and cat, then put the images into the corresponding folder.

train
    dog
    cat
val
    dog
    cat

The folder structure should look like the illustration above.
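If it helps, here is a minimal sketch (not part of the official repo) for checking such a layout with torchvision's ImageFolder. The dataset root "data/" and the class names are placeholders for your own data.

```python
# Minimal sketch: verify an ImageNet-style two-class layout with torchvision.
# "data/train" and "data/val" are placeholder paths; adjust to your dataset root.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
val_set = datasets.ImageFolder("data/val", transform=transform)

# ImageFolder maps each sub-folder name to a class index.
print(train_set.class_to_idx)        # e.g. {'cat': 0, 'dog': 1}
print(len(train_set), len(val_set))  # number of images found in each split
```

If both splits load and report the expected class-to-index mapping, the layout is ready for fine-tuning.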

Thanks for your interest in ConvMAE. The checkpoint only contains the encoder weights, as only the encoder weights are required for downstream tasks.

We will find a way to share the full model. Stay tuned.
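For reference, a quick way to confirm what a released checkpoint contains is to load it on CPU and list the top-level parameter prefixes. This is only a sketch: the file name is a placeholder, and the 'model' key follows the MAE-style convention, which is an assumption here.

```python
# Sketch: inspect a checkpoint to confirm it holds only encoder weights.
# "convmae_base.pth" is a placeholder file name; the 'model' key is assumed.
import torch

ckpt = torch.load("convmae_base.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # fall back to a bare state dict

# Group parameter names by their top-level prefix to see which modules exist.
prefixes = sorted({name.split(".")[0] for name in state_dict})
print(prefixes)  # an encoder-only checkpoint shows no decoder-related entries
```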

We will update the FLOPs of the classification models in a few days. Please stay tuned.
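In the meantime, one common way to estimate FLOPs for a PyTorch classification model is fvcore's FlopCountAnalysis. The snippet below is only a sketch and uses torchvision's resnet50 as a stand-in, since the exact ConvMAE model constructor is not shown here.

```python
# Sketch: estimate FLOPs of a classification model with fvcore.
# resnet50 is only a stand-in for the actual ConvMAE encoder.
import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet50

model = resnet50().eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Note: fvcore counts one fused multiply-add as one FLOP.
flops = FlopCountAnalysis(model, dummy_input)
print(f"Total FLOPs: {flops.total() / 1e9:.2f} GFLOPs")
```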

This is a great research problem that is beyond the scope of ConvMAE. We will explore the pretraining of CNN backbones in the future.

Thanks for your interest. We are working on a new version of ConvMAE with faster pretraining and improved representation ability. Pretrained models ranging from small to huge shall be released....

We have updated the README with the pretrained weights of ConvMAE V2 small, base, large, and huge.

We will update the details of ConvMAE V2 later.