
Pretrained models for ResNet in SO(2) and SE(3)

Open olayasturias opened this issue 1 year ago • 4 comments

Hello all,

I was wondering whether anyone has a pretrained model available for the equivariant ResNet from this example and the SE(3)-equivariant model in this other example, trained on a larger dataset such as ImageNet or similar.

Thanks!

olayasturias avatar Aug 15 '23 10:08 olayasturias

Hi, there are existing methods showing that you don't need to retrain an equivariant version of ResNet (or other large pretrained models) from scratch to obtain a pretrained equivariant ResNet. Instead, you can "adapt" a pretrained ResNet to be equivariant to a given group with architecture-agnostic equivariance methods, such as canonicalization.

Please feel free to check out Equivariant Adaptation of Large Pretrained Models, NeurIPS 2023. It shows strong results for discrete groups in the image domain and for continuous groups in point clouds and other tasks; adapting pretrained image models to continuous groups still poses a few challenges and is work in progress.
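For intuition, here is a minimal sketch of the canonicalization idea (my own simplification, not the paper's code): a small network predicts a discrete rotation for each input, the rotation is undone, and the canonicalized image is fed to an unmodified pretrained ResNet. The names `canonicalizer` and `canonicalize_and_classify` are hypothetical.

```python
import torch
import torchvision
import torchvision.transforms.functional as TF

# `canonicalizer` stands for any small network mapping an image batch to
# logits over `n_rotations` discrete rotations (one score per element of
# the group C_n). The backbone is a plain, non-equivariant pretrained
# ResNet-50 from torchvision.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

@torch.no_grad()
def canonicalize_and_classify(x, canonicalizer, n_rotations=8):
    logits = canonicalizer(x)                    # (B, n_rotations)
    k = logits.argmax(dim=1)                     # predicted group element per image
    # Undo the predicted rotation so the backbone always sees the
    # canonical ("upright") pose; the composite is then C_n-invariant.
    angles = -k.float() * (360.0 / n_rotations)  # degrees
    canon = torch.stack([TF.rotate(img, float(a)) for img, a in zip(x, angles)])
    return backbone(canon)
```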

sibasmarak avatar Feb 14 '24 03:02 sibasmarak

Hi Siba, thank you for your answer. That's fascinating work! From what I understood, instead of training an equivariant ResNet from scratch, you precede a ResNet-50 with your canonicalization network and then fine-tune the ResNet while training that canonicalization module. Is that correct? Do you have code examples of how you made that work? I'm particularly interested in the network you implemented with the escnn library. Is it similar to the CNN in this notebook? How many layers, and in general which hyperparameters, worked well for you?

olayasturias avatar Feb 19 '24 14:02 olayasturias

Hi, thank you for taking a look at the paper! Yes, indeed. Note that you don't necessarily need to fine-tune the pretrained model: as we show in the case of the Segment Anything Model, you can train only the equivariant canonicalization network to learn the identity orientation with prior regularization. You do need a regularization loss that aligns the outputs of the canonicalization network with the orientation of the (pre-training) dataset.
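If it helps, the prior-regularization idea can be sketched roughly like this (a simplified stand-in, not the paper's exact loss): on training data that is assumed to already sit in the dataset orientation, push the canonicalizer's distribution over group elements toward the identity element.

```python
import torch
import torch.nn.functional as F

def prior_regularization(canon_logits, identity_index=0):
    """Cross-entropy pushing the canonicalizer toward the identity element.

    Assumes training images are already in the dataset orientation
    (e.g. upright ImageNet photos), so the "correct" canonicalization
    there is to do nothing. The paper's formulation may differ in detail.
    """
    target = torch.full((canon_logits.shape[0],), identity_index,
                        dtype=torch.long, device=canon_logits.device)
    return F.cross_entropy(canon_logits, target)

# With a frozen backbone, only the canonicalizer's parameters are updated:
#   loss = task_loss + lambda_prior * prior_regularization(canon_logits)
```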

We are planning to release our user-friendly library before the end of February, with examples and tutorials to help people get started with canonicalization. I will let you know once we release it. A schematic of the pipeline is shown in Figure 2 of the paper.

Yes, the canonicalization networks are similar to the one in the notebook you linked. We give some details on hyperparameter tuning in Appendix Section B: we swept over the number of layers, kernel sizes, dropout (switching dropout off generally helped), and learning rates. In any case, the canonicalization networks are tiny compared to the pretrained model under consideration, which makes the approach cheap (some parameter counts are highlighted in Table 3).
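For reference, a canonicalization network along the lines of that notebook could look roughly like this (a minimal sketch with illustrative sizes, not the paper's configuration): a few R2Conv layers over the regular representation of C8, spatially pooled so that the 8 output channels act as scores for the 8 rotations. Rotating the input permutes those scores, which is what makes the argmax a valid canonicalizer.

```python
import torch
from escnn import gspaces
from escnn import nn as enn

class C8Canonicalizer(torch.nn.Module):
    """Tiny C8-equivariant network scoring each of the 8 rotations.

    Layer count, widths, and kernel sizes are illustrative, not the
    values used in the paper (see its Appendix B for the actual sweep).
    """
    def __init__(self, channels=16):
        super().__init__()
        gspace = gspaces.rot2dOnR2(N=8)  # discrete rotation group C8
        self.in_type = enn.FieldType(gspace, 3 * [gspace.trivial_repr])
        hid_type = enn.FieldType(gspace, channels * [gspace.regular_repr])
        out_type = enn.FieldType(gspace, [gspace.regular_repr])
        self.net = enn.SequentialModule(
            enn.R2Conv(self.in_type, hid_type, kernel_size=5, padding=2),
            enn.InnerBatchNorm(hid_type),
            enn.ReLU(hid_type),
            enn.R2Conv(hid_type, hid_type, kernel_size=5, padding=2),
            enn.InnerBatchNorm(hid_type),
            enn.ReLU(hid_type),
            enn.R2Conv(hid_type, out_type, kernel_size=5, padding=2),
        )

    def forward(self, x):
        x = enn.GeometricTensor(x, self.in_type)
        y = self.net(x).tensor       # (B, 8, H, W): one map per group element
        return y.mean(dim=(2, 3))    # (B, 8) scores; argmax picks the rotation
```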

sibasmarak avatar Feb 19 '24 21:02 sibasmarak

I pretrained some equivariant ResNets on ImageNet-1k. The models and weights can be found here.

The canonicalization approach is appealing since it can be applied to any pretrained model. I haven't had a chance to compare against it yet, but I'm curious whether there is any performance gap.

dmklee avatar Apr 04 '24 15:04 dmklee