se3-transformer-pytorch

Implementation of SE3-Transformers for Equivariant Self-Attention, in PyTorch. This repository is geared towards integration with an eventual Alphafold2 replication.

7 issues, sorted by most recently updated

I am trying to run an example from the README. The code is:

```
import torch
from se3_transformer_pytorch import SE3Transformer

print('Initialising model...')
model = SE3Transformer(
    dim = 512,
    heads = ...
```
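For reference, a complete version of that README example might look like the sketch below; everything after `heads` (the depth, head size, degree count, radius, and the dummy inputs) is patterned on the README rather than taken from the issue, so treat those values as assumptions.

```
import torch
from se3_transformer_pytorch import SE3Transformer

print('Initialising model...')
model = SE3Transformer(
    dim = 512,          # channels of the scalar (degree-0) point features
    heads = 8,          # number of attention heads
    depth = 6,          # number of attention blocks
    dim_head = 64,      # channels per head
    num_degrees = 4,    # feature degrees carried internally
    valid_radius = 10   # neighborhood radius within which points attend
)

feats = torch.randn(1, 1024, 512)   # per-point input features
coors = torch.randn(1, 1024, 3)     # 3D coordinates
mask  = torch.ones(1, 1024).bool()  # valid-point mask

out = model(feats, coors, mask)     # (1, 1024, 512)
```

Note that `out` is already point-wise: row `i` holds the transformed feature for input point `i`.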

Is there a way to add an equivariant linear layer as a head? It would be great if you could provide a minimal example.
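One way to do this, sketched under the assumption that a degree's features are laid out as `(batch, points, channels, 2l + 1)`: a bias-free linear map that mixes only the channel axis, with weights shared across the `2l + 1` components, commutes with rotations and so preserves equivariance. `EquivariantLinearHead` is a hypothetical name, not part of the library.

```
import torch
from torch import nn

# Hypothetical equivariant head: channel mixing only, no bias.
# The weight acts on the channel axis and is shared across the
# (2l + 1) "m" components, so it commutes with rotations.
class EquivariantLinearHead(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) / dim_in ** 0.5)

    def forward(self, feats):
        # feats: (batch, points, dim_in, 2l + 1) for a single degree
        return torch.einsum('oi,...im->...om', self.weight, feats)
```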

Thank you for your work. I used your SE3-Transformer reproduction as part of my model, but its current test performance is not very good. I guess it may be...

Hi, I normally use the dataloader from PyG to handle my molecule dataset. Can you provide an example of a real multi-epoch training loop, please? I ran your sample...
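A minimal multi-epoch loop over a PyG dataloader might look like the sketch below. The synthetic `dataset`, the model hyperparameters, and the MSE loss against `data.y` are all assumptions about the setup; `to_dense_batch` is used to pad the graphs into the dense `(batch, points, dim)` layout the model expects.

```
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.utils import to_dense_batch
from se3_transformer_pytorch import SE3Transformer

# Synthetic stand-in for a molecule dataset: per-node features `x`,
# 3D positions `pos`, and per-node regression targets `y`.
dataset = [
    Data(x=torch.randn(20, 64), pos=torch.randn(20, 3), y=torch.randn(20, 64))
    for _ in range(32)
]

model = SE3Transformer(dim = 64, heads = 4, depth = 1, num_degrees = 2)
loader = DataLoader(dataset, batch_size=8, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for data in loader:
        feats, mask = to_dense_batch(data.x, data.batch)  # (b, n, d), (b, n)
        coors, _ = to_dense_batch(data.pos, data.batch)   # (b, n, 3)
        out = model(feats, coors, mask)                   # (b, n, d)
        # out[mask] restores the flat node order, matching data.y
        loss = F.mse_loss(out[mask], data.y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```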

Hi, as an se3-transformer beginner, I want to apply it to process points in order to get point-wise features. I ran the first demo successfully, but I don't understand what the parameters mean....

Hi Lucidrains, thanks for your great work :1st_place_medal: Do we need to keep or re-use this backup [data](https://github.com/lucidrains/se3-transformer-pytorch/tree/main/se3_transformer_pytorch/data)?

Thanks for your great work! The se3-transformer is powerful, but it seems to be memory-hungry. I built a model with the following parameters and got a "CUDA out of memory" error...
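If it helps, the levers that usually reduce memory here are a narrower and shallower model, fewer degrees, a cap on attended neighbors, and the reversible-network option; a sketch is below. The specific values, and my reading of `num_neighbors` and `reversible` as the relevant keyword arguments, are assumptions based on the README, not a tested recipe.

```
import torch
from se3_transformer_pytorch import SE3Transformer

# Illustrative memory-saving configuration; the values are assumptions.
model = SE3Transformer(
    dim = 64,           # narrower features
    heads = 4,
    depth = 2,          # shallower network
    num_degrees = 2,    # fewer feature degrees (cost grows quickly with this)
    num_neighbors = 16, # attend only to the nearest points
    reversible = True   # reversible blocks keep memory roughly flat in depth
)

feats = torch.randn(1, 256, 64)
coors = torch.randn(1, 256, 3)
mask = torch.ones(1, 256).bool()

out = model(feats, coors, mask)  # (1, 256, 64)
```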