equiformer_v2
[ICLR 2024] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
Thank you for the great work on EquiformerV2. When I test its performance on the MD17/MD22 datasets, I find it lags far behind SOTA models like ViSNet. For example, on MD22_AT_AT, when...
Hello, thank you for sharing your great work! I was wondering if it would be possible for you to share the training logs, similar to how the logs were provided...
Hi, I am trying to integrate this with the e3nn package. For the SO3Embedding class, how can I convert it to an irrep compatible with the e3nn convention?...
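A minimal numpy sketch of the kind of layout conversion this question asks about. It assumes SO3_Embedding stores coefficients degree-major as `(num_nodes, (lmax+1)^2, channels)` with every degree carrying the same channel count, and that e3nn wants each channel's `2l+1` components contiguous within each degree; the function name and these layout assumptions are illustrative, not the library's own converter.

```python
import numpy as np

def so3_to_e3nn_layout(x, lmax):
    """Convert (N, (lmax+1)^2, C) degree-major coefficients into a flat
    (N, C * (lmax+1)^2) array with e3nn-style per-irrep, channel-major blocks.
    Assumes every degree l = 0..lmax carries the same C channels."""
    n, ncoef, c = x.shape
    assert ncoef == (lmax + 1) ** 2
    blocks = []
    offset = 0
    for l in range(lmax + 1):
        m = 2 * l + 1
        # (N, m, C) -> (N, C, m): group each channel's m-components together
        block = x[:, offset:offset + m, :].transpose(0, 2, 1)
        blocks.append(block.reshape(n, c * m))
        offset += m
    return np.concatenate(blocks, axis=1)
```

The matching irreps string would then read like `f"{C}x0e + {C}x1o + ..."` up to `lmax`; parity conventions may differ from the real SO3_Embedding, so it is worth verifying the result against a known rotation.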
Hi, nice work! I was wondering what it would take to accommodate systems with nodes that have additional non-scalar features. Any hints or snippets would be greatly appreciated. Thanks.
As the title states, how do I use EquiformerV2 to train on the IS2RE dataset? I think I need some tutorials.
Hi, I've heard of this strong model which can learn atomic coordinates. Now I want to adapt this model for my project, but I find the code is a bit...
I wanted to train this network on the SPICE dataset (a similar task where I want to predict forces and energy from structure). I was comparing training speed with `torchmd-net`...
Hi @yilunliao, Thanks for the nice codebase - I am adapting it for another purpose, and I was running into some issues when checking the outputs are actually equivariant. Are...
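A generic numpy sketch of the kind of check this question refers to: rotate the input, run the model, and compare against rotating the output. The toy function `f` below stands in for the network and is purely illustrative, not part of the codebase.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: 3x3 rotation about a unit axis by `angle` radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def f(pos):
    """Toy equivariant 'model': each vector scaled by its own norm."""
    return pos * np.linalg.norm(pos, axis=-1, keepdims=True)

pos = np.random.default_rng(0).normal(size=(5, 3))
R = rotation_matrix(np.array([1.0, 2.0, 3.0]), 0.7)
# Equivariance: f(R x) should equal R f(x) up to floating-point error.
err = np.abs(f(pos @ R.T) - f(pos) @ R.T).max()
print(f"max equivariance error: {err:.2e}")
```

For a real checkpoint the same pattern applies with the model's forward pass in place of `f`; non-equivariant errors typically show up orders of magnitude above machine precision.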
Could you provide instructions on how to run experiments in the multi-node multi-GPU setting without using submitit? For example, I have 2 nodes, each with 16 GPUs....
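A hedged sketch of one common way to launch without submitit, using PyTorch's `torchrun` rendezvous across two 16-GPU nodes. The entry-point script and config path are placeholders; the repo's actual launcher arguments may differ.

```shell
# Run once on EACH node; NODE_RANK is 0 on the first node, 1 on the second.
# MASTER_ADDR is the hostname or IP of node 0.
torchrun \
  --nnodes=2 \
  --nproc_per_node=16 \
  --node_rank=$NODE_RANK \
  --rdzv_backend=c10d \
  --rdzv_endpoint=$MASTER_ADDR:29500 \
  main.py --mode train --config-yml configs/example.yml  # placeholder entry point
```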
Hi, not issues but some questions. 1. Is there any comparison of the performance against other SOTA equivariant nets such as MACE or NequIP? 2. Is MD...