nequip
🌟 [FEATURE] OpenMM
Is your feature request related to a problem? Please describe. Using LAMMPS can be a bit of a pain.
Describe the solution you'd like OpenMM is a really easy-to-use, Pythonic MD code that also allows a user-defined set of forces and runs ridiculously fast on one GPU. It seems they are also starting to incorporate some ML methods. It would be amazing if one could run NEQUIP MD with OpenMM.
Describe alternatives you've considered ASE is an option, but wouldn't get the speed of a GPU-accelerated MD code. OpenMM will JIT user-defined forces to the GPU.
Left a question on OpenMM's GitHub about how this would work, and here is the reply: https://github.com/openmm/openmm-ml/issues/48#issuecomment-1363274507 I am about to see how hard this would be for a high-level user to do themselves. Any help would be appreciated!
Hi @asedova ,
Thanks very much for your interest in our methods and working on this integration!
We previously had some discussion about this (https://github.com/mir-group/nequip/issues/76) and the great news is that the main thing that was missing---a PyTorch neighborlist--- has now been implemented very nicely by Felix Musil with good performance and what appear to be pretty rigorous correctness tests: https://github.com/felixmusil/torch_nl. So it's now the perfect time for a "high-level user" to get this done finally! 🎉
Reading @peastman 's comment in the linked thread, I think the best way forward would be a 1-to-1 translation of the OpenMM-ML ANI interface https://github.com/openmm/openmm-ml/blob/main/openmmml/models/anipotential.py.
It would be something like:

- Load a deployed NequIP model using `nequip.scripts.deploy.load_deployed_model` (`nequip.ase.NequIPCalculator` is a good example of this); the result is a TorchScript model.
- Similar to `ANIForce`, define a wrapper around the deployed NequIP model that:
  - Uses `torch_nl` to construct `edge_index` for NequIP from the positions and, optionally, cell + PBC. It is important to set `edge_cell_shift` correctly as well; see here for how we do this with the `ase` neighborlist. For complete PBC correctness, make sure to replicate the self-interaction filtering logic as well.
  - Builds a NequIP input dictionary (`Dict[str, torch.Tensor]`, keys can be found here). Necessary keys are `pos`, `edge_index`, `atom_types` and, if PBC are used, `cell` and `edge_cell_shift`.
  - Somehow maps OpenMM atom types / species into the NequIP `nequip.data.AtomicDataDict.ATOM_TYPES_KEY` 0-based index. See again `nequip.ase.NequIPCalculator` for an example of how to do this. I'm not super familiar with how the OpenMM type machinery works, but basically there should be a user-defined mapping of OpenMM types into NequIP type names, which the code then internally uses to create a mapping of OpenMM types to NequIP type numbers.
  - Calls the deployed NequIP model with this dict as input, and gets the total energy and forces from the result: `model(in_dict)[nequip.data.AtomicDataDict.TOTAL_ENERGY_KEY / FORCE_KEY]`
  - Does any unit conversions according to the user's provided conversions.
  - Returns this total energy and forces; see https://github.com/openmm/openmm-torch#computing-forces-in-the-model
- `torch.jit.script` this whole thing, then `torch.jit.save` it to a tempfile and pass it along to OpenMM-Torch.

It would be very nice to see this PRed into OpenMM-ML; please let me know if you have more questions, and feel free to ask/open PRs (off `develop`) here if there are any additions to `nequip` needed to make it work.
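To make the shape of that recipe concrete, here is a heavily simplified, hypothetical sketch. A stub stands in for the deployed model, a naive non-periodic O(N^2) neighborlist stands in for `torch_nl`, and all class/key names are illustrative, not the eventual OpenMM-ML API:

```python
import torch

# A tiny stand-in for a deployed NequIP model (which would really come from
# nequip.scripts.deploy.load_deployed_model and return keyed tensors).
class StubModel(torch.nn.Module):
    def forward(self, data):
        pos = data["pos"]
        energy = (pos ** 2).sum().reshape(1, 1)  # placeholder energy
        return {"total_energy": energy, "forces": -2.0 * pos}

class NequIPForce(torch.nn.Module):
    """Hypothetical wrapper in the spirit of OpenMM-ML's ANIForce.

    Builds the NequIP input dict from positions and returns the total
    energy; OpenMM-Torch then obtains forces by backpropagating through
    the returned energy. Real code would use torch_nl for the neighborlist
    (with cell / PBC / edge_cell_shift); here a naive O(N^2), non-periodic
    stand-in is used.
    """

    def __init__(self, model, atom_types, r_max, energy_to_kjmol):
        super().__init__()
        self.model = model
        self.register_buffer("atom_types", atom_types)
        self.r_max = r_max
        self.energy_to_kjmol = energy_to_kjmol  # user-supplied conversion

    def forward(self, positions):
        n = positions.shape[0]
        # all-pairs distances, excluding self-pairs
        dist = torch.cdist(positions, positions)
        mask = (dist < self.r_max) & ~torch.eye(n, dtype=torch.bool)
        edge_index = mask.nonzero().t()  # shape (2, n_edges)
        data = {
            "pos": positions,
            "edge_index": edge_index,
            "atom_types": self.atom_types,
        }
        out = self.model(data)
        return out["total_energy"].sum() * self.energy_to_kjmol

force = NequIPForce(StubModel(), torch.tensor([0, 0, 0]), r_max=5.0,
                    energy_to_kjmol=96.485)  # e.g. a model trained in eV
positions = torch.zeros(3, 3)
positions[1, 0] = 0.1
energy = force(positions)
```

The real wrapper would then be passed through `torch.jit.script` and handed to OpenMM-Torch, as described above.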
OK @asedova I've edited the above to reflect more recent versions of OpenMM-Torch 🙂
Thanks so much! I am going to try to see what I can do. I just started looking at NEQUIP after some time with DeePMDkit, so it may take me some time to work through all this. So happy there is momentum in this direction!
Great! Let me know if you have any questions. It looks like a lot but I think it should hopefully be straightforward, mostly "copy-paste" style coding.
there should be a user-defined mapping of OpenMM types into NequIP type names, which the code then internally uses to create a mapping of OpenMM types to NequIP type numbers.
OpenMM doesn't have any fixed set of types. It treats atom types as a property of a particular force field. `addForces()` receives a Topology object which contains chemical information: elements, bonds, residue names, etc. No set of atom types has been applied to them.
How does NequIP define atom types?
It treats atom types as a property of a particular force field.
That sounds quite compatible.
In `nequip`, atom types are just 0-based integer indexes, of which there can be arbitrarily many. Each is also associated with a string name for usability. What they mean is specific to a given trained model and how the user did the training, but typically the names just correspond to element symbols. So clearly there should be some kind of user-defined mapping from OpenMM "groups" of atoms (however that is most naturally defined) to `nequip` type names; the user is then responsible for making sure this mapping makes sense for the model they are using.

Similarly, `nequip` places no restriction on the system of units, as long as they are self-consistent; a trained model has the units of the data used to train it. So it's also necessary to allow the user to supply the unit conversion from the units of the particular model they are using into kJ/mol for OpenMM.
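For concreteness, such a user-supplied conversion could look like the sketch below. The physical constants are standard; the function and constant names are made up for illustration:

```python
# OpenMM works internally in kJ/mol and nm. A model trained on DFT data
# in, say, eV and Angstrom needs per-quantity conversion factors.
EV_TO_KJ_PER_MOL = 96.48533   # 1 eV per particle ~= 96.485 kJ/mol
KCAL_TO_KJ = 4.184            # 1 kcal (thermochemical) = 4.184 kJ
ANGSTROM_TO_NM = 0.1

def conversion_factors(energy_unit: str, length_unit: str):
    """Return (energy, length, force) factors into OpenMM units."""
    energy = {"eV": EV_TO_KJ_PER_MOL,
              "kcal/mol": KCAL_TO_KJ,
              "kJ/mol": 1.0}[energy_unit]
    length = {"Angstrom": ANGSTROM_TO_NM, "nm": 1.0}[length_unit]
    # forces transform as energy / length
    return energy, length, energy / length

e_fac, l_fac, f_fac = conversion_factors("eV", "Angstrom")
```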
@peastman what troubles me is that with NEQUIP (and other similar codes like DeePMDkit) there is no topology, no bonds, no residues. And for good reason: since it's trained on DFT, these can all actually change during a simulation, just like in AIMD. I'm not super familiar with ANI, but with these codes the input would just be atom type (element) + features in, and then the model produces energy + forces on each atom. The most compatible traditional input file to use as an example would be an xyz file, like CP2K uses.
receives a Topology object which contains chemical information: elements, bonds, residue names, etc.
I think as a first solution, you can just ignore all of that, take the elements, convert them to chemical symbols, and use those as NequIP type names? For most models this will be sufficient... what is missing would be something like how do you specify a model that treats, say, the oxygen atoms in one molecule as different from the oxygen atoms in a second molecule. That is clearly some kind of a function of the topology, I'm just not sure in OpenMM language how you let the user specify that.
That would be ok-- for now, we can let the model decide how the oxygen will act in different contexts, which is what I planned to start with anyway. I also think OpenMM would be ok if the topology object is not coming from any of the normal topology files from packages like AMBER and GROMACS. OpenMM in general allows a good amount of user-defined things, it's just not how I normally use it so I need to do some reading up. Here's an example: https://gpantel.github.io/computational-method/LJsimulation/
what is missing would be something like how do you specify a model that treats, say, the oxygen atoms in one molecule as different from the oxygen atoms in a second molecule.
In any model that allows forming and breaking bonds, I think the only option is to equate atom types with elements. Everything else can change with time. If the molecules are required to stay fixed over a simulation, that does allow incorporating more information into the atom types. I've experimented with including formal charges and/or hybridization states into the definitions of types, and it does seem to help accuracy.
Most force fields in OpenMM identify atom types by matching templates to residues. That isn't required though. For example, OpenFF uses a completely different mechanism of assigning parameters based on matching SMIRKS patterns (and not involving atom types at all). For ML models, you can build your model however you want. The PyTorch model and the Python code to build it are both completely in your control.
👍 I vote then to start with just taking OpenMM element -> chemical symbol as NequIP type name, possibly with an option of letting the user provide their own function that maps OpenMM topology -> NequIP type names, since every trained NequIP model will be different in this way.
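That default-with-an-override idea can be sketched in a few lines of plain Python (function and argument names are hypothetical):

```python
def assign_nequip_types(element_symbols, type_names, explicit_types=None):
    """Map per-atom element symbols to 0-based NequIP type indices.

    type_names: the type names baked into the trained model, in index order
    (typically element symbols).
    explicit_types: optional user-supplied per-atom type list overriding
    the default symbol -> type-name lookup, for models where type is not
    simply the element.
    """
    if explicit_types is not None:
        if len(explicit_types) != len(element_symbols):
            raise ValueError("explicit_types must have one entry per atom")
        return list(explicit_types)
    name_to_index = {name: i for i, name in enumerate(type_names)}
    try:
        return [name_to_index[sym] for sym in element_symbols]
    except KeyError as err:
        raise ValueError(f"model has no type named {err}") from None

# default: element symbol -> type index from the model's type names
types = assign_nequip_types(["O", "H", "H"], ["H", "C", "O"])
# override: user explicitly supplies per-atom types
types_explicit = assign_nequip_types(["O", "H"], ["H", "O"],
                                     explicit_types=[1, 1])
```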
Hi @asedova ,
Just curious if you'd had any luck with this. Thanks!
Hi @asedova @Linux-cpp-lisp how is progress on this going? I am an RSE working with OpenMM, and I have time available to help write a wrapper in openmm-ml for NequIP. Let me know if you have any questions and how far you have got with this!
Hey all-
Progress is slow, since we have never used NEQUIP that much. We are much more familiar with DeePMD. I should be able to get back to this soon; we are finishing up some DeePMD work. @sef43 I was wondering if there would be a way to pass the virial/stress tensor to OpenMM, based on this: https://github.com/mir-group/nequip/discussions/291#discussioncomment-4672198
@sef43 hi, thank you for reaching out! :wave: We'd love to see this happen, so your help and hopefully leadership on it as an RSE who knows OpenMM well would be fantastic.
I think all my relevant thoughts are collected in the post above (https://github.com/mir-group/nequip/issues/288#issuecomment-1363486467), but please feel free to get in touch either here or by email (see my GitHub profile) if there is more we should discuss.
@asedova no worries; do you have any code we ought to work off, or would it be better for @sef43 to start a fresh repo / fork to work on?
(I'm assuming @sef43 that, as laid out above, all of this wrapper should live in the fork of openmm-ml and eventually be merged there, but if there is code that we think is more appropriate in our repos we can make that happen easily too.)
I've just added a few more notes to the post above with information on how we can go about this, FYI.
Regarding stress/virial @asedova, my understanding is that OpenMM doesn't support or need it: https://github.com/openmm/openmm-torch/issues/91#issuecomment-1359169477
I would suggest just starting with a basic box-of-water example first, as done in the Nature Comms paper. Our systems are really difficult and we are still working out some of the science. Has getting a condensed-phase NNP to work with OpenMM happened before? I am confused because it seems there is a lot of gas-phase work within ANI/TorchANI.
Sure--- I think, however, that the science questions that you bring up are fairly decoupled from the software-level interfacing questions, which are fairly mechanical and binary (either they correctly transfer the predictions of the neural network potential or not, regardless of how useful / accurate that potential model is)
At the software level, I think the biggest difficulty moving from gas to condensed phase is PBC, but `torch_nl` should handle that as described above.
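To illustrate what the periodic neighborlist has to produce, here is a brute-force O(N^2) sketch for an orthorhombic box (an assumption made for simplicity; `torch_nl` handles general cells and is what should actually be used):

```python
from itertools import product

def neighbor_pairs(positions, box, cutoff):
    """Brute-force periodic neighborlist for an orthorhombic box.

    Returns (i, j, shift) triples, where `shift` is the integer cell shift
    applied to atom j -- the analogue of NequIP's edge_cell_shift. The
    true self-pair (i == j in the home cell) is excluded, but an atom may
    still neighbor its own periodic image. Assumes cutoff < box lengths,
    so shifts of -1/0/+1 suffice.
    """
    pairs = []
    shifts = list(product((-1, 0, 1), repeat=3))
    for i, ri in enumerate(positions):
        for j, rj in enumerate(positions):
            for s in shifts:
                if i == j and s == (0, 0, 0):
                    continue  # exclude the true self-interaction
                d2 = sum((rj[k] + s[k] * box[k] - ri[k]) ** 2
                         for k in range(3))
                if d2 < cutoff ** 2:
                    pairs.append((i, j, s))
    return pairs

# Two atoms near opposite faces of a 10-unit box: they see each other
# only through the boundary, with a nonzero cell shift.
pairs = neighbor_pairs([(0.5, 0.0, 0.0), (9.5, 0.0, 0.0)],
                       (10.0, 10.0, 10.0), cutoff=2.0)
```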
I will start doing this then!
(I'm assuming @sef43 that, as laid out above, all of this wrapper should live in the fork of openmm-ml and eventually be merged there, but if there is code that we think is more appropriate in our repos we can make that happen easily too.)
Yes, I will make a fork with the aim that it ends up merged into openmm-ml main. It shouldn't require any changes to the nequip repo. Your steps in the comment above are very clear, thank you; they should be most of what I need. I will let you know when I have questions.
As @Linux-cpp-lisp has said there should be no major reasons why condensed phase simulations wouldn't work provided an appropriately trained neural network is used. And yes OpenMM uses a Monte-Carlo barostat for NPT simulations, this means it does not do, and does not need, a virial calculation.
Fantastic! Thanks again @sef43 , looking forward to having this!
It shouldn't require any changes to the nequip repo. Your steps in the comment above are very clear, thank you; they should be most of what I need. I will let you know when I have questions.
Sounds good, feel free to reach out any time here or by email as is convenient for you.
@Linux-cpp-lisp I have a WIP fork branch here: https://github.com/sef43/openmm-ml/tree/nequip
I have done a first implementation. It works for the toluene example; I have put an example here, which can be run in Colab.
It uses a deployed model that I trained using your example `nequip-train configs/example.yaml`.
Things I still need to do:
- Implement PBCs properly. Can you provide me with a test case that is a liquid/condensed phase? I will need to be able to run the training, as there are some issues with pytorch versions (more below)
- cleanup/check the way atom types/atomic numbers are used
- test on GPUs
- deal with different unit conversions; currently Angstrom and kcal/mol from the toluene example are hardcoded
Some issues:

- I think the version of torch used by nequip that comes from pip (`pip install torch`) is not compatible with the version openmm-torch uses (`conda install -c conda-forge pytorch`). I got around this by using nequip to train the toluene example with conda-forge pytorch.
- When I use `nequip.scripts.deploy.load_deployed_model` I get errors, which I have put below. If I instead just use `torch.jit.load` directly then it works. Do you know why this might be the case? It looks like `load_deployed_model` also just calls `torch.jit.load`.

PyTorch error from using `nequip.scripts.deploy.load_deployed_model`:
```
/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/nequip/utils/_global_options.py:58: UserWarning: Setting the GLOBAL value for jit fusion strategy to `[('DYNAMIC', 3)]` which is different than the previous value of `[('STATIC', 2), ('DYNAMIC', 10)]`
  warnings.warn(
Traceback (most recent call last):
  File "/Users/stephen/Documents/nequip/openmm-ml/example/run_nequip.py", line 19, in <module>
    system = potential.createSystem(pdb.topology)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/openmmml/mlpotential.py", line 178, in createSystem
    self._impl.addForces(topology, system, None, 0, **args)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/openmmml/models/nequippotential.py", line 169, in addForces
    module = torch.jit.script(nequipforce)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_script.py", line 1286, in script
    return torch.jit._recursive.create_script_module(
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 458, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_script.py", line 615, in _construct
    init_fn(script_module)
  File "/Users/stephen/miniconda3/envs/neqmm/lib/python3.10/site-packages/torch/jit/_recursive.py", line 500, in init_fn
    cpp_module.setattr(name, scripted)
RuntimeError: Could not cast attribute 'model' to type __torch__.torch.jit._script.RecursiveScriptModule (of Python compilation unit at: 0x6000032964f8): Expected a value of type '__torch__.torch.jit._script.RecursiveScriptModule (of Python compilation unit at: 0x6000032964f8)' for field 'model', but found '__torch__.nequip.nn._rescale.___torch_mangle_0.RescaleOutput (of Python compilation unit at: 0x60000326ee58)'
Exception raised from setattr at /Users/runner/miniforge3/conda-bld/pytorch-recipe_1664817728005/work/torch/csrc/jit/api/object.h:67 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 92 (0x110c0559c in libc10.dylib)
frame #1: torch::jit::Object::setattr(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, c10::IValue) + 1312 (0x1344d6794 in libtorch_python.dylib)
frame #2: void pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_3, void, torch::jit::Object&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::object, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::initJitScriptBindings(_object*)::$_3&&, void (*)(torch::jit::Object&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, pybind11::object), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) + 840 (0x1346775e8 in libtorch_python.dylib)
frame #3: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3464 (0x13400e71c in libtorch_python.dylib)
frame #4: cfunction_call + 80 (0x10434ca68 in python3.10)
frame #5: _PyObject_MakeTpCall + 612 (0x1042fa050 in python3.10)
frame #6: method_vectorcall + 268 (0x1042fd770 in python3.10)
frame #7: call_function + 524 (0x1043e8e38 in python3.10)
frame #8: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #9: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #10: call_function + 524 (0x1043e8e38 in python3.10)
frame #11: _PyEval_EvalFrameDefault + 26500 (0x1043e4c38 in python3.10)
frame #12: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #13: call_function + 524 (0x1043e8e38 in python3.10)
frame #14: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #15: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #16: call_function + 524 (0x1043e8e38 in python3.10)
frame #17: _PyEval_EvalFrameDefault + 26500 (0x1043e4c38 in python3.10)
frame #18: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #19: call_function + 524 (0x1043e8e38 in python3.10)
frame #20: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #21: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #22: call_function + 524 (0x1043e8e38 in python3.10)
frame #23: _PyEval_EvalFrameDefault + 26388 (0x1043e4bc8 in python3.10)
frame #24: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #25: method_vectorcall + 516 (0x1042fd868 in python3.10)
frame #26: _PyEval_EvalFrameDefault + 27276 (0x1043e4f40 in python3.10)
frame #27: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #28: call_function + 524 (0x1043e8e38 in python3.10)
frame #29: _PyEval_EvalFrameDefault + 26348 (0x1043e4ba0 in python3.10)
frame #30: _PyEval_Vector + 2056 (0x1043ddba8 in python3.10)
frame #31: run_mod + 216 (0x104438d7c in python3.10)
frame #32: _PyRun_SimpleFileObject + 1264 (0x10443881c in python3.10)
frame #33: _PyRun_AnyFileObject + 240 (0x1044377f4 in python3.10)
frame #34: Py_RunMain + 2340 (0x10445c8ac in python3.10)
frame #35: pymain_main + 1272 (0x10445da68 in python3.10)
frame #36: main + 56 (0x1042a3dc8 in python3.10)
frame #37: start + 2544 (0x198003e50 in dyld)
```
@sef43 fantastic! That was quick 😆 , thanks!
Regarding the issues, sort of in order:
- I've now removed on `develop` the explicit dependence of `nequip` on `torch`, so that the user's installation takes precedence, since `pip` is really bad about trying to override your existing, correct PyTorch installation. We always use PyTorch from `conda` instead, but also always take from PyTorch's own channels, something like: `conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia` ... but I'm not sure how that would interact with OpenMM-Torch's dependencies.
- We presently support PyTorch 1.11 and 1.13 (2.0 should work once released but is untested; 1.12 is broken due to PyTorch bugs).
- Using `nequip.scripts.deploy.load_deployed_model(..., freeze=False)` should fix it.
Implement PBCs properly. Can you provide me with a test case that is a liquid/condensed phase? I will need to be able to run the training, as there are some issues with pytorch versions (more below)
Two easy options would be:

- Simple test data on a bulk metal from the EMT classical potential (see `minimal_toy_emt.yaml`):

```yaml
dataset: EMTTest # type of data set, can be npz or ase
dataset_element: Cu
dataset_supercell: [4, 4, 4]
dataset_num_frames: 50
chemical_symbols:
  - Cu
```

- The silver dataset from our Allegro paper (again a one-species bulk metal), if you prefer real data: `Ag_warm_nospin.xyz` (https://archive.materialscloud.org/record/2022.128)
Both should be in units of eV and Å. If it's easier, I'm happy to train a model quickly and email it to you as well if you prefer.
cleanup/check the way atom types/atomic numbers are used
Am I right in understanding that in OpenMM atom type == atomic number, and that OpenMM only reasons about atoms in terms of their atomic numbers? Is that why you have it set up to possibly allow mapping multiple `nequip` types to one OpenMM atomic number?
deal with different unit conversions, currently angstrom and kcal/mol from toluene example are hardcoded
👍 yep, NequIP/Allegro is entirely agnostic to units and preserves those of the training data (as you've seen), so a user-facing option in the OpenMM wrapper seems appropriate.
And finally, some random things:
https://github.com/sef43/openmm-ml/blob/nequip/openmmml/models/nequippotential.py#L128
`batch` is not required when there is only one example in the batch, and should be omitted.
Am I right in understanding that in OpenMM atom type == atomic number, and that OpenMM only reasons about atoms in terms of their atomic numbers? Is that why you have it set up to possibly allow mapping multiple `nequip` types to one OpenMM atomic number?
I wouldn't read anything into what I have currently done, I need to understand a bit more about the nequip types!
Each particle in OpenMM is an `Atom` object which will be a specific `Element`, so you can go `atom.element.atomic_number` to get the atomic number and `atom.element.symbol` to get the chemical symbol.
As I understand it, nequip types are zero indexed and user defined, so there can be multiple types for the same chemical element. OpenMM is not going to know about this from reading in files (its own concept of atom types and atom classes is tied to which ForceField you are using). The most flexible way would be to explicitly specify an array of length Natoms containing the nequip types of each atom and pass this to the MLP constructor, but that's not very user-friendly.
We support presently PyTorch 1.11 and 1.13 (2.0 should work once released but is untested; 1.12 is broken due to PyTorch bugs).
Ah ok I will avoid PyTorch 1.12 then
As I understand nequip types are zero indexed and user defined. So there can be multiple types for the same chemical element.
This is correct, yes.
The most flexible way would be to explicitly specify an array of length Natoms containing the nequip types of each atom and pass this to the MLP constructor, but not very user friendly
I see--- that sounds like a useful optional choice to have, but I agree that it is an unpleasant interface for the common cases, which are best handled with a mapping. Probably best to restrict the mapping to a one-to-one correspondence of atomic numbers to types, though, and let more complicated cases be handled explicitly, to avoid easy-to-make but hard-to-debug issues.
(Incidentally, the common case for `nequip` is to use our `chemical_symbols` option, where the user gives something like:

```yaml
chemical_symbols:
  - H
  - C
  - O
```

which then sorts them by atomic number (https://github.com/mir-group/nequip/blob/main/nequip/data/transforms.py#L30-L36) and assigns consecutive atom types starting at zero. The other common case is `chemical_symbol_to_type`, where the mapping of atomic number to type is explicitly given. These are both train-time options, and are intentionally not propagated to deployed models, where users are expected to be explicit about their simulation setup to avoid challenging bugs from a mismatch of expectations around "reasonable" behavior.)
We can generate atom types in pretty much any way we want. The only requirement is that it has to be possible to determine them from the arguments to `addForces()`. As long as that's true, we can write whatever code is needed.
I've now added PBC support. It seems to work for the `minimal_toy_emt.yaml` system, where my definition of "working" is that it runs a stable NPT simulation. It still needs to be tested on some real systems, and it should be compared to the LAMMPS version.
I have set the NequIP types' default behaviour to be a 1-to-1 mapping from the OpenMM atomic element/symbol, with the option of passing a list that explicitly defines the zero-indexed NequIP types of each atom. Conversion factors from the model units to OpenMM units need to be given when constructing the potential.
It should work on both GPU and CPU
I have put an example of running a deployed toy EMT model with PBC and NPT: https://github.com/sef43/openmm-ml/blob/nequip/examples/run_nequip_pbc.ipynb
This is really awesome, thanks so much @sef43!
This is great @sef43 !!! :tada: Thanks so much for putting this together.
A small point--- is there a reason to always use the O(N^2) neighborlist? It looks from the benchmarks (https://github.com/felixmusil/torch_nl) like the O(N) one becomes faster on GPU around 200 atoms, which I'd imagine would be a very straightforward heuristic to implement?
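Such a dispatch heuristic could be as small as the sketch below. The crossover value is only a rough reading of the benchmark plots, and the function/return names are made up; the CPU case is left on the dense builder as a placeholder since the linked benchmarks focus on GPU:

```python
def pick_neighborlist(n_atoms: int, on_gpu: bool, crossover: int = 200) -> str:
    """Choose between the O(N^2) all-pairs and O(N) linked-cell builders.

    Benchmarks suggest linked cells win on GPU above roughly 200 atoms;
    below that (and, as an assumption here, on CPU) the dense all-pairs
    build is used.
    """
    if on_gpu and n_atoms > crossover:
        return "linked_cell"  # O(N)
    return "all_pairs"        # O(N^2)

choice_small = pick_neighborlist(64, on_gpu=True)
choice_big = pick_neighborlist(5000, on_gpu=True)
```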
One other piece of information relevant to
https://github.com/sef43/openmm-ml/blob/nequip/openmmml/models/nequippotential.py#L98-L100
`nequip.scripts.deploy.load_deployed_model` will call `torch.set_default_dtype` appropriately based on the metadata of the deployed model (in the current development / future stable version of `nequip`), so I think you might be able to address the TODO by moving the `self.default_dtype` assignment after the `load_deployed_model` call.