Alby M.
I'm happy to now be able to close this issue thanks to the great work of @sef43 over in issue #288, building on the nice work of @felixmusil in `torch_nl`.
This should be resolved by the above comment; just noting that, at present, our recommended PyTorch versions are 1.11 and 1.13. 1.12 is broken due to upstream PyTorch bugs.
Hi @felixmusil, I came back to this again after a while and thought I would commit it (I've added a commit off of current `develop` with you...
One vote for a CPU fallback for `torch.bincount`. Is there any reason, given the unified memory architecture, that every op not implemented on Metal cannot just fall back to the...
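For reference, recent PyTorch builds already ship an opt-in, global CPU fallback for ops not implemented on the MPS (Metal) backend, controlled by the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable. A minimal sketch (which ops actually need the fallback depends on your PyTorch version):

```python
# Minimal sketch: enabling PyTorch's opt-in CPU fallback for MPS.
# Set the environment variable before `torch` is imported.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

if torch.backends.mps.is_available():
    x = torch.randint(0, 10, (100,), device="mps")
    # If `bincount` is not implemented for MPS in this build, the fallback
    # runs it on CPU (with a one-time warning) and copies the result back.
    counts = torch.bincount(x)
    print(counts)
```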
Hi @mhellstr, that would be something you'd implement as a custom loss function; see the example extension here: https://github.com/mir-group/nequip-example-extension. You could either make a custom loss that directly depends on the force...
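For concreteness, here is a rough, untested sketch of what such a custom loss might look like, following the pattern of the example extension linked above. The function name, the `(pred, ref, key)` calling convention, and the registration mechanism are assumptions, not a verified nequip API; check the current `nequip` source for the exact interface.

```python
import torch

# Hypothetical custom loss: penalize differences in per-atom force
# magnitudes rather than force components. All names here are assumptions
# about nequip's loss interface; adapt to the actual signature.
def force_magnitude_loss(pred: dict, ref: dict, key: str = "forces") -> torch.Tensor:
    pred_mag = pred[key].norm(dim=-1)  # per-atom |F| from the model
    ref_mag = ref[key].norm(dim=-1)    # per-atom |F| from the reference data
    return torch.nn.functional.mse_loss(pred_mag, ref_mag)
```

In the config you would then point the relevant loss coefficient at this function, as demonstrated in the example extension repository.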
This looks like you've edited the code to include `logging.debug` calls in the model?
I see. What hardware things? Please note the following upstream issue: https://github.com/mir-group/nequip/discussions/311. If you do *or do not* encounter this issue, please post in that thread so we can continue...
You could do inefficient MD by manually constructing a `NequIPCalculator` from an uncompiled PyTorch model (built using `model_from_config` and `.load_state_dict()`, and then passed to the constructor rather than going through `from_deployed_model`). **This will...
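A rough, untested sketch of what that could look like is below. The config loader, checkpoint file name, and the `NequIPCalculator` constructor arguments (`r_max`, `device`) are assumptions about the nequip API for illustration only; check them against the source for your version.

```python
import torch
from nequip.ase import NequIPCalculator
from nequip.model import model_from_config
from nequip.utils import Config  # assumption: config loader lives here

# Rebuild the (uncompiled) model from the training config.
config = Config.from_file("config.yaml")            # path is illustrative
model = model_from_config(config, initialize=False)

# Load trained weights from a checkpoint (file name is illustrative).
state_dict = torch.load("best_model.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

# Construct the calculator directly instead of via `from_deployed_model`.
calc = NequIPCalculator(
    model=model,
    r_max=float(config["r_max"]),
    device="cpu",
)
```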
> Thank you, will try and get back with more details. Thanks. It's possible that there is a missing `@torch.jit.unused`, in which case a quick code change will make it...
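For context, `@torch.jit.unused` tells the TorchScript compiler to skip a method it cannot compile, which is the kind of quick fix referred to above. A generic (non-nequip) illustration:

```python
import torch


class Example(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 2 * x

    @torch.jit.unused
    def python_only_helper(self, x: torch.Tensor):
        # Uses Python features TorchScript cannot compile; the decorator
        # makes `torch.jit.script` ignore it instead of failing. Calling it
        # from compiled code would raise an error at runtime.
        return {repr(i): v for i, v in enumerate(x.tolist())}


scripted = torch.jit.script(Example())  # compiles without touching the helper
```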
No, you shouldn't need to.