Conversation

epretti (Member) commented Dec 8, 2025

Removes some TorchANI imports from classes. The change suggested in #44 doesn't actually fix the problem, since class bodies are executed whether or not any instances are created. Because these imports are only used for type annotations, they can be removed, allowing NNPOps to be imported without TorchANI installed.
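To illustrate the issue (a minimal sketch with hypothetical names, not the actual NNPOps code): an annotation in a class body is evaluated when the class is defined, which happens as soon as the module is imported, so an import needed only for that annotation becomes a hard dependency of the whole package.

```python
# Minimal sketch of the pattern being removed; class and attribute names are
# hypothetical.
import torch

# Before: the import below runs whenever the module is imported, because the
# class body (including its annotations) executes at class-definition time,
# not when an instance is created.
#
#     import torchani
#
#     class OptimizedANIModel(torch.nn.Module):
#         converter: torchani.nn.SpeciesConverter  # evaluating this needs torchani
#
# After: with the annotation removed, TorchANI is only needed when the class
# is actually used, so NNPOps can be imported without TorchANI installed.
class OptimizedANIModel(torch.nn.Module):
    def __init__(self, converter):
        super().__init__()
        # `converter` is still expected to be a TorchANI object at runtime.
        self.converter = converter
```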

peastman (Member) commented Dec 9, 2025

Some tests are failing. I think the type annotations are required to compile the classes to TorchScript.

What if we use a forward reference (putting the name in quotes)? Can we get it to work then?
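A minimal sketch of what that could look like (hypothetical names; whether torch.jit.script can still resolve the quoted type without a module-level import is exactly what the failing tests would need to confirm):

```python
# Hypothetical sketch of the forward-reference idea; not the actual NNPOps code.
from typing import TYPE_CHECKING

import torch

if TYPE_CHECKING:
    # Seen by static type checkers only; never executed at runtime.
    import torchani


class OptimizedANIModel(torch.nn.Module):
    def __init__(self, converter: "torchani.nn.SpeciesConverter"):
        super().__init__()
        # The quoted annotation avoids evaluating torchani at definition time,
        # but TorchScript compilation may still need to resolve the name.
        self.converter = converter
```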

epretti (Member, Author) commented Dec 9, 2025

I finally have a working environment with a compatible version of TorchANI installed locally, and I can't reproduce this; all the TorchANI-related tests pass for me. I notice the same workflow run for #126 has similar failures. Let me experiment and see if I can figure out what's happening. It may be PyTorch-version-related, since 2.0 is passing, 2.5 is failing, and I am running 2.9 locally.

peastman (Member) commented Dec 9, 2025

CI specifies an ancient version of TorchANI: 2.2, which was released in 2020. What if we use a newer version?

epretti (Member, Author) commented Dec 11, 2025

That's another issue I've found: there are a number of breaking changes in TorchANI, so NNPOps doesn't work with newer versions of it. This migration guide details some of the changes. Although it describes them as changes for TorchANI 2, the ones that break NNPOps appear to have landed somewhere between v2.2 and v2.7. Furthermore, CI installs the packages from conda-forge, but TorchANI no longer supports conda-forge and newer versions are available only on PyPI.
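For reference, a runtime guard along these lines (purely hypothetical, not part of this PR) would at least surface the incompatibility with a clear message; the cutoff is illustrative, since the exact release that breaks NNPOps isn't pinned down here:

```python
# Hypothetical version guard; the 2.3 cutoff is illustrative only.
from importlib.metadata import PackageNotFoundError, version


def check_torchani_version(max_exclusive: str = "2.3") -> None:
    """Fail early if the installed TorchANI is too new for NNPOps."""
    try:
        installed = version("torchani")
    except PackageNotFoundError as error:
        raise ImportError("TorchANI is required for the optimized ANI wrappers") from error
    # Rough parse of the major.minor components; a real guard should use
    # packaging.version for robust comparison.
    if tuple(int(part) for part in installed.split(".")[:2]) >= tuple(
        int(part) for part in max_exclusive.split(".")[:2]
    ):
        raise RuntimeError(
            f"TorchANI {installed} includes breaking changes; "
            f"this version of NNPOps supports TorchANI < {max_exclusive}"
        )
```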

Updating NNPOps to work with newer versions of TorchANI should probably happen, but I think that belongs in a separate PR.

I am going to experiment with this one and see if I can figure out what's happening.

epretti marked this pull request as draft on December 11, 2025 at 19:52

epretti (Member, Author) commented Dec 15, 2025

These errors may be happening because, for some reason, the CUDA 12/Python 3.13/PyTorch 2.5 CI job installs TorchANI 2.2, whereas the CUDA 11.8/Python 3.10/PyTorch 2.0 job installs TorchANI 2.2.3. The errors look like ones associated with a known TorchANI issue fixed in 2.2.2 (see #125, openmm/openmm-ml#23, aiqm/torchani#598). I don't know why these particular sets of constraints cause those versions of TorchANI to be installed.
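As an aside, a test-side guard like this (hypothetical, not something shown in the current test suite) would turn the too-old-TorchANI case into a skip rather than a failure; pytest.importorskip compares against the module's __version__ attribute, so this assumes TorchANI exposes one:

```python
# Hypothetical guard for the TorchANI-dependent tests; 2.2.2 is the release
# mentioned above as containing the relevant fix.
import pytest

# Skip the whole module if TorchANI is missing or older than 2.2.2.
torchani = pytest.importorskip("torchani", minversion="2.2.2")


def test_torchani_available():
    # Placeholder for the real TorchANI-dependent tests.
    assert torchani is not None
```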

I'm not sure it's worth figuring out how to fix this if we want NNPOps to work with a later version of TorchANI anyway, although doing that will require more work.
