Remove TorchANI imports for type annotations #127
base: master
Conversation
Some tests are failing. I think the type annotations are required to compile it to TorchScript. What if we use a forward reference (putting the name in quotes)? Can we get it to work then?
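For concreteness, a minimal sketch of the forward-reference idea (the wrapper class and attribute names here are hypothetical, not the actual NNPOps code):

```python
from typing import TYPE_CHECKING

import torch

if TYPE_CHECKING:
    # Seen only by static type checkers; no runtime dependency on TorchANI.
    import torchani


class ConverterWrapper(torch.nn.Module):  # hypothetical wrapper class
    def __init__(self, converter: "torchani.SpeciesConverter"):  # forward reference in quotes
        super().__init__()
        self.converter = converter
```

The module then imports without TorchANI installed; whether TorchScript can still resolve and compile the quoted annotation is exactly the open question here.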
I finally have a working environment with a compatible version of TorchANI installed locally, and I can't reproduce this; all the TorchANI-related tests pass for me. I notice that the same workflow run for #126 has similar failures. Let me experiment with this and see if I can figure out what's happening. It may be PyTorch version-related, since 2.0 is passing, 2.5 is failing, and I am running 2.9 locally.
CI specifies an ancient version of TorchANI: 2.2, which was released in 2020. What if we use a newer version?
This is another issue I've found. There are a number of breaking changes in TorchANI such that NNPOps doesn't work with newer versions of it. This migration guide details some of the changes. Although it says these changes are for TorchANI 2, the changes that break NNPOps seem to have landed somewhere between v2.2 and v2.7. Furthermore, CI uses conda-forge to install the packages, but TorchANI no longer supports conda-forge, and newer versions are available only on PyPI. Updating NNPOps to work with newer versions of TorchANI should probably happen, but in a separate PR. I am going to experiment with this one and see if I can figure out what's happening.
These errors may be happening because, for some reason, the CUDA 12 / Python 3.13 / PyTorch 2.5 CI is installing TorchANI 2.2, whereas the CUDA 11.8 / Python 3.10 / PyTorch 2.0 CI is installing TorchANI 2.2.3. The errors look like ones associated with a known TorchANI issue fixed in 2.2.2 (see #125, openmm/openmm-ml#23, aiqm/torchani#598). I don't know why the particular sets of constraints in CI cause these versions of TorchANI to be installed. I'm also not sure whether it's worth figuring out how to fix this if we want NNPOps to work with a later version of TorchANI anyway, but that will require more work.
Removes some TorchANI imports from classes. The change suggested in #44 doesn't actually fix the problem, since class bodies are executed whether or not instances are created. Because these imports are used only for type annotations, they can be removed, allowing NNPOps to be imported without TorchANI installed.
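A rough before/after sketch of the idea (class and parameter names are simplified and may not match the actual NNPOps code exactly):

```python
import torch

# Before (illustrative): the module-level import exists only to support the
# annotation, so merely importing the module fails when TorchANI is missing.
# Moving the import into the class body would not help either, because class
# bodies execute when the class is defined at import time, not when an
# instance is created.
#
#   from torchani.aev import AEVComputer
#
#   class TorchANISymmetryFunctions(torch.nn.Module):
#       def __init__(self, symmFunc: AEVComputer): ...

# After: the annotation (and the import it required) is dropped, so the module
# imports without TorchANI; TorchANI is only needed when an actual TorchANI
# object is passed in at construction time.
class TorchANISymmetryFunctions(torch.nn.Module):
    def __init__(self, symmFunc):
        super().__init__()
        self.symmFunc = symmFunc
```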