Support numpy2
Purpose
Closes #408.
I removed the runtime requirement of numpy<2; instead, we now always build against numpy2, as suggested by the numpy docs.
Changes
- support numpy2
- build against numpy2
- update the IPOPT C/Python interface using numpy/tools/replace_old_macros.sed (see the sketch after this list)
- added matrix test for CI on Windows
- added `setuptools` as a runtime dependency (missing dependency; unrelated to the numpy2 fixes)
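For context on the macro update above, here is a rough, hypothetical Python stand-in for what numpy/tools/replace_old_macros.sed does: it renames deprecated NumPy C-API macros to their NPY_ARRAY_* equivalents in a C source file. The macro pairs shown and the file path in the usage comment are illustrative only, not an exhaustive or verified list.

```python
# Illustrative Python equivalent of numpy/tools/replace_old_macros.sed (not the
# actual tool). It renames a few deprecated NumPy C-API macros to their
# NPY_ARRAY_* replacements in a C source file; the real sed script covers more names.
import re
from pathlib import Path

# Example renames of the kind the sed script performs (subset, for illustration).
DEPRECATED_MACROS = {
    "NPY_IN_ARRAY": "NPY_ARRAY_IN_ARRAY",
    "NPY_OUT_ARRAY": "NPY_ARRAY_OUT_ARRAY",
    "NPY_INOUT_ARRAY": "NPY_ARRAY_INOUT_ARRAY",
}


def replace_old_macros(path: Path) -> None:
    """Rewrite deprecated NumPy macro names in a C source file, in place."""
    source = path.read_text()
    for old, new in DEPRECATED_MACROS.items():
        # Word boundaries keep already-updated NPY_ARRAY_* names untouched.
        source = re.sub(rf"\b{old}\b", new, source)
    path.write_text(source)


# Hypothetical usage against the IPOPT wrapper source (path is a placeholder):
# replace_old_macros(Path("pyoptsparse/pyIPOPT/src/pyipoptcoremodule.c"))
```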
Expected time until merged
As long as it takes for CI to pass. While we don't test against numpy2 here (we could add a CI job for it on Windows), we will be testing it on conda in the future.
Type of change
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (non-backwards-compatible fix or feature)
- [ ] Code style update (formatting, renaming)
- [ ] Refactoring (no functional changes, no API changes)
- [ ] Documentation update
- [x] Maintenance update
- [ ] Other (please describe)
Testing
Checklist
- [ ] I have run `flake8` and `black` to make sure the Python code adheres to PEP-8 and is consistently formatted
- [ ] I have formatted the Fortran code with `fprettify` or C/C++ code with `clang-format` as applicable
- [ ] I have run unit and regression tests which pass locally with my changes
- [ ] I have added new tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation
Hmm so all the builds are failing due to the numpy lock file. Not sure how we should address this, since build-time dependency needs to be 2.0 but runtime version should be the one pinned.
I think the fix for this would be to create a separate .yml file that gets activated after pyoptsparse has been built. At least locally, I was able to (partially) fix the Windows build that uses numpy>=2.0 in the conda environment by creating a separate .yml file that I activated at runtime, and only three tests failed:
The following tests failed:
test_hs015.py:TestHS15.test_optimization_2_CONMIN
test_hs071.py:TestHS71.test_optimization_4_CONMIN
test_rosenbrock.py:TestRosenbrock.test_optimization_3_CONMIN
Passed: 38
Failed: 3
Skipped: 34
Ran 75 tests using 1 processes
Wall clock time: 00:02:8.67
I'm still familiarizing myself with the project, so I don't know why those three tests are failing, but I can push a commit with the changes I've made if you'd like @ewu63.
Let me clarify - my comment above was regarding the Docker-based Linux tests. We use a pip constraint file in our toolchain and that's creating some issues. This is an issue with our toolchain and something for us maintainers (CC @eirikurj) to sort out.
The step you mentioned is indeed necessary, and already done as part of this PR -- see here. The test is failing due to some regression errors that I have not had time to look into (which you can view by opening the test logs on GHA), and is unrelated to the build-time issues discussed above. Interestingly though, those errors are different from what you're seeing locally.
Ah, I see. Thanks for that clarification.
As another data point: when I initially pulled the branch and tried to run the tests, I got the same failures that appear in the build-windows action log. I wonder if the discrepancy between that and what I am seeing locally comes from reusing the same environment and just installing a different version of numpy into it. In my experience with conda, the package manager sometimes won't upgrade/downgrade dependencies in an existing environment, so if pyoptsparse is built with numpy>=2.0 and then tested with a lower version, the packages that numpy>=2.0 pulled in may be left behind because the solver considers them satisfied, which can cause weird behaviour.
Just a friendly ping to see if there are any updates on this PR. It looks like the assert on line 148 in tests.test_hs015.test_ipopt is a bit optimistic with numpy 2: it now takes 45 iterations instead of 12. Is that a deal breaker? I can submit a PR that changes this quantity depending on the version of numpy if that would help, but I'm not too familiar with what is being tested here.
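If that reference value does get relaxed, a NumPy-version-gated expectation along these lines could work. This is a minimal sketch, not the actual pyOptSparse test code: the helper name is made up, and the 12 vs. 45 counts are simply the numbers reported above.

```python
# Minimal sketch of a NumPy-version-dependent expected iteration count for the
# HS015 IPOPT test. The helper name is hypothetical; 12 vs. 45 are the counts
# reported in this thread, and gating on the NumPy major version is an assumption.
import numpy as np


def expected_ipopt_iterations() -> int:
    """Reference IPOPT iteration count for test_hs015, by installed NumPy major version."""
    numpy_major = int(np.__version__.split(".")[0])
    return 45 if numpy_major >= 2 else 12
```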
Codecov Report
Attention: Patch coverage is 50.00000% with 1 line in your changes missing coverage. Please review.
Project coverage is 74.98%. Comparing base (`f66e9fb`) to head (`d28175c`).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| pyoptsparse/pyNSGA2/pyNSGA2.py | 0.00% | 1 Missing :warning: |
@@ Coverage Diff @@
## main #411 +/- ##
=======================================
Coverage 74.98% 74.98%
=======================================
Files 22 22
Lines 3338 3338
=======================================
Hits 2503 2503
Misses 835 835
Okay, looks like I got past the pip-constraints file issue (which is frankly a bit hacky, CC @eirikurj). The Windows build (which is the only one testing numpy2 right now) is failing in a couple places, which probably requires further investigation. Hopefully no f2py shenanigans but we shall see. Suggested next steps for debugging:
- I am not able to reproduce the numpy2 failures on my Linux machine locally, but maybe we should do this in Docker and see if we reproduce any of it.
- Determine if there is a significant change in the behaviour of some optimizers. Maybe the heuristics need to be changed.
In any case, I doubt I have time to work on this in the next few weeks, so I suggest someone in the lab take a look, maybe @marcomangano or @eirikurj?
Ok I added numpy2 to the test matrix for Windows but cannot get it to pass. In order not to hold up this PR any further I suggest that we push this through, and try to get the Windows test coverage through conda-forge. Windows is technically not an officially supported platform anyhow, so I think I'd rather get numpy2 to downstream users first.
As for versions, I suggest that we release the current main branch as 2.12.1, and release this PR as 2.12.2 in order to isolate some of the recent changes.
@ewu63 any updates on this? If you could add me to your fork, I could help out with this.
It's holding the topfarm team back from migrating to numpy>=2.0.
Is there anything I can do to help with this?
Alright, I think I've reached the limit of my non-existent windows compilation knowledge. If anyone can help get this windows GHA build to pass I think we are ready. FWIW this CI job is just to make sure the code modifications here are compatible with Windows, the actual windows build is delegated to the conda-forge feedstock.
Thanks @whophil for getting the Windows build working. This is ready @marcomangano @eirikurj. FYI we are still not testing any of this on numpy2 (though it works locally); I suppose the actual CI test will have to happen either
- when we update the MACH stack to numpy2 (any timeline there?)
- purely rely on conda-forge
> purely rely on conda-forge
@ewu63 I would not rely on conda-forge for numpy2 testing. Typically the tests on conda-forge are for testing the package, not necessarily running the whole comprehensive test suite (although we are currently doing that with pyoptsparse-feedstock). Further, you can't run matrix tests (e.g., numpy version) on conda-forge, so that should live in this project eventually.
@whophil makes sense. The current CI infrastructure mirrors that of the other MDOLab packages (i.e. using a shared set of Docker images with pre-defined dependency versions), and the rest of them are not numpy2 ready yet. Given the unique nature of pyOptSparse
- depends minimally on the rest of MACH tools (only baseclasses)
- does not need the specialized CI treatment (e.g. real and complex builds)
- has a wider user base than just MACH
- (for now) has a broader set of supported dep versions
maybe we should consider moving away from that setup (or, more likely, running in parallel with it) and giving pyOptSparse its own CI infra purely in GHA. Something to think about @pyoptsparse-maintainers
Thanks everyone for your patience, v2.13.0 has now been tagged and released, and the conda package should follow shortly :tada: