Extended Parmest Capability for weighted SSE objective
Fixes # .
Summary/Motivation:
Currently, the Parmest SSE objective does not support measurements in different units. This work adds a new capability (a weighted SSE objective) to Parmest to handle such measurements.
Changes proposed in this PR:
- Added a new weighted SSE calculation
- Added a new covariance matrix calculation for the weighted SSE objective
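For illustration, here is a minimal sketch of the weighted SSE idea (a plain-Python sketch, not the parmest implementation; the function and argument names are illustrative). Each residual is scaled by its measurement error, so measurements in different units contribute comparably to the objective.

```python
def weighted_sse(y_hat, y_obs, sigma):
    """Sum of squared residuals, each scaled by its measurement error.

    y_hat, y_obs, and sigma are sequences of the same length; sigma holds the
    measurement error in the same units as the corresponding measurement.
    """
    return sum(((yh - yo) / s) ** 2 for yh, yo, s in zip(y_hat, y_obs, sigma))
```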
Legal Acknowledgement
By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:
- I agree my contributions are submitted under the BSD license.
- I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
@adowling2 @djlaky
@slilonfe5 Here is some quick feedback
`compute_jacobian` function:
- [ ] Make this a private method by adding `_` to the function name
- [ ] Add `relative_perturbation` as an argument to the function
- [ ] In the document string, explain this is using forward (?) finite difference
- [ ] Add the solver object as an argument. You can make the default Ipopt.

Feedback on the `compute_FIM` method:
- [ ] Add `relative_tolerance` and `solver` as arguments
- [ ] Also add a check that `error_list` must be the same length as `y_hat_list`
- [ ] Add a debugging step for the linear algebra error: compute the condition number of the Jacobian matrix and print it out
- [ ] Why would you ever get a linear algebra error for just matrix multiplication? Is this check even needed?
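For reference, here is a minimal sketch of the ideas behind these suggestions: a forward finite-difference Jacobian with a relative perturbation, the `error_list` length check, and a condition-number printout. The function and argument names are illustrative assumptions, not the parmest implementation.

```python
import numpy as np

def _compute_jacobian(predict, theta, relative_perturbation=1e-6):
    """Forward finite-difference Jacobian of the predictions with respect to theta.

    `predict(theta)` returns the model predictions y_hat as a 1-D array; each
    parameter is perturbed by a step proportional to its own magnitude.
    """
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(predict(theta), dtype=float)
    jac = np.zeros((y0.size, theta.size))
    for j in range(theta.size):
        step = relative_perturbation * max(abs(theta[j]), 1.0)
        perturbed = theta.copy()
        perturbed[j] += step
        jac[:, j] = (np.asarray(predict(perturbed), dtype=float) - y0) / step
    return jac

def compute_FIM(jac, error_list, y_hat_list):
    """FIM = J^T W J with W = diag(1 / sigma^2), plus the checks suggested above."""
    if len(error_list) != len(y_hat_list):
        raise ValueError("error_list must be the same length as y_hat_list")
    # Debugging aid for linear algebra issues: a large condition number of the
    # Jacobian flags a nearly singular problem.
    print("Jacobian condition number:", np.linalg.cond(jac))
    weights = np.diag(1.0 / np.square(np.asarray(error_list, dtype=float)))
    return jac.T @ weights @ jac
```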
@adowling2 @djlaky I also updated the calculation for the normal SSE such that we can use the user-supplied measurement error if defined; otherwise, we calculate the measurement error as usual.
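A minimal sketch of that fallback logic (the helper name and the residual-based estimate are illustrative assumptions, not the exact parmest code):

```python
import math

def _measurement_error(sse, n_data, n_parameters, user_error=None):
    """Use the user-supplied measurement error when given; otherwise fall back
    to the usual estimate from the residuals."""
    if user_error is not None:
        return user_error
    # common fallback: residual variance = SSE / (data points - parameters)
    return math.sqrt(sse / (n_data - n_parameters))
```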
@slilonfe5 Once you have the tests ready, tag us for feedback. Also, I think you can skip adding this to the deprecated class.
@adowling2 @djlaky I have created a separate method (cov_est) for computing the covariance matrix, supporting three calculation methods (jacobian, kaug, and reduced_hessian). I implemented these covariance calculation methods for both the SSE and SSE_weighted objectives. Lastly, as you suggested, I did not add the new capability to the deprecated interface.
I tested these with three examples (two steady-state and one dynamic), and all work well. I have yet to write the test file for these.
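A hypothetical usage sketch of the new method (the exact `cov_est` signature, keyword name, and method-name strings are assumptions based on the description above and may differ from the final code):

```python
import pyomo.contrib.parmest.parmest as parmest

# exp_list is assumed to be a list of parmest Experiment objects built elsewhere
pest = parmest.Estimator(exp_list, obj_function="SSE_weighted")
obj_val, theta_vals = pest.theta_est()
# one of the three covariance calculation methods mentioned above
cov = pest.cov_est(method="reduced_hessian")
```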
@adowling2 @djlaky I have completed the comments from Dan.
@adowling2 @djlaky @jsiirola @blnicho @mrmundt The new capabilities we are introducing in the parmest.py file are ready for review. The test file (test_new_parmest_capabilities.py) is almost ready; I have some questions for today's meeting.
I did not review the test file at all yet - I wanted to at least give you all initial feedback to look over.
@mrmundt Thank you for leaving some starter comments. @slilonfe5 and I spoke today about moving the new tests back into the original file and removing the 'new_capabilities' testing file. But your overall comments are helpful!
@adowling2 @djlaky @mrmundt @blnicho @jsiirola
I have implemented the comments on the parmest.py and test_parmest.py files. I moved the test from the previous test_new_parmest_capabilities.py file to the test_parmest.py file. In the test_parmest.py file, the Rooney-Biegler paper example is used to test the 'SSE' and 'SSE_weighted' objectives for all covariance calculation methods ('reduced_hessian', 'finite_difference', and 'automatic_differentiation_kaug'), as well as the two cases of measurement errors (user-supplied and not supplied by the user). To keep the test file short and clean, I didn't consider the reactor design example. Also, I removed the test for the deprecated interface since this will eventually be removed. The above tests passed without issues.
However, I encountered an issue with the test for other functions, such as bootstrap and likelihood ratio. I plan to discuss it today at the Pyomo Dev meeting.
Thanks.
@adowling2 @djlaky @mrmundt @blnicho @jsiirola As I mentioned yesterday, this PR is ready for final review. All the tests have passed. Also, based on yesterday's meeting, I will create a new PR to fix the bugs in the current parmest.py file and the examples that fail when running the test_examples.py and test_scenariocreator.py files. That PR will need to be merged before this one.
After creating the new PR, please post a link here.
@adowling2 @djlaky @mrmundt @blnicho @jsiirola Here is the link to the new PR on the bug fixes in the experiment_outputs and unknown_parameters checks in the current parmest.py file, and failing examples in the test_examples.py and test_scenariocreator.py files. https://github.com/Pyomo/pyomo/pull/3635
@mrmundt Thank you for your feedback. I will work on them to make the code cleaner.
@mrmundt @blnicho @jsiirola @adowling2 @djlaky I have implemented Miranda's final review on this PR. I will work on getting this moved to "Done" in the Parmest & Pyomo.DoE Development board. Thanks
@slilonfe5 I took a look at the failing doctests and I think you just need to update some code snippets in the parmest documentation to match the API changes in your PR. This is the file and one of the doctests that's failing: https://github.com/Pyomo/pyomo/blame/10d6acc37cca0cf8bc1a5a9734516fc7fd70c989/doc/OnlineDocs/explanation/analysis/parmest/driver.rst#L86
Take a look at all the code snippets in that file and make sure they are using the updated API and won't throw deprecation warnings.
@blnicho @mrmundt @jsiirola @adowling2 @djlaky I have implemented the feedback from Bethany regarding updating the documentation files (e.g., driver.rst) to match the API changes in this PR. All the documentation tests now pass. I need a final review of the updates I made to the documentation files: driver.rst, datarec.rst, and covariance.rst. Thank you.
@blnicho @mrmundt @jsiirola A kind reminder that I need a final review of the updates I made to the documentation files. Thank you
@blnicho @mrmundt @jsiirola @adowling2 @djlaky I have implemented Miranda's final review on this PR. I think this can be moved to Reviewer Approved in the August 2025 Release dashboard.
@mrmundt @blnicho Just a kind reminder that this PR is waiting for Reviewer Approval. Thanks
@blnicho Just a kind reminder that I'm waiting for your pending comments (the ones you mentioned during Tuesday's meeting). Thanks
@blnicho @mrmundt @jsiirola @djlaky @adowling2 I have implemented Bethany's final review.
@blnicho Thanks for your comments. I think these are pretty straightforward to implement. I will get these done by tomorrow.
@blnicho @mrmundt @jsiirola @adowling2 @djlaky I have implemented Bethany's final comments on this PR. I think this is ready to be merged.
@blnicho @mrmundt Just a kind reminder that we need this PR to be merged ahead of the PSE Summit next week.
@mrmundt, I think we may need to re-run the one test that failed during the Python package installation step.
Codecov Report
:x: Patch coverage is 91.33574% with 24 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 89.19%. Comparing base (0a5479f) to head (a2d4318).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| pyomo/contrib/parmest/parmest.py | 91.04% | 24 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff            @@
##             main    #3535     +/-   ##
=========================================
  Coverage   89.19%   89.19%
=========================================
  Files         892      892
  Lines      103101   103329    +228
=========================================
+ Hits        91957    92165    +208
- Misses      11144    11164     +20
```
| Flag | Coverage Δ | |
|---|---|---|
| builders | 26.69% <7.58%> (-0.03%) | :arrow_down: |
| default | 85.80% <91.33%> (?) | |
| expensive | 34.05% <24.18%> (?) | |
| linux | 86.96% <91.27%> (-1.96%) | :arrow_down: |
| linux_other | 86.96% <91.27%> (+<0.01%) | :arrow_up: |
| osx | 83.31% <91.27%> (+0.01%) | :arrow_up: |
| win | 85.18% <91.27%> (+0.01%) | :arrow_up: |
| win_other | 85.18% <91.27%> (+0.01%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
@mrmundt @jsiirola I think everything looks good
@mrmundt @jsiirola I'm not on my laptop right now, but the tests are failing because of the if statements in `theta_est()`. In `theta_est()`, the default for `calc_cov` is `NOTSET`. In the `if` and `elif` statements, we may want to use `calc_cov is True` and `calc_cov is not True`, respectively. If we use `calc_cov` and `not calc_cov`, this will fail when `calc_cov=NOTSET` (i.e., the user does not supply `calc_cov`). This error will come from the deprecated interface.
Alternatively, the `if` and `elif` statements can be nested inside a first `if` statement that checks whether `calc_cov is not NOTSET`.
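A minimal sketch of the two options, with a local `object()` standing in for Pyomo's `NOTSET` sentinel and illustrative function names:

```python
NOTSET = object()  # stand-in for Pyomo's NOTSET sentinel

def theta_est(calc_cov=NOTSET):
    # Option 1: compare against True explicitly, so the sentinel (which is
    # neither True nor False) never selects the covariance branch by accident.
    if calc_cov is True:
        pass  # deprecated path: compute the covariance matrix
    elif calc_cov is not True:
        pass  # calc_cov was False or not supplied: skip the covariance matrix

def theta_est_alternative(calc_cov=NOTSET):
    # Option 2: only inspect calc_cov after confirming the user supplied it.
    if calc_cov is not NOTSET:
        if calc_cov:
            pass  # compute the covariance matrix
        else:
            pass  # the user explicitly asked to skip it
```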