Unified entry point for models and parameter sets
Description
Modified the existing parameter_sets API and unified entry points to accommodate model entry points. This is an extension of the existing model entry points in the pybamm-cookie project; see the relevant discussion for background.
Fixes # (issue)
Type of change
Please add a line in the relevant section of CHANGELOG.md to document the change (include PR #) - note reverse order of PR #s. If necessary, also add to the list of breaking changes.
- [x] New feature (non-breaking change which adds functionality)
- [ ] Optimization (back-end change that speeds up the code)
- [ ] Bug fix (non-breaking change which fixes an issue)
Key checklist:
- [x] No style issues: `$ pre-commit run` (or `$ nox -s pre-commit`) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
- [x] All tests pass: `$ python run-tests.py --all` (or `$ nox -s tests`)
- [ ] The documentation builds: `$ python run-tests.py --doctest` (or `$ nox -s doctests`)
You can run integration tests, unit tests, and doctests together using `$ python run-tests.py --quick` (or `$ nox -s quick`).
Further checks:
- [ ] Code is commented, particularly in hard-to-understand areas
- [ ] Tests added that prove fix is effective or that feature works
Codecov Report
:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 99.10%. Comparing base (56450f8) to head (6b05386).
:warning: Report is 176 commits behind head on develop.
Additional details and impacted files
@@ Coverage Diff @@
## develop #4490 +/- ##
========================================
Coverage 99.10% 99.10%
========================================
Files 306 307 +1
Lines 23671 23686 +15
========================================
+ Hits 23458 23473 +15
Misses 213 213
:umbrella: View full report in Codecov by Sentry.
@santacodes Are you still working on this?
Yes I am, it has yet to be reviewed, I will update this branch and request a review again.
Resolving merge conflicts on my branch was causing a lot of problems: for some reason the merge was staging many files I hadn't modified. Since it was a small change, I manually wrote a patch file and applied the patch after resetting this branch, so the history is gone. This PR still needs a review to get it merged, and it is directly related to https://github.com/pybamm-team/pybamm.org/pull/258, so it might be good if we have a base entry points API before the GSoC timeline.
Bumping this @valentinsulzer @agriyakhetarpal @arjxn-py as I cannot request a re-review
This implementation looks good to me! My only comment is that the user should be able to pass args and kwargs to the model, i.e. `pybamm.Model("SPM", options)` returns `pybamm.lithium_ion.SPM(options)`. But let's keep it as a general `*args, **kwargs` so that developers can enable other arguments beyond just `options` if they want.
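The forwarding behaviour discussed above can be sketched as a small factory that resolves a model name and passes through any arguments. This is an illustrative sketch only; the registry, the stand-in `SPM` class, and the `Model` function here are simplified assumptions, not PyBaMM's actual implementation.

```python
class SPM:
    """Stand-in for pybamm.lithium_ion.SPM (illustrative only)."""

    def __init__(self, options=None, **kwargs):
        self.options = options or {}
        self.extra = kwargs


# Hypothetical registry mapping entry-point names to model classes.
_MODEL_REGISTRY = {"SPM": SPM}


def Model(name, *args, **kwargs):
    """Look up `name` and instantiate it, forwarding *args and **kwargs
    so that model authors can accept any signature, not just `options`."""
    try:
        cls = _MODEL_REGISTRY[name]
    except KeyError:
        raise ValueError(f"Unknown model {name!r}") from None
    return cls(*args, **kwargs)


model = Model("SPM", {"thermal": "lumped"})
print(type(model).__name__, model.options)
```

Keeping the signature as `*args, **kwargs` (rather than hard-coding an `options` parameter) is what lets third-party entry-point models define whatever constructor arguments they need.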
Added this in the latest commit, works too! :)
I have missed a couple of GSoC meetings, but will we be encouraging users to use the new API? That is, should we deprecate the old model API, or just encourage the new one without deprecating it, to keep backward compatibility?
If we want users to use the new API, we should update our examples (which @medha-14 should be able to pick up in a new PR).
Hi @Saransh-cpp, filling you in: I think this is not the idea right now, as we would like to keep both public APIs as they will be used differently. The idea is that models added and validated in the PyBaMM repository will be added to pybamm.lithium_ion, pybamm.lead_acid and other namespaces, and the pybamm.Model() API is for generic/third-party models that users can load through entry points. We will be sticking to the same API for the in-house models for now. Some follow-ups here for @medha-14 will be to:
- create an example package using pybamm-cookie, preferably in the PyBaMM GitHub organisation, with an external model (simpler ones such as the reservoir model, or the heat equation, or so on) and an external parameter set to go along with it, both exposed as entry points
- write an example notebook for the PyBaMM repo and link to it in the third-party parameters page (see also #4406); the notebook should install that example package and teach users how to use this API
- update the pybamm-cookie repository to use these `pybamm.Model()` APIs when this PR is included in the next PyBaMM release, and coordinate with @santacodes to create a new release for pybamm-cookie when done
- in the meantime, the more extended discussion on how to better distribute models (i.e., as an alternative to installable Python packages that expose entry points for the PyBaMM models group, whether through Zarr stores, object storage such as S3, or JSON exposed via an HTTP REST API) can continue through the rest of the GSoC timeline. There is an attempt in #5056 to serialise non-discretised PyBaMM models, if I recall correctly.
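For context on the entry-points mechanism the follow-ups above rely on: a package generated from pybamm-cookie declares its models in `pyproject.toml`, and the host library discovers them at runtime via `importlib.metadata`. The sketch below is a generic illustration; the group name `"pybamm_models"` and the example entry-point declaration are assumptions for illustration, not necessarily the names PyBaMM uses.

```python
import importlib.metadata as md


def discover_models(group="pybamm_models"):
    """Return a mapping of entry-point name -> entry point object for
    all installed packages that register models under `group`.

    The group name here is a hypothetical example."""
    eps = md.entry_points()
    if hasattr(eps, "select"):  # Python 3.10+ selectable interface
        selected = eps.select(group=group)
    else:  # Python 3.9 returns a dict keyed by group name
        selected = eps.get(group, [])
    return {ep.name: ep for ep in selected}


# An external package would declare its model in pyproject.toml like:
#
#   [project.entry-points."pybamm_models"]
#   ReservoirModel = "my_package.models:ReservoirModel"
#
# after which discover_models() would pick it up without the host
# library knowing about the package in advance.
print(sorted(discover_models()))
```

Each discovered entry point can then be loaded lazily with `ep.load()`, which is what makes this approach suitable for third-party models and parameter sets.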
Thanks for the detailed explanation, @agriyakhetarpal! Everything is very clear now.
Slightly deviating from my previous response in https://github.com/pybamm-team/PyBaMM/pull/4490#issuecomment-2975814188 here:
This PR is going to be merged soon enough as it is nearing its completion, but it would be nicer if we could find out any underlying issues with the implementation and iterate faster. I propose that we should not wait until a PyBaMM release with this to test changes downstream, and we should start a PR in the https://github.com/pybamm-team/pybamm-cookie repository (which can be kept as draft until a PyBaMM release comes out) which installs PyBaMM from this branch from @santacodes's fork and uses these APIs to instantiate external models and parameter sets. It would give us a chance to test out these changes and see if there's anything else we require in this PR or as a follow-up to it.
@medha-14, do you think you could get started on this?
Sure, I'll get started on it right away.
I forgot to remove parameter_sets.py, so I removed it in the latest commit; it is no longer used anywhere after the new entry points mechanism.
Short comment for now: won't removing it introduce a breaking change, @santacodes? We should still expose pybamm.parameter_sets until the next few releases.
@agriyakhetarpal I don't think it is a breaking change, because only the backend mechanism for accessing parameter_sets and models has changed; pybamm.parameter_sets can still be accessed and works the same way with the new entry points. I removed the file because there is no purpose in keeping it: it is not called or used anywhere in the code base, since the parameter_sets namespace is loaded through entry_points.py rather than parameter_sets.py. If removing it had changed anything, it would have failed a lot of unit tests, which is not the case.
Still shows "Merging is blocked" :(
I think it needs an approval from @kratman and @valentinsulzer as they have requested changes above.