AMICI
Add sllh computation back to `petab_objective.simulate_petab`
Was removed in #1498 due to an erroneous implementation.

- parameters are now scaled within `simulate_petab`
- added FD checks, essentially copied from `pypesto.objective.ObjectiveBase.check_grad{_multi_eps}` [1]
- added a test case: a PEtab problem based on the Lotka-Volterra example in yaml2sbml [2], extended to test more of the `simulate_petab` SLLH computation with
  - multiple experimental conditions
  - preequilibration

[1] https://github.com/ICB-DCM/pyPESTO/blob/32a5daf888dd6c49fa3ca3b8b39e055b6f15ca9f/pypesto/objective/base.py#L397-L620
[2] https://github.com/yaml2sbml-dev/yaml2sbml/tree/773f4c563d90689cd974da8c114b6c7eaf316802/doc/examples/Lotka_Volterra/Lotka_Volterra_python
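For context, the finite-difference check borrowed from pyPESTO boils down to comparing an analytical gradient against central differences. The sketch below is illustrative only: it uses a standalone `check_grad` helper and a toy quadratic objective, not the PR's actual code or the pypesto API.

```python
import numpy as np

def check_grad(fun, grad, x, eps=1e-5):
    """Compare an analytical gradient against central finite differences.

    fun: scalar objective; grad: its analytical gradient;
    x: point at which to check; eps: FD step size.
    Returns the absolute error per parameter.
    """
    g = np.asarray(grad(x), dtype=float)
    fd = np.empty_like(g)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # central difference: (f(x + e_i) - f(x - e_i)) / (2 * eps)
        fd[i] = (fun(x + step) - fun(x - step)) / (2 * eps)
    return np.abs(g - fd)

# Toy check: quadratic objective with known gradient 2x
fun = lambda x: float(x @ x)
grad = lambda x: 2 * x
errors = check_grad(fun, grad, np.array([1.0, -2.0, 0.5]))
```

In the PR's test, `fun` and `grad` would instead be the PEtab log-likelihood and the SLLH returned by `simulate_petab`, checked over multiple step sizes as in `check_grad_multi_eps`.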
Codecov Report
Merging #1548 (785c403) into develop (43ae5b7) will decrease coverage by 4.62%. The diff coverage is 0.00%.
```
@@             Coverage Diff             @@
##           develop    #1548      +/-   ##
===========================================
- Coverage    76.13%   71.50%     -4.63%
===========================================
  Files           74       71        -3
  Lines        12673    12706       +33
===========================================
- Hits          9648     9085      -563
- Misses        3025     3621      +596
```
Flag | Coverage Δ |
---|---|
cpp | 72.78% <ø> (+0.01%) :arrow_up: |
petab | ? |
python | 66.81% <0.00%> (-1.55%) :arrow_down: |
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ |
---|---|
python/amici/petab_objective.py | 0.00% <0.00%> (-93.50%) :arrow_down: |
python/amici/petab_simulate.py | 0.00% <0.00%> (-100.00%) :arrow_down: |
python/amici/petab_import_pysb.py | 0.00% <0.00%> (-92.46%) :arrow_down: |
python/amici/petab_import.py | 0.00% <0.00%> (-63.72%) :arrow_down: |
python/amici/parameter_mapping.py | 40.15% <0.00%> (-37.13%) :arrow_down: |
python/amici/setuptools.py | 50.96% <0.00%> (-6.74%) :arrow_down: |
src/exception.cpp | 75.67% <0.00%> (-5.41%) :arrow_down: |
python/amici/custom_commands.py | 34.37% <0.00%> (-3.13%) :arrow_down: |
python/amici/pysb_import.py | 91.84% <0.00%> (-2.72%) :arrow_down: |
... and 7 more |
LGTM, but please wait for others' feedback.
Why was the extra test model required? It would keep things simpler if we could use a benchmark collection model that's already used elsewhere.
The extra test model exists because a suitable benchmark problem may not exist. I tested a few with pyPESTO, and some errors were always high. I could look for more (I may also have been using an optimized vector).
The extra test model might still be a good idea, since many/all PEtab features could be added to it that do not all exist in a single benchmark problem simultaneously.
Not sure whether the implementation here is correct. Might also explain why errors are high for petab test models. I agree with Daniel that we should ideally get this working on all of the examples from the benchmark collection as well.
Sure, then I'll compare the gradients for all benchmark problems between AMICI and pyPESTO. Might take a week or so due to other commitments.
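One way such a comparison could be scored is an elementwise relative error between the two gradient vectors, with a guard against division by near-zero entries. This `grad_mismatch` helper is a hypothetical sketch, not part of AMICI or pyPESTO:

```python
import numpy as np

def grad_mismatch(g_a, g_b, atol=1e-8):
    """Elementwise relative difference between two gradient vectors,
    e.g. AMICI SLLH vs. a pyPESTO finite-difference gradient.

    The denominator is floored at `atol` so that entries where both
    gradients are (near-)zero do not blow up the relative error.
    """
    g_a = np.asarray(g_a, dtype=float)
    g_b = np.asarray(g_b, dtype=float)
    denom = np.maximum(np.maximum(np.abs(g_a), np.abs(g_b)), atol)
    return np.abs(g_a - g_b) / denom

# Example: near-identical gradients yield a tiny mismatch
rel = grad_mismatch([1.0, 2.0, 0.0], [1.0, 2.0 + 1e-9, 0.0])
```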