Benchmark-Models-PEtab
Roadmap for the next PEtab version
- [ ] move repo to the `PEtab-dev` org
- [ ] rename the repo to `benchmark-problems` (i.e. `PEtab-dev/benchmark-problems`)
- [ ] rename the `Benchmark-Models` directory to `problems` (i.e. `PEtab-dev/benchmark-problems/problems`)
- [ ] add metadata to each problem in its PEtab YAML file (closes #25; see the metadata sketch after this list)
- [ ] create schema for such metadata
- [ ] include the objective function value evaluated at the nominal parameters (possibly closes #17, unless that is about curation itself). Also optionally the Chi2 value.
- [ ] include the DOI of the original publication
- [ ] copy all relevant metadata from model to PEtab YAML, including e.g. model ID (closes #155)
- [ ] create a script to automatically extract interesting problem "features" and produce a "problem" vs. "feature" matrix TSV file for convenience (closes #18, #19; see the feature-extraction sketch after this list). Example features (some maybe difficult to extract): "splines", "log-scale observables", "parameter-dependent initial conditions". Include all features currently in the README overview table.
- [ ] add a check for correctly-formatted simulation tables (closes #20; see the check sketch after this list)
- [ ] update all PEtab files (e.g. PEtab versions) (closes #1)
- [ ] update contribution guide (closes #32)
- [ ] auto-update CITATION.cff to match Zenodo on new "releases" (closes #164)
- [x] consolidate scripts and Python package (#169)
- [ ] require(?) PEtab Results with each problem to ensure "exact"/"easy" reproducibility
- [ ] add "Support" column to overview table, that links to each model's respective issue, where users can request model-specific support
> - [ ] include objective function value evaluated at nominal parameters

That would be super helpful. Also the gradients. That will make this collection more relevant for regular testing, not just benchmarking.