
Roadmap for the next PEtab version

Open dilpath opened this issue 1 year ago • 1 comment

  • [ ] move repo to PEtab-dev org
  • [ ] rename repo benchmark-problems (i.e. PEtab-dev/benchmark-problems)
  • [ ] rename Benchmark-Models directory to problems (i.e. PEtab-dev/benchmark-problems/problems)
  • [ ] add metadata to each problem in its PEtab YAML file (closes #25); see the metadata sketch after this list
    • [ ] create a schema for this metadata
    • [ ] include the objective function value evaluated at the nominal parameters (possibly closes #17, unless that issue is about curation itself); optionally also the Chi2 value
    • [ ] include the DOI of the original publication
    • [ ] copy all relevant metadata from the model to the PEtab YAML, including e.g. the model ID (closes #155)
  • [ ] create a script to automatically extract interesting problem "features" and produce a "problem" vs. "feature" matrix TSV file for convenience (closes #18, #19); see the sketch after this list. Example features (maybe difficult to extract): "splines", "log-scale observables", "parameter-dependent initial conditions". Include all features currently in the README overview table.
  • [ ] add a check for correctly formatted simulation tables (closes #20); see the sketch after this list
  • [ ] update all PEtab files (e.g. PEtab versions) (closes #1)
  • [ ] update contribution guide (closes #32)
  • [ ] auto-update CITATION.cff to match Zenodo on new "releases" (closes #164)
  • [x] consolidate scripts and Python package (#169)
  • [ ] require(?) PEtab Results with each problem to ensure "exact"/"easy" reproducibility
  • [ ] add "Support" column to overview table, that links to each model's respective issue, where users can request model-specific support

dilpath · Apr 18 '24 12:04

  • [ ] include objective function value evaluated at nominal parameters

That would be super helpful. Also the gradients. That will make this collection more relevant for regular testing, not just benchmarking.
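
A rough sketch of how such reference values (and gradients) could be computed, assuming pyPESTO with its AMICI backend; the path and calls are illustrative, not a settled recipe for this repository.

```python
# Sketch: evaluate the objective and its gradient at the nominal parameters,
# assuming pyPESTO with the AMICI backend. Path and calls are illustrative only.
import pypesto.petab

importer = pypesto.petab.PetabImporter.from_yaml(
    "Benchmark-Models/Boehm_JProteomeRes2014/Boehm_JProteomeRes2014.yaml"
)
pypesto_problem = importer.create_problem()

# Nominal values of the estimated parameters, on parameter scale.
x_nominal = importer.petab_problem.x_nominal_free_scaled

fval, grad = pypesto_problem.objective(x_nominal, sensi_orders=(0, 1))
print("objective at nominal parameters:", fval)
print("gradient at nominal parameters:", grad)
```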

dweindl · Sep 25 '24 07:09