
Performance Regression Testing

Open ptheywood opened this issue 4 years ago • 2 comments

We should introduce a convenient way of tracking the performance of FLAME GPU as commits progress.

This should cover both the examples suite (although changing behaviours could potentially alter performance) and a dedicated suite of model performance tests with the express purpose of having performance recorded.

Effectively this would be a secondary test suite which records performance for a given commit. Each model should be executed several times and the average performance recorded, along with the hardware it was executed on, the compiler version(s) and the driver version(s).

This may involve a database of results? Possibly the use of an external tool to switch.
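As a rough illustration, a minimal sketch of such a harness (not the real pyflamegpu API; `run_model` is a hypothetical placeholder for building and stepping a FLAME GPU model): it repeats each run, averages the wall-clock time, and records the commit hash, GPU, driver and compiler versions alongside the result.

```python
# benchmark_runner.py -- sketch of a per-commit performance recording harness.
# `run_model` is a placeholder for whatever builds and runs a FLAME GPU model.
import json
import platform
import statistics
import subprocess
import time
from pathlib import Path

REPEATS = 5
RESULTS_FILE = Path("benchmark_results.jsonl")


def run_model():
    """Placeholder: build and execute a FLAME GPU model for a fixed step count."""
    time.sleep(0.01)  # stand-in for the actual simulation


def capture(cmd):
    """Return the stripped stdout of a shell command, or None if it fails."""
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return None


def environment():
    """Record the hardware, driver and compiler versions the run executed on."""
    return {
        "host": platform.node(),
        "commit": capture(["git", "rev-parse", "HEAD"]),
        "gpu": capture(["nvidia-smi", "--query-gpu=name,driver_version",
                        "--format=csv,noheader"]),
        "nvcc": capture(["nvcc", "--version"]),
    }


def benchmark(name, fn, repeats=REPEATS):
    """Run `fn` several times and append the averaged timing to the results file."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    record = {
        "model": name,
        "repeats": repeats,
        "mean_s": statistics.mean(timings),
        "stdev_s": statistics.stdev(timings),
        **environment(),
    }
    with RESULTS_FILE.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record


if __name__ == "__main__":
    print(benchmark("example_model", run_model))
```

Appending one JSON line per run to a shared file (or a proper database) would then allow plotting mean runtime against commit hash to spot regressions.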

ptheywood avatar May 19 '20 09:05 ptheywood

AiiDA is an interesting external tool, also relevant for parameter sweeping, ensembles, checkpoints etc.:

- Documentation: https://aiida.readthedocs.io/projects/aiida-core/en/latest/
- Preprint: arXiv:2003.12476 [cs.DC] ([v1] Tue, 24 Mar 2020 12:06:12 UTC)
- Paper: "AiiDA: automated interactive infrastructure and database for computational science", https://doi.org/10.1016/j.commatsci.2015.09.013

It works with Slurm et al. (https://slurm.schedmd.com/). From the AiiDA site:

> HPC interface: Move your calculations to a different computer by changing one line of code. AiiDA is compatible with schedulers like SLURM, PBS Pro, torque, SGE or LSF out of the box.
>
> Plugin interface: Extend AiiDA with plugins for new simulation codes (input generation & parsing), data types, schedulers, transport modes and more.

When replacing the current XML initialization and output, a plugin for AiiDA would be worth considering.

dentarthur avatar Jun 01 '20 05:06 dentarthur

pytest-benchmark might be relevant if we want to add something similar to the Python interface:

https://pypi.org/project/pytest-benchmark/
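A sketch of how that could look, again using a hypothetical `run_model()` placeholder rather than the real pyflamegpu API; the `benchmark` fixture handles the repeated runs and timing statistics.

```python
# test_performance.py -- sketch of a pytest-benchmark based regression test.
import time

import pytest


def run_model():
    """Placeholder: build and step a FLAME GPU model via the Python interface."""
    time.sleep(0.01)  # stand-in for the actual simulation


@pytest.mark.benchmark(group="examples")
def test_example_model_performance(benchmark):
    # pytest-benchmark calls run_model repeatedly and records timing statistics.
    benchmark(run_model)
```

Running with `pytest --benchmark-autosave` stores results locally, and `--benchmark-compare` can then be used to compare saved runs between commits.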

ptheywood avatar Aug 24 '21 10:08 ptheywood