model/test for performance showcase as part of binary distribution
Overview
For the upcoming release accompanying the Frontiers paper, we will have a neuron+coreneuron+nmodl wheel/binary installer with GPU support. It would be great if we could have some built-in model that end users can run to understand the performance improvements. For example, the workflow could be:
- `pip install neuron` # install the latest NEURON wheel containing CoreNEURON + NMODL with GPU support
- start Python
- import cell type X # MOD files are already compiled, like neurondemo
- instantiate N cells
- some helper function for creating a network (?)
- run the simulation with NEURON
- enable CoreNEURON mode with GPU (see the sketch after this list)
- select Y GPUs
- run the simulation again
- a way to plot results or voltage traces
- a way to compare performance
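Regarding the CoreNEURON/GPU steps above, today's `neuron.coreneuron` module already exposes the relevant switches; a minimal sketch, assuming a model has already been instantiated (selecting the number of GPUs is not a Python-level call and is only noted in a comment):

```python
from neuron import h, coreneuron

h.load_file("stdrun.hoc")

# ... instantiate cells / network here ...

# plain NEURON run
h.finitialize(-65)
h.continuerun(100)

# switch to CoreNEURON (cache-efficient mode is required)
h.CVode().cache_efficient(1)
coreneuron.enable = True
coreneuron.gpu = True  # offload to GPU if the build supports it
# note: the number of GPUs is usually controlled via the environment
# (e.g. CUDA_VISIBLE_DEVICES) / MPI ranks, not via a Python call

pc = h.ParallelContext()
h.finitialize(-65)
pc.set_maxstep(10)
pc.psolve(100)  # CoreNEURON executes the simulation
```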
Desirable Characteristics / Features
- model should be part of the binary distribution (i.e. already compiled MOD files, like neurondemo)
- necessary input (e.g. morphology file) should be included
- should be runnable via NEURON or CoreNEURON (with / without GPU)
- sufficiently complex / large for GPU execution
- easy to scale / increase the number of cells (?)
- taken from an interesting scientific use case (?)
- should be easy to demonstrate via 10-20 lines of Python code (implementation details could be hidden in some helper classes)
- overall, this shouldn't be too complex :)
@nrnhines @ramcdougal @adamjhn: you have a better idea about what could be interesting for users. Examples or ideas would be very helpful here.
We do have ringtest, but that's not ideal considering the above desirable characteristics.
The ringtest can be expanded with a network complexity option to give it interesting network connectivity. Though that does not entirely satisfy your last two points.
This is a good topic for discussion at the developer meeting.
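For context, the heart of ringtest is a chain of spiking cells, each connected to the next by a NetCon; a toy sketch of that idea (not the actual ringtest code, which is more elaborate):

```python
from neuron import h

h.load_file("stdrun.hoc")

N = 16
cells = []
for _ in range(N):
    soma = h.Section()
    soma.L = soma.diam = 10
    soma.insert("hh")
    cells.append(soma)

# connect each cell's spike to a synapse on the next one, closing the ring
syns, ncs = [], []
for i, src in enumerate(cells):
    tgt = cells[(i + 1) % N]
    syn = h.ExpSyn(tgt(0.5))
    nc = h.NetCon(src(0.5)._ref_v, syn, sec=src)
    nc.threshold, nc.delay, nc.weight[0] = -20, 1, 0.01
    syns.append(syn)
    ncs.append(nc)

# kick the first cell once to start activity circulating around the ring
stim = h.NetStim()
stim.number, stim.start = 1, 0
kick = h.NetCon(stim, syns[0])
kick.delay, kick.weight[0] = 0, 0.01

h.finitialize(-65)
h.continuerun(100)
```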
I was motivated by @ramcdougal's Google Colab example from the CNS 2021 course. Maybe @ramcdougal has more ideas for creating a computationally expensive model?
Michael and I had a brief discussion. Another possibility we discussed was:
- Use existing model(s) that we already use for benchmarking purposes: traub, olfactory bulb, reduced_dentate, channel_benchmark
- Compile mod files and package them in the wheel (much like neurondemo)
- Create a Python module called `showcase` or `benchmark` that can be used with an API like the one below (just for demonstration purposes):
```python
from neuron import h
from neuron import coreneuron
from neuron import showcase

model = showcase.select("traub")  # load the necessary library and download scripts, if any
model.ncells(X)
model.some_other_parameter_of_model(Y)

# initialise and run with NEURON
v_nrn = h.Vector()
v_nrn.record(...._ref_v, sec=...)
h.stdinit()
model.run(tstop=100)

# enable CoreNEURON, run with CoreNEURON
coreneuron.enable = True
v_cnrn = h.Vector()
v_cnrn.record(...._ref_v, sec=...)
h.stdinit()
model.run(tstop=100)

# enable GPU support and run again with CoreNEURON
coreneuron.gpu = True
v_cnrn_gpu = h.Vector()
v_cnrn_gpu.record(...._ref_v, sec=...)
h.stdinit()
model.run(tstop=100)

# plot the trajectories and compare timings here
```
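That last comment could be as simple as the following sketch, which reuses the hypothetical `model` and recorded vectors from the snippet above and adds a time vector recorded via `h._ref_t`:

```python
import time
import matplotlib.pyplot as plt
from neuron import h

# record the time axis alongside the voltages (set up before running)
t_vec = h.Vector()
t_vec.record(h._ref_t)

def timed_run(tstop=100):
    """Run the hypothetical showcase model and report wall-clock time."""
    t0 = time.perf_counter()
    model.run(tstop=tstop)
    print(f"run took {time.perf_counter() - t0:.2f} s")

# after the three runs above:
plt.plot(t_vec, v_nrn, label="NEURON")
plt.plot(t_vec, v_cnrn, "--", label="CoreNEURON")
plt.plot(t_vec, v_cnrn_gpu, ":", label="CoreNEURON + GPU")
plt.xlabel("t (ms)")
plt.ylabel("v (mV)")
plt.legend()
plt.show()
```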
Some additional comments:
- We do not want to "package everything" inside the wheel due to size limitations.
- Mod files could be packaged as a shared library
- Extra input data and support files could be downloaded on the fly during `model = showcase.select("traub")` (see the sketch after this list)
- The `showcase` module is a convenient Python wrapper around the existing models
- Have to see what the API could look like, considering the various parameters exposed by each model
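A rough sketch of what `select()` might do internally; `h.nrn_load_dll` is the existing call for loading a precompiled mechanism library, while everything else here (URL, paths, caching scheme) is hypothetical:

```python
import pathlib
import urllib.request
from neuron import h

# hypothetical location of per-model input data (e.g. a Zenodo record)
DATA_URL = "https://example.org/neuron-showcase"

def select(name: str) -> pathlib.Path:
    """Load precompiled mechanisms and fetch the model's inputs on demand."""
    cache = pathlib.Path.home() / ".neuron-showcase" / name
    cache.mkdir(parents=True, exist_ok=True)
    # MOD files ship precompiled inside the wheel as one shared library
    h.nrn_load_dll("/path/inside/wheel/libshowcasemech.so")  # hypothetical path
    # extra inputs (morphologies etc.) are downloaded on first use
    archive = cache / "inputs.zip"
    if not archive.exists():
        urllib.request.urlretrieve(f"{DATA_URL}/{name}.zip", archive)
    return cache
```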
Although conceptually perhaps not the best choice, it is clearly technically simplest to have the "showcase" shared library installed via the neuron wheel. This library would be the union of all the mod files used by the showcase models. This is very similar to what is done for neurondemo. An alternative is to have a zip file for each showcase model and arch that contains the arch-specific shared library (at least the shared library is not Python-version dependent).
I like the overall approach.
A few thoughts:
- Model data could be (should be?) hosted on Zenodo, so it can be easily fetched and we're sure it'll be persistent for a long time
- The user parameters of a model come along with the model, and `showcase` takes care of exposing them via a `params` dataclass (see the sketch below)
- I think in the future NMODL JIT will be the way to go for showcase models, but right now it probably makes sense to build one .so with the union of all needed mods and ship it along with the wheel.
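The `params` idea could look something like this (field names are made up, just to illustrate the dataclass approach):

```python
from dataclasses import dataclass

@dataclass
class TraubParams:
    # hypothetical knobs the traub showcase model might expose
    ncells: int = 128
    tstop: float = 100.0        # ms
    record_from: str = "soma"   # section to record voltage from

params = TraubParams(ncells=1024)  # scale the model up for a GPU run
```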
With Colab or models that are in a file, you can always just call `nrnivmodl` (works on all platforms) before importing `neuron`.
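For completeness, that workflow looks roughly like this (the `mod/` directory name is an assumption):

```python
import subprocess

# compile the model's MOD files before neuron is imported
subprocess.run(["nrnivmodl", "mod"], check=True)

from neuron import h  # picks up the freshly built mechanisms from the current directory
```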
@ramcdougal: the reason we are thinking of shipping a pre-compiled shared library is to avoid the need for a full development environment, especially the NVIDIA SDK required for GPU execution. With the approach described above, we can easily run the models, and they can also serve as quick/easy benchmarks on different CPUs/GPUs.