sPyNNaker
get_average_spike_rate could be more efficient
Currently the average spike rate is obtained by first recording all spikes, then reading them back, and finally working out the average rate. It would be more efficient to keep a count of all spikes on each core and then read back the counts from all cores to work out the average. This would also mean that it could be done without recording the spikes in a population.
This could also be done for get_spike_counts, though that would require a count to be kept for each neuron (which might not be too bad); even so, it would still be quicker to read those counts than to record and read back all the spikes.
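A minimal sketch of the per-core counting idea (the names spike_count and on_spike are illustrative, not the actual sPyNNaker core code):

```c
#include <stdint.h>

/* Running total of spikes emitted by all neurons on this core. */
static uint32_t spike_count = 0;

/* Called whenever a neuron on this core fires, instead of (or as well
 * as) writing the spike to a recording region. */
static inline void on_spike(void) {
    spike_count++;
}

/* At the end of the run the host reads spike_count back from a known
 * location (e.g. a dedicated data region) and combines the per-core
 * totals to work out the average, with no spike recording needed. */
```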
Another idea: have the cores themselves do the spike counts for their neurons, store them as another data region / provenance region, and have an SDP message asking for them.
For the average spike rate (assuming it's a single number) we can do the averaging at host from local averages. Look at distributed averages from SNEE for an example of how it works.
You can do it by local averaging, but 1) this requires more work to be done on the core (including a division) and 2) it makes no difference at all to the storage required: you just sum up the spikes that occur on that core in total, not per neuron. The second case I mentioned was another possible saving, this time for get_spike_counts, but it isn't required.
I would recommend counting per neuron. This is easier to implement and avoids the need to communicate between processors or even chips.
The division is at the end, when it's finished, so I am not fussed about that cost. And it stops you from having to record the spikes, if we just have it as a standard provenance data item that's read when required.
To do an average, we just need a sum either per neuron or per core, so no recording is needed. Keeping a sum per neuron helps with get_spike_counts, and doesn't increase the data enough to make it a problem in my opinion. I agree that having a region to store the final values is a good idea.
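A sketch of the per-neuron variant, again with illustrative names and sizes rather than the real sPyNNaker data structures:

```c
#include <stdint.h>

#define N_NEURONS_ON_CORE 256  /* example size; set from the loaded config */

/* One counter per neuron on this core. */
static uint32_t spike_counts[N_NEURONS_ON_CORE] = {0};

/* Called when the neuron with the given local index fires. */
static inline void count_spike(uint32_t neuron_index) {
    spike_counts[neuron_index]++;
}

/* The host reads the whole array back once at the end: the individual
 * entries serve get_spike_counts directly, and summing them gives the
 * per-core total needed for the average. */
```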
Another good idea that will probably make it into a future release.
Could be done via another recording region.
The classic way to compute a distributed average is using n_values and sum. The average is then total(sum) / total(n_values).
In our case we can compute n_values, since we know how many timesteps there are and, if applicable, how many neurons.
Therefore all we need to implement in C is a count-the-spikes function.
This gives us a useful total spike count and, for nearly free (a Python-side division), the average as well.
-- A step further is to have counts per recording interval (a time window in the streaming lingo).
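A sketch of the host-side combination step (total(sum) / total(n_values)). In sPyNNaker this would really be Python; it is written in C here only to keep the examples in one language, and all names are illustrative:

```c
#include <stdint.h>

/* Totals read back from one core. */
typedef struct {
    uint64_t spike_sum;  /* total spikes counted on the core */
    uint32_t n_neurons;  /* neurons simulated on the core */
} core_totals_t;

/* Average spikes per neuron per timestep across all cores. Each core
 * contributes n_neurons * n_timesteps values, and since the timestep
 * count is the same for every core, n_values never has to be sent back. */
static double average_spike_rate(const core_totals_t *cores,
                                 uint32_t n_cores, uint32_t n_timesteps) {
    uint64_t total_spikes = 0;
    uint64_t total_values = 0;
    for (uint32_t i = 0; i < n_cores; i++) {
        total_spikes += cores[i].spike_sum;
        total_values += (uint64_t) cores[i].n_neurons * n_timesteps;
    }
    return (double) total_spikes / (double) total_values;
}
```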
"In our case we can compute n_values as we know how many timesteps and if applicable how many neurons." that's only true on models which don't spikes more than once per timestep per neuron. i remind you of the input models, which break this assumption.
On models that spike more than once per timestep, n_values is still the number of timesteps!
For example, say something had the spikes
3, 0, 1, 4, 2, 1, 3
n_values (count in the SNEE world) = 7 (7 timesteps in SpiNNaker)
Spike count (sum in the SNEE world) = 14
So the average spike rate = 14/7 = 2 (sum/count in the SNEE world)
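The worked example above in runnable form, showing that summing spikes per timestep handles models that spike more than once per timestep, because n_values is the number of timesteps rather than the number of spikes:

```c
#include <stdio.h>

int main(void) {
    /* Spikes emitted in each of the 7 timesteps. */
    const int spikes_per_timestep[] = {3, 0, 1, 4, 2, 1, 3};
    const int n_timesteps = 7;  /* n_values in the SNEE terminology */

    int sum = 0;
    for (int t = 0; t < n_timesteps; t++) {
        sum += spikes_per_timestep[t];  /* sum = 14 */
    }

    /* average spike rate = 14 / 7 = 2 */
    printf("average spike rate = %d / %d = %g\n",
           sum, n_timesteps, (double) sum / n_timesteps);
    return 0;
}
```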