
get_average_spike_rate could be more efficient

Open rowleya opened this issue 8 years ago • 10 comments

Currently the average spike rate is obtained by first recording all spikes, then reading them back, and finally working out the average rate. It would be more efficient to keep a count of all spikes on the core and then read back the counts from all cores to work out the averages. This would also mean that it could be done without recording the spikes in a population.

This could also be done for get_spike_counts, though that would require a count to be kept for each neuron (which might not be too bad); it would still be quicker to read than recording and reading back all the spikes.
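
A minimal sketch of what per-neuron counting on the core might look like; names such as `count_spike` and `N_NEURONS` are illustrative, not the actual sPyNNaker code:

```c
#include <stdint.h>

/* A sketch of per-neuron spike counting on a core (illustrative names,
 * not the actual sPyNNaker code).  Incrementing a counter when a neuron
 * fires avoids recording every spike just to count them later. */
#define N_NEURONS 256  /* neurons handled by this core (illustrative) */

static uint32_t spike_counts[N_NEURONS];  /* one counter per neuron */

/* Called from the neuron update loop whenever a neuron fires. */
static inline void count_spike(uint32_t neuron_id) {
    spike_counts[neuron_id]++;
}

/* Per-core total for get_average_spike_rate; the per-neuron counts
 * themselves serve get_spike_counts directly. */
static uint32_t core_spike_total(void) {
    uint32_t total = 0;
    for (uint32_t i = 0; i < N_NEURONS; i++) {
        total += spike_counts[i];
    }
    return total;
}
```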

rowleya avatar Nov 16 '17 11:11 rowleya

Another idea: have the cores themselves do the spike counts for their neurons, store them as another data region / provenance region, and have an SDP message asking for them.

For the average spike rate (assuming it's a single number) we can do the averaging on the host from the local averages. Look at distributed averages from SNEE for an example of how that works.
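
For reference, a sketch of how a SNEE-style distributed average is usually merged: each core reports a (sum, n_values) pair rather than a pre-divided local average, and the host combines the pairs exactly. All names here are illustrative:

```c
#include <stdint.h>

/* Each core reports a (sum, n_values) pair; the host then merges the
 * pairs exactly.  Illustrative names, not actual sPyNNaker code. */
typedef struct {
    uint32_t sum;       /* total spikes counted on the core */
    uint32_t n_values;  /* e.g. n_timesteps * n_neurons for that core */
} core_count_t;

/* Host-side merge: total(sum) / total(n_values). */
double merged_average(const core_count_t *cores, uint32_t n_cores) {
    uint64_t total_sum = 0;
    uint64_t total_n = 0;
    for (uint32_t i = 0; i < n_cores; i++) {
        total_sum += cores[i].sum;
        total_n += cores[i].n_values;
    }
    return (double) total_sum / (double) total_n;
}
```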

alan-stokes avatar Nov 16 '17 11:11 alan-stokes

You can do it by local averaging, but 1) this requires more work to be done on the core (including a division), and 2) it makes no difference at all to the storage required: you just sum up the spikes that occur on that core in total, not per neuron. I mentioned the second case as another possible saving, this time for get_spike_counts, but that isn't required.

rowleya avatar Nov 16 '17 11:11 rowleya

I would recommend counting per neuron. This is easier to implement and avoids the need to communicate between processors or even chips.

Christian-B avatar Nov 16 '17 11:11 Christian-B

The division is at the end when it's finished, so I'm not fussed about that cost. And it stops you from having to record the spikes, if we just have it as a standard provenance data item that's read when required.

alan-stokes avatar Nov 16 '17 11:11 alan-stokes

To do an average, we just need a sum either per neuron or per core, so no recording is needed. Keeping a sum per neuron helps with get_spike_counts, and doesn't increase the data enough to make it a problem in my opinion. I agree that having a region to store the final values is a good idea.

rowleya avatar Nov 16 '17 13:11 rowleya

Another good idea that will probably make it into a future release.

andrewgait avatar Jul 26 '19 17:07 andrewgait

Could be done via another recording region.

alan-stokes avatar Sep 27 '19 10:09 alan-stokes

The classic way to do a distributed average is using n_values and sum. The average is then total(sum) / total(n_values).

In our case we can compute n_values, as we know how many timesteps there are and, if applicable, how many neurons.

Therefore all we need to implement in C is a count-the-spikes function.

This gives us a useful total spike count and, nearly for free (a Python-side division), the average as well.
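
The host-side arithmetic is then trivial; a sketch, assuming the per-core totals have already been read back and summed (the function and parameter names are illustrative):

```c
#include <stdint.h>

/* Host-side step (a sketch; illustrative names): once the per-core
 * spike totals have been read back and summed, n_values needs no
 * measurement at all -- it is n_timesteps * n_neurons. */
double average_spike_rate(uint64_t total_spikes,
                          uint32_t n_timesteps, uint32_t n_neurons) {
    uint64_t n_values = (uint64_t) n_timesteps * n_neurons;
    return (double) total_spikes / (double) n_values;
}
```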

-- A step further is to have counts per recording interval (a time window in the streaming lingo).
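
Per-interval counting is a small change on the core; a sketch with illustrative names, assuming the core already knows where the recording-interval boundaries fall:

```c
#include <stdint.h>

/* A sketch of per-interval counting on the core (illustrative names,
 * not actual sPyNNaker code): one counter per recording interval
 * instead of a single running total. */
#define N_INTERVALS 100  /* recording intervals in the run (illustrative) */

static uint32_t interval_counts[N_INTERVALS];
static uint32_t current_interval = 0;

/* Called whenever a neuron on this core spikes. */
static inline void count_spike_in_interval(void) {
    interval_counts[current_interval]++;
}

/* Called at each recording-interval boundary. */
static inline void end_interval(void) {
    current_interval++;
}
```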

Christian-B avatar Sep 30 '19 08:09 Christian-B

"In our case we can compute n_values as we know how many timesteps and if applicable how many neurons." that's only true on models which don't spikes more than once per timestep per neuron. i remind you of the input models, which break this assumption.

alan-stokes avatar Sep 30 '19 08:09 alan-stokes

On models that spike more than once per timestep, n_values is still the number of timesteps!

For example, say something had the per-timestep spike counts 3, 0, 1, 4, 2, 1, 3.

n_values (count in the SNEE world) = 7 (7 timesteps in SpiNNaker)
Spike count (sum in the SNEE world) = 14
So the average spike rate = 14 / 7 = 2 (sum / count in the SNEE world)
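
For completeness, the same arithmetic as a self-contained runnable check (not sPyNNaker code):

```c
#include <stdio.h>

/* Checks the worked example above: spikes 3, 0, 1, 4, 2, 1, 3 over
 * 7 timesteps sum to 14, giving an average rate of 14 / 7 = 2. */
int main(void) {
    const unsigned spikes[] = {3, 0, 1, 4, 2, 1, 3};
    const unsigned n_values = sizeof spikes / sizeof spikes[0];
    unsigned sum = 0;
    for (unsigned i = 0; i < n_values; i++) {
        sum += spikes[i];
    }
    printf("average = %u / %u = %g\n", sum, n_values, (double) sum / n_values);
    return 0;
}
```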

Christian-B avatar Sep 30 '19 11:09 Christian-B