sPyNNaker
No checking for rounding to zero or overflowing int16 when scaling STDP parameters
Hi, this is an issue that has bitten me a few times in the past, so I suggest a fix be added to master.
https://github.com/SpiNNakerManchester/sPyNNaker/blob/ac1d2d2d3eeaab17c1b261ad4bae0d8b11a59bc8/spynnaker/pyNN/models/neuron/plasticity/stdp/weight_dependence/weight_dependence_additive.py#L49-L69
For certain large weight scales (`w`) and certain choices of STDP parameters, we can produce data spec parameter values that are either rounded to 0 or overflow the range of an int16, without any warning.
This is a problem because it can lead to very strange simulation results with no reasonable explanation available to the PyNN user.
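To make the failure modes concrete, here is a minimal sketch (plain Python, not sPyNNaker code; `scale_param` is a hypothetical stand-in for the fixed-point conversion) showing how multiplying a parameter by the weight scale and rounding can silently produce 0 or a value outside the int16 range:

```python
# Illustrative sketch (not the sPyNNaker implementation): scaling an STDP
# parameter into fixed point can silently round to zero or overflow int16.
INT16_MIN, INT16_MAX = -(1 << 15), (1 << 15) - 1

def scale_param(value, weight_scale):
    """Hypothetical fixed-point conversion: scale and round to an integer."""
    return int(round(value * weight_scale))

# A small parameter at a modest weight scale is barely representable...
print(scale_param(0.01, 100))   # -> 1

# ...a smaller one rounds to zero, silently disabling plasticity:
print(scale_param(0.001, 100))  # -> 0

# ...and a large scale pushes the result past the int16 maximum:
scaled = scale_param(2.0, 40000)
print(scaled, scaled > INT16_MAX)  # -> 80000 True
```

Neither case raises any error today, which is exactly why the simulation results look strange without explanation.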
I have implemented a potential fix for an additive weight dependence in:
https://github.com/SpiNNakerManchester/sPyNNaker/blob/644afd8ee7d422541a2920ce3bb0d25777dabd47/spynnaker/pyNN/models/neuron/plasticity/stdp/weight_dependence/weight_dependence_additive.py#L49-L91
Is this still an issue?
If yes, do you have a PR for your fix?
If there is no PR, do you have a sample script that replicates the error?
To my knowledge, no PR was created for this. A simple way to reproduce the error is to have plastic projections between large populations with fairly small STDP alpha/tau constants. When the populations are large enough (and the constants small enough), the automatic weight scaling can push scaled values outside the 16-bit integer limit, or round them to 0, without the end user knowing.
Looking at this a bit further, the overflow issue is already known about. I'm not sure anything can be done about it straightforwardly on the Python side, as you would have to assume that the synaptic weights will definitely be scaled beyond the 16-bit integer limit. (Note that the numbers being passed in here are, and probably should be, 32-bit integers; the scaling itself only takes place within the C code.)
Having said that, within the C code this is a known issue: see the note beginning at https://github.com/SpiNNakerManchester/sPyNNaker/blob/da9645e9505bdbab8f7d22f84dd70fc1d228b016/neural_modelling/src/neuron/plasticity/stdp/synapse_dynamics_stdp_mad_impl.c#L354. I'm not aware that we've come to any conclusions or resolutions on that front.
I would be inclined, possibly in both cases (rounding to zero and rounding to "large", though we need to decide what "large" means here, because the uint16 maximum probably isn't it), to raise warnings rather than errors when either case is detected. But I'm willing to hear arguments to the contrary.
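The warn-rather-than-error behaviour could be sketched as follows (hedged: `check_scaled_param` and its messages are hypothetical, not sPyNNaker API, and the int16 bound stands in for whatever "large" is eventually decided to mean):

```python
# Hedged sketch of the proposed check: warn (don't raise) when a scaled
# STDP parameter rounds to zero or leaves the int16 range. All names here
# are illustrative, not part of sPyNNaker.
import warnings

INT16_MIN, INT16_MAX = -(1 << 15), (1 << 15) - 1

def check_scaled_param(name, value, weight_scale):
    """Scale `value`, warning if the result is 0 or out of int16 range."""
    scaled = int(round(value * weight_scale))
    if value != 0 and scaled == 0:
        warnings.warn(
            f"STDP parameter {name}={value} rounds to 0 at weight scale "
            f"{weight_scale}; plasticity may be silently disabled")
    elif not INT16_MIN <= scaled <= INT16_MAX:
        warnings.warn(
            f"STDP parameter {name}={value} scales to {scaled}, outside "
            f"the int16 range [{INT16_MIN}, {INT16_MAX}]")
    return scaled
```

The simulation still runs with the value the user asked for, but the surprising behaviour now comes with an explanation attached.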
I think I'm right in saying that this issue was fixed by the STDP refactor, e.g. in https://github.com/SpiNNakerManchester/sPyNNaker/pull/1120, but if it needs reopening then that's fine too.