Different synaptic behaviour with/without `prefs.codegen.target = 'numpy'`
Hello!
I wanted to understand how the synapses work and which data is available when, so I built the following debug-dumping code:
This code does nothing else than dump `i`, `j`, and any additional variables your neurons/synapses might have. This data is dumped on different synaptic pathways: `on_pre` (with delay) as well as `on_pre_without_delay` and `on_post` (both without delay).
I've also attached a jupyter notebook for your convenience: jupyter_notebook.zip
from brian2 import *
import numpy as np
prefs.codegen.target = 'numpy'
%matplotlib inline
defaultclock.dt = 0.005*ms
seed(2)
N=5
leak_rate = 0.1
threshold = 1.0
v_rest = 0.5
delay = '(rand()*0.5 + 0.5)*ms'
refractory = '1*ms'
################################################################################
# Neuron Dynamics
# standard leaky integrate and fire (with refractory, without adaption)
# one additional variable which is only there for debug purposes
################################################################################
eqs = Equations('''
dv/dt = (-(v - v_rest) + I) / tau_m : 1 (unless refractory)
I : 1
tau_m : second
additional_variable_neuron : 1
''')
# now define the three neuron populations: input, reservoir, output
G = NeuronGroup(N, eqs,
                threshold='v > threshold',
                reset='v = v_rest',
                refractory=refractory,
                method='exact')
G.tau_m = '(rand()*9+1)*ms'
G.additional_variable_neuron = 'rand()'
################################################################################
# Synapses:
# sparse with 40%
# no self refering synapses (i!=j)
# one additional synapse pathway without delay for debugging purposes
################################################################################
# Reservoir -> Reservoir
S = Synapses(G, G,
             model='''
             additional_variable_synapse : 1
             ''',
             on_pre={
                 'pre': '''
                     test = debug_dump(1, i, j, additional_variable_synapse, additional_variable_neuron_pre, additional_variable_neuron_post)
                 ''',
                 'pre_without_delay': '''
                     test = debug_dump(2, i, j, additional_variable_synapse, additional_variable_neuron_pre, additional_variable_neuron_post)
                 '''
             },
             on_post='''
                 test = debug_dump(3, i, j, additional_variable_synapse, additional_variable_neuron_pre, additional_variable_neuron_post)
             ''')
S.connect(condition='i!=j', p=0.4)
S.additional_variable_synapse = 'rand()<0.75'
S.pre.delay = delay
S.pre_without_delay.delay = 0
S.post.delay = 0
################################################################################
# Debug functions which show what's going on in the synapse
################################################################################
# debugging functions
@implementation('numpy', discard_units=True)
@check_units(pathway=1, i=1, j=1, additional_variable_synapse=1,
             additional_variable_neuron_pre=1, additional_variable_neuron_post=1, result=1)
def debug_dump(pathway, i, j, additional_variable_synapse,
               additional_variable_neuron_pre, additional_variable_neuron_post):
    # passing strings in brian2-equation-strings is either weird or doesn't work at all, thus:
    if pathway == 1:
        pathway_str = 'on_pre, with delay'
    elif pathway == 2:
        pathway_str = 'on_pre, without delay'
    else:
        pathway_str = 'on_post, without delay'
    print('''Dumping all values occuring on this call of {}:
i = {}
j = {}
additional_variable_synapse = {}
additional_variable_neuron_pre = {}
additional_variable_neuron_post = {}
'''.format(pathway_str, i, j, additional_variable_synapse,
           additional_variable_neuron_pre, additional_variable_neuron_post))
    return 1
################################################################################
# Monitors
################################################################################
spikemonitor = SpikeMonitor(G)
################################################################################
# Other
################################################################################
# setting initial state
G.v = 'rand() * 0.5 + 0.5'
G.I = 'rand() * 2'
# network definition
net = Network([G, S, spikemonitor])
# run statement
net.run(3*ms, report='text')
This results in output like this:
Starting simulation at t=0. s for a duration of 3. ms
Dumping all values occuring on this call of on_pre, without delay:
i = [1]
j = [4]
additional_variable_synapse = [1.]
additional_variable_neuron_pre = [0.20464863]
additional_variable_neuron_post = [0.26682728]
Dumping all values occuring on this call of on_post, without delay:
i = [2 3 4]
j = [1 1 1]
additional_variable_synapse = [1. 1. 1.]
additional_variable_neuron_pre = [0.61927097 0.29965467 0.26682728]
additional_variable_neuron_post = [0.20464863 0.20464863 0.20464863]
Dumping all values occuring on this call of on_pre, without delay:
i = [4]
j = [1]
additional_variable_synapse = [1.]
additional_variable_neuron_pre = [0.26682728]
additional_variable_neuron_post = [0.20464863]
Dumping all values occuring on this call of on_post, without delay:
i = [0 1 2 3]
j = [4 4 4 4]
additional_variable_synapse = [0. 1. 0. 1.]
additional_variable_neuron_pre = [0.33033482 0.20464863 0.61927097 0.29965467]
additional_variable_neuron_post = [0.26682728 0.26682728 0.26682728 0.26682728]
Dumping all values occuring on this call of on_pre, with delay:
i = [1]
j = [4]
additional_variable_synapse = [1.]
additional_variable_neuron_pre = [0.20464863]
additional_variable_neuron_post = [0.26682728]
Dumping all values occuring on this call of on_pre, with delay:
i = [4]
j = [1]
additional_variable_synapse = [1.]
additional_variable_neuron_pre = [0.26682728]
additional_variable_neuron_post = [0.20464863]
And so on and so on ....
Note that `i`, `j`, and all the other variables come in arrays of length 1 or longer. Synaptic events which happen at exactly the same point in time are bundled into a vector, as they should be! Analysis of bigger networks, or with `dt = 1*ms`, shows that this also holds true when different neurons spike at the same time.
So far, so good! Very enlightening, and I think a hot candidate to be added to the examples section of the documentation.
BUT!
Why did I file a bug?
If you comment out the `prefs.codegen.target = 'numpy'`, or change it to `cython` or `auto`, the behaviour changes drastically; instead, it results in the following (if using the jupyter notebook to reproduce the bug, be sure to restart the kernel!):
Dumping all values occuring on this call of on_pre, without delay:
i = 4
j = 1
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.26682727510286663
additional_variable_neuron_post = 0.2046486340378425
Dumping all values occuring on this call of on_post, without delay:
i = 0
j = 4
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.3303348210038741
additional_variable_neuron_post = 0.26682727510286663
Dumping all values occuring on this call of on_post, without delay:
i = 1
j = 4
additional_variable_synapse = 0.0
additional_variable_neuron_pre = 0.2046486340378425
additional_variable_neuron_post = 0.26682727510286663
Dumping all values occuring on this call of on_post, without delay:
i = 2
j = 4
additional_variable_synapse = 0.0
additional_variable_neuron_pre = 0.6192709663506637
additional_variable_neuron_post = 0.26682727510286663
Dumping all values occuring on this call of on_post, without delay:
i = 3
j = 4
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.29965467367452314
additional_variable_neuron_post = 0.26682727510286663
Dumping all values occuring on this call of on_pre, with delay:
i = 4
j = 1
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.26682727510286663
additional_variable_neuron_post = 0.2046486340378425
Dumping all values occuring on this call of on_pre, without delay:
i = 0
j = 2
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.3303348210038741
additional_variable_neuron_post = 0.6192709663506637
Dumping all values occuring on this call of on_pre, without delay:
i = 0
j = 4
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.3303348210038741
additional_variable_neuron_post = 0.26682727510286663
Dumping all values occuring on this call of on_pre, without delay:
i = 2
j = 1
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.6192709663506637
additional_variable_neuron_post = 0.2046486340378425
Dumping all values occuring on this call of on_pre, without delay:
i = 2
j = 3
additional_variable_synapse = 1.0
additional_variable_neuron_pre = 0.6192709663506637
additional_variable_neuron_post = 0.29965467367452314
As you can see, we no longer have lists/arrays, and events which were previously bundled are now multiple single events.
Personally, I find this mismatch in behaviour between different code generation targets completely unacceptable. If I had to choose, I would prefer the numpy style by a huge margin, as it allows for vectorized code.
If it matters: I've used brian2 version 2.2.2.1 and Python 3.7.3.
Thanks for looking into it!
This behaviour is as intended and mentioned in the documentation: https://brian2.readthedocs.io/en/stable/advanced/functions.html#arrays-vs-scalar-values-in-user-provided-functions
I don't really see a way around it, as all generated code except for `numpy` uses a loop over neurons/synapses. Standard functions in C++ such as `exp` only act on a single value (and not on an array or a vector), so changing this approach to something like a vectorized call for all neurons would be a massive change, and I'm not seeing a clear benefit. For debugging you can always switch to the `numpy` target (you can also do this only for parts of your objects by setting the `codeobj_class` argument to `NumpyCodeObject`, although this is not well documented at the moment, unfortunately).
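In case it helps, here is a minimal sketch of such a per-object override, adapted from the code above (the import path for `NumpyCodeObject` is my assumption and the snippet is untested):

from brian2.codegen.runtime.numpy_rt import NumpyCodeObject  # assumed import path

# Only this Synapses object is forced to the numpy runtime; the rest of the
# simulation keeps the default target, so debug_dump receives arrays for
# this object and scalars everywhere else.
S_debug = Synapses(G, G,
                   model='additional_variable_synapse : 1',
                   on_pre='test = debug_dump(1, i, j, additional_variable_synapse, additional_variable_neuron_pre, additional_variable_neuron_post)',
                   codeobj_class=NumpyCodeObject)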
A final remark: that you can use `debug_dump` with the Cython code generation target at all is actually somewhat of a bug – you did not define a Cython implementation of this function; it only works "by accident" because Cython can call Python functions, but it is really not intended to be used this way.
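For completeness, a proper Cython implementation would look roughly like the sketch below, i.e. stacking a second `@implementation('cython', ...)` decorator with a code string on the same Python function. This is only an illustration and untested; note that with the Cython target the function is called once per synaptic event, so all arguments arrive as scalars:

from brian2 import implementation, check_units

@implementation('cython', code='''
    # untested sketch: called once per synaptic event, scalar arguments
    cdef double debug_dump(double pathway, double i, double j,
                           double additional_variable_synapse,
                           double additional_variable_neuron_pre,
                           double additional_variable_neuron_post):
        print('pathway %d: i=%d, j=%d, syn=%f, pre=%f, post=%f' %
              (int(pathway), int(i), int(j), additional_variable_synapse,
               additional_variable_neuron_pre, additional_variable_neuron_post))
        return 1.0
''')
@implementation('numpy', discard_units=True)
@check_units(pathway=1, i=1, j=1, additional_variable_synapse=1,
             additional_variable_neuron_pre=1, additional_variable_neuron_post=1,
             result=1)
def debug_dump(pathway, i, j, additional_variable_synapse,
               additional_variable_neuron_pre, additional_variable_neuron_post):
    # numpy implementation, as in the code above (receives arrays)
    print(pathway, i, j, additional_variable_synapse,
          additional_variable_neuron_pre, additional_variable_neuron_post)
    return 1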
Dear Marcel,
Thank you for taking the time to read through that wall of text. On the recommendation of a concerned workmate I was actually working on making it more readable, but it seems you were faster.
My particular problem aside (I'm working on other angles of attack right now), would you consider a cleaned version of above code as a worthwhile addition to the documentation?
For me, as someone just getting into brian2, seeing what happens here was certainly very enlightening. I'm sure other beginners would learn from this as well.
A concrete proposal: I clean up and enhance the code a little further (make the neurons spike at handpicked times, even make two neurons spike at once, and remove the additional_variable_* stuff because it doesn't help as much as I originally thought). Then we add that code to the examples section and link to that example in the paragraph you mentioned: https://brian2.readthedocs.io/en/stable/advanced/functions.html#arrays-vs-scalar-values-in-user-provided-functions
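For the handpicked spike times, I'm thinking of something along the lines of a SpikeGeneratorGroup driving the existing group, e.g. the following sketch (untested, with N and G as defined in the code above):

from brian2 import SpikeGeneratorGroup, Synapses, ms

# fire neuron 0 at 1 ms and neurons 1 and 3 simultaneously at 2 ms, so the
# "events bundled into a vector" case shows up deterministically
indices = [0, 1, 3]
times = [1, 2, 2]*ms
spike_source = SpikeGeneratorGroup(N, indices, times)
S_in = Synapses(spike_source, G, on_pre='v_post += 0.6')
S_in.connect(j='i')  # one-to-one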
Does this sound reasonable?
Thanks again for your support,
Florian