Dangerous conflict potential of user and system code
I was working with pygenn and defined a connectivity init snippet like so:
from pygenn.genn_model import (create_cmlf_class,
                               create_custom_sparse_connect_init_snippet_class)

# Each presynaptic neuron connects to a contiguous block of
# num_post/num_pre postsynaptic neurons
ors_orns_connect = create_custom_sparse_connect_init_snippet_class(
    "or_type_specific",
    row_build_code=
        """
        const unsigned int row_length= $(num_post)/$(num_pre);
        const unsigned int offset= $(id_pre)*row_length;
        for (unsigned int i= 0; i < row_length; i++) {
            $(addSynapse, (offset + i));
        }
        $(endRow);
        """,
    calc_max_row_len_func=create_cmlf_class(
        lambda num_pre, num_post, pars: int(num_post / num_pre))()
)
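For context, here is a minimal sketch of how such a snippet would typically be attached to a synapse population via init_connectivity, assuming the PyGeNN 4.x API; the model name, population sizes, LIF parameters and the StaticPulse/DeltaCurr choices below are hypothetical and not part of the original report:

from pygenn.genn_model import GeNNModel, init_connectivity

# Hypothetical model and populations, purely to show where the snippet plugs in
model = GeNNModel("float", "ors_orns_example")

lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -60.0, "Vreset": -60.0,
              "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 5.0}
lif_init = {"V": -60.0, "RefracTime": 0.0}

ors = model.add_neuron_population("ors", 10, "LIF", lif_params, lif_init)
orns = model.add_neuron_population("orns", 100, "LIF", lif_params, lif_init)

# Each of the 10 "ors" neurons gets a contiguous block of 100/10 = 10 "orns"
# targets, as defined by the row_build_code above
model.add_synapse_population(
    "ors_orns", "SPARSE_GLOBALG", 0, ors, orns,
    "StaticPulse", {}, {"g": 1.0}, {}, {},
    "DeltaCurr", {}, {},
    init_connectivity(ors_orns_connect, {}))

model.build()
model.load()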
However, when running with the single-threaded CPU backend, this code was generated inside another loop that also uses i as its loop index:
// Synapse groups with sparse connectivity
{
    // merged synapse connectivity init group 0
    for(unsigned int g = 0; g < 1; g++) {
        const auto *group = &mergedSynapseConnectivityInitGroup0[g];
        memset(group->rowLength, 0, group->numSrcNeurons * sizeof(unsigned int));
        for (unsigned int i = 0; i < group->numSrcNeurons; i++) {
            // Build sparse connectivity
            while(true) {
                const unsigned int row_length= group->numTrgNeurons/group->numSrcNeurons;
                const unsigned int offset= i*row_length;
                for (unsigned int i= 0; i < row_length; i++) {
                    do {
                        const unsigned int idx = (i * group->rowStride) + group->rowLength[i];
                        group->ind[idx] = (offset + i);
                        group->rowLength[i]++;
                    }
                    while(false);
                }
                break;
            }
        }
    }
}
This obviously did not work and ended in a segmentation fault.
I wonder whether, as a rule, we should always use something like __i or another convention for code-generated indices that are highly unlikely to be chosen by a user (this is what Brian 2 does, afaik).
Seems like a good plan - otherwise these issues are pretty much undebuggable.
I've also noticed this bug, but in my case it did not cause a segmentation fault and the code ran. Essentially, the neuron's $(id) gets messed up inside a for loop that uses i as a local variable. Debugging with std::cout << $(id) << std::endl; gave me some insight: it appears that GeNN replaces $(id) with i, the neuron's index, so any for loop that declares its own i shadows it.
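To illustrate the mechanism (a minimal sketch of a plain textual substitution, not GeNN's actual code generator): substituting the neuron index as i also rewrites the $(id) occurrences inside the user's loop, whereas a reserved name such as __i would not clash.

# Minimal sketch of the name collision; this is NOT GeNN's code generator,
# just a plain textual substitution for illustration.
sim_code = """
for (int i = 5; i < 8; i++) {
    std::cout << $(id) << std::endl;   // user expects the neuron index here
}
"""

# Substituting the neuron index as plain "i" collides with the user's loop
# variable: every $(id) inside the loop now reads the loop counter instead.
print(sim_code.replace("$(id)", "i"))

# A reserved name like "__i" is very unlikely to be chosen by a user, so the
# substitution no longer clashes with user-declared variables.
print(sim_code.replace("$(id)", "__i"))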
Here's a minimal example illustrating this:
from pygenn.genn_model import create_custom_neuron_class, GeNNModel

loop_bug = create_custom_neuron_class(
    "loop_bug",
    sim_code="""
    // Print $(id) outside the loop
    std::cout << "Outside loop $(id): " << $(id) << std::endl;
    // Print $(id) inside the loop
    for (int i=5; i<8; i++){
        std::cout << "Inside loop $(id): " << $(id) << std::endl;
    }
    std::cout << std::endl;
    """,
)

model = GeNNModel("double", "loop_bug")
model.dT = 0.001

pop = model.add_neuron_population(
    pop_name="pop",
    num_neurons=2,
    neuron=loop_bug,
    param_space={},
    var_space={},
)

model.build()
model.load()

for i in range(1):
    model.step_time()
Here's the output I get (with comments added); note that $(id) inside the string literals has also been replaced with i:
Outside loop i: 0 # correct $(id) of first neuron
Inside loop i: 5 # incorrect $(id)
Inside loop i: 6 # incorrect $(id)
Inside loop i: 7 # incorrect $(id)
Outside loop i: 1 # correct $(id) of second neuron
Inside loop i: 5 # incorrect $(id)
Inside loop i: 6 # incorrect $(id)
Inside loop i: 7 # incorrect $(id)
Changing the for loop's local variable from i to ii fixes this. As @tnowotny suggests, I will see if I can replace the code-generated indices with __i etc.
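For reference, a sketch of that workaround applied to the minimal example above; only the user-level loop variable is renamed, everything else is unchanged:

loop_bug_fixed = create_custom_neuron_class(
    "loop_bug_fixed",
    sim_code="""
    // Print $(id) outside the loop
    std::cout << "Outside loop $(id): " << $(id) << std::endl;
    // Using ii avoids colliding with the i that $(id) is substituted to
    for (int ii=5; ii<8; ii++){
        std::cout << "Inside loop $(id): " << $(id) << std::endl;
    }
    std::cout << std::endl;
    """,
)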