BUG: MPI simulations break for `dt` of certain size
As the title states, MPI simulations fail under some specific conditions. Oddly, it only breaks when `dt` is too large: I found that `dt=0.1` breaks, but `dt=0.05` passes for the code below.

It may or may not be related to #662, but this issue only comes up for the very particular case of adding a Poisson drive with `cell_specific=False`; other drives do not evoke this error.
```python
from hnn_core import MPIBackend, jones_2009_model, simulate_dipole

net = jones_2009_model()
weights_ampa_noise = {'L2_basket': 0.01, 'L2_pyramidal': 0.002,
                      'L5_pyramidal': 0.02}
net.add_poisson_drive('noise_global', rate_constant=2.0, location='distal',
                      weights_ampa=weights_ampa_noise, space_constant=100,
                      n_drive_cells=1, cell_specific=False)
with MPIBackend(n_procs=2):
    dpl = simulate_dipole(net, tstop=100)
```
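For completeness, the failing and passing `dt` values mentioned above can be passed explicitly when running the snippet (assuming your hnn-core version exposes `dt` as a keyword argument of `simulate_dipole`; otherwise it comes from the simulation parameters):

```python
# Assumes simulate_dipole accepts a `dt` keyword in this hnn-core version.
with MPIBackend(n_procs=2):
    dpl = simulate_dipole(net, tstop=100, dt=0.1)     # fails
    # dpl = simulate_dipole(net, tstop=100, dt=0.05)  # passes
```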
A quick search into the issue suggests that we should put `p.parallel.set_maxstep(10)` somewhere to resolve this?
https://github.com/neuronsimulator/nrn/issues/933
We actually do call `ParallelContext.set_maxstep()` on each rank here. (Note that `ParallelContext.set_maxstep()` will set the max step interval between synchronizations to the minimum netcon delay, not necessarily to the upper maximum of `10` entered as an arg in this function.)
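For context, here is a minimal NEURON-level sketch of the usual parallel run sequence (illustrative only, not hnn-core's actual code); the `10` passed to `set_maxstep` is only an upper bound, and the interval actually used is the minimum delay of NetCons whose source and target sit on different ranks:

```python
# Minimal sketch of a NEURON MPI run; illustrative, not hnn-core's code.
from neuron import h

pc = h.ParallelContext()

# ... create cells, register gids on each rank, and connect them with NetCons ...

# The argument is only an upper bound: the synchronization interval actually
# used is min(10, minimum delay of NetCons that cross rank boundaries).
max_step = pc.set_maxstep(10)

h.dt = 0.1          # a large dt relative to a tiny inter-rank delay is the suspected problem
h.finitialize(-65)
pc.psolve(100.0)    # integrate to tstop, exchanging spikes every max_step ms

pc.barrier()
pc.done()
```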
I suspect that you've actually created a scenario that is incompatible with integration across parallel machines: your `space_constant` is set to a large value and thus gives you a very small delay between cells that cannot be resolved between parallel machines with a large `dt`. See Michael Hines's comment here. @nrnhines is there a way around this, or is the solution here to put bounds on our user-accessible variables governing delay and integration time step?