
NEURON crashes and does not recover when voltage gets unrealistic

DrTaDa opened this issue 2 years ago • 7 comments

Context

Overview of the issue

When creating neuron models, it happens that some sets of parameters give unrealistic voltage behaviour. These unrealistic behaviours result in a crash of NEURON.

Expected result/behavior

NEURON should not crash even when unrealistic voltages are reached; instead, it should fail gracefully with a recoverable error.

NEURON setup

  • Version: 8.0.0
  • Installation method: pip
  • OS + Version: Red Hat Enterprise Linux Server release 7.9 (Maipo)
  • Compiler + Version: gcc 4.8.5 (not used for compilation)

Minimal working example - MWE

In the attached file, run crash.py.

reproduce_crash.zip

Logs

CVode-- Warning: Internal t = 1300 and h = 4.06649e-61
are such that t + h = t on the next step.
The solver will continue anyway.

CVode-- At t = 1300 and h = 1.55124e-66, the corrector
convergence failed repeatedly or with |h| = hmin.

CVode 0x99b530 <__main__.TestCell object at 0x7fffed9ebe50>.soma[0] advance_tn failed, err=-7.
err=-7
NEURON: variable step integrator error
 near line 0
 ^
        fadvance()
Traceback (most recent call last):
  File "crash.py", line 120, in <module>
    h.fadvance()
RuntimeError: hoc error

DrTaDa avatar Mar 15 '22 13:03 DrTaDa

There are two underlying issues here:

  1. This isn't the main problem, but it's the one asked about above: the integration problem can't be solved with the given solver settings. The solution here (and often in related problems) is simply to ask NEURON to compute more accurate values so that things never get out of hand in the first place. There are multiple ways to do this, but the simplest is to set a lower global error tolerance using CVode.atol.

    In particular, you can add

     h.CVode().atol(1e-6)
    

    right before the

     h.cvode_active(1)
    

    That'll at least allow the integration to complete, but the real problem is:

  2. That's a really strong hyperpolarizing current (were you trying to provide a depolarizing current?). Even once we fix the integration as above, it'll drop the membrane potential below -5 V (not mV... V).

A few other comments and suggestions:

  • The global h.v_init is not used by h.finitialize. In particular, a call to

      h.finitialize()
    

    without any membrane potential to set everything to will not change the values. It so happens that the default is -65, so you don't see any issue here, but if you set h.v_init = -61 you'd still start at -65.

  • Instead of

      iclamp = h.IClamp(icomp.x, sec=icomp.sec)
    

    write the equivalent and shorter

      iclamp = h.IClamp(icomp)
    
  • If you know expressions for vector values in advance, it's simpler and more efficient to just create the vector with those values from the beginning, e.g.

      time_vec = h.Vector([0, delay, delay, stim_end, stim_end, totduration])
    
  • Arguably subjective, but: True and False are generally more readable than 1 and 0.

    i.e.

      h.cvode_active(True)
    

    and

      current_vec.play(iclamp._ref_amp, time_vec, True)
    

    (You don't need to specify the sec= argument here unless you're using threaded simulation...)
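
    The h.finitialize point above trips people up often. Here is a minimal pure-Python sketch of the gotcha (a hypothetical stand-in, not NEURON's actual implementation): an argument-less call uses the function's own built-in default, and a separately-set global is never consulted.

    ```python
    # Hypothetical stand-in for h.finitialize, illustrating why assigning a
    # global like h.v_init has no effect on an argument-less call: the
    # default is baked into the function, and the global is never read.
    v_init = -61.0  # analogous to setting h.v_init = -61

    def finitialize(v=-65.0):
        """Return the membrane potential everything is initialized to."""
        return v

    print(finitialize())        # -65.0: the global v_init above is ignored
    print(finitialize(v_init))  # -61.0: pass the desired value explicitly
    ```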

ramcdougal avatar Mar 15 '22 14:03 ramcdougal

I believe you can continue from that error if you do the run from within a try: ... except: .... That's typically useful in optimization to throw away a failed run and return a very large error value.
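
A minimal sketch of this try/except pattern, where run_protocol is a hypothetical stand-in for the code that calls h.fadvance() and may raise RuntimeError on an integrator failure:

```python
# Sketch of wrapping a simulation run for use inside an optimizer: a failed
# run (e.g. a variable-step integrator error surfaced as RuntimeError) is
# discarded and scored with a very large error value instead of aborting.
BAD_SCORE = 1e9  # arbitrary large penalty for failed runs

def run_protocol(params):
    # Hypothetical stand-in for the NEURON run loop (h.finitialize(),
    # repeated h.fadvance(), ...). Here it just fails for "bad" parameters.
    if params.get("amp", 0.0) < -1.0:
        raise RuntimeError("hoc error")  # what NEURON raises on err=-7
    return abs(params.get("amp", 0.0))

def score(params):
    try:
        return run_protocol(params)
    except RuntimeError:
        return BAD_SCORE  # throw the run away, keep the optimizer alive

print(score({"amp": 0.5}))   # 0.5
print(score({"amp": -2.0}))  # 1000000000.0
```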

nrnhines avatar Mar 15 '22 14:03 nrnhines

@ramcdougal thank you for the insights. Changing the atol parameter seems to work for now.

@nrnhines Indeed, we wrap our neuron execution in a try/except. It works well when running the code sequentially but still results in a core dump when using ipyparallel.

DrTaDa avatar Mar 16 '22 15:03 DrTaDa

I'm not familiar with ipyparallel. Our intention/expectation is that an error on the NEURON side, in consequence of a call into NEURON from Python, be recoverable in the sense that we return to Python with a NULL value and a Python exception raised. I think it would help if we can reproduce the issue.

When I was rummaging around with your model prior to my earlier message, I experienced no benefit from reducing the absolute tolerance (just an earlier failure). I did notice that the DAE method and the fixed step method did not generate the error with your default tolerances. I also noticed that the tolerance scale factor (default 1.0) for cai was generally inappropriate for a value which is normally restricted to a much smaller dynamic range. Lastly, I found it helpful in my rummaging to give a name to the TestCell instance so that I could easily use the standard GUI tools. I.e.

class TestCell:
    def __init__(self):
        ...

    def __str__(self):
        return "TestCell"

Of course, that last is suitable only for a single instance model.

nrnhines avatar Mar 16 '22 17:03 nrnhines

@nrnhines thank you for your reply. I will try to produce an example but it might take me some time as it is running on our HPC solution (bb5).

For the rest, I am sorry but I am not very familiar with the inner workings of NEURON. Could you elaborate on "I also noticed that the tolerance scale factor (default 1.0) for cai was generally inappropriate for a value which is normally restricted to a much smaller dynamic range"?

DrTaDa avatar Mar 17 '22 10:03 DrTaDa

Some documentation is at https://nrn.readthedocs.io/en/latest/hoc/simctrl/cvode.html#CVode.atolscale In your case, something like

h.cvode.atolscale("cai", 1e-3) # micromolar scale instead of default millimolar

would suffice. Or you could specify the tolerance scale factor in your mechanisms/TC_cad.mod file as

STATE {
        cai             (mM)    <1e-6> : nanomolar for cvode atolscale
}

This is actually rarely important as voltage generally determines the cvode time step. But if voltage derivatives are constant over a long interval then cvode might choose a time step which allows a large error in the calculation of cai.
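
To make the atolscale point concrete: CVODES-style solvers weight the local error of each state y_i by roughly 1/(rtol·|y_i| + atol_i). A rough, illustrative sketch (the numbers are made up, and this is not NEURON's actual code):

```python
def error_weight(y, rtol=0.0, atol=1e-3, scale=1.0):
    # CVODES-style error weight: local error in y is measured against
    # rtol*|y| + atol*scale; a bigger weight means a tighter tolerance.
    return 1.0 / (rtol * abs(y) + atol * scale)

cai = 1e-4  # intracellular calcium, typically around 1e-4 mM or less

# With the default scale of 1.0, the allowed absolute error (atol = 1e-3)
# is ten times larger than cai itself, so cai can be computed very badly.
print(error_weight(cai))              # 1000.0
# Scaling cai's atol by 1e-3 (cf. h.cvode.atolscale("cai", 1e-3)) makes the
# effective tolerance on cai roughly a thousandfold tighter (~1e6 weight).
print(error_weight(cai, scale=1e-3))
```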

nrnhines avatar Mar 17 '22 11:03 nrnhines

FYI @wvangeit

DrTaDa avatar Mar 17 '22 12:03 DrTaDa