
Sensitivity of Runge-Kutta and Exponential Euler methods to the choice of integration timestep

Open · CloudyDory opened this issue 1 year ago · 3 comments

This may be less of an issue and more of a discussion. In BrainPy's documentation on numerical solvers, it is mentioned that

... we highly recommend you to use Exponential Euler methods. ... for such linear systems, the exponential Euler schema is nearly the exact solution.

This gives me the impression that Exponential Euler methods are better than the Runge-Kutta methods. However, I have checked how each numerical solver behaves on the Hodgkin-Huxley model under different integration timesteps, and I find a different result.

[Figure: membrane potential traces of the HH model integrated with rk2 (top) and exp_euler (bottom) for dt = 0.005, 0.01, 0.02, 0.04 ms]

As we can see, the Runge-Kutta methods are quite stable with respect to the choice of dt. Yes, they do fail earlier than the Exponential Euler methods when dt gets larger; but when they work, they seem to be more robust against changes in dt. I know that other simulation software, such as NEURON, also recommends against the Runge-Kutta methods. But can this be a reason that favors Runge-Kutta methods over other methods?

I am using BrainPy 2.4.5. The full simulation code is attached below:

import jax
import brainpy as bp
import brainpy.math as bm
import matplotlib
import matplotlib.pyplot as plt

#%% Environment configurations
jax.config.update("jax_enable_x64", True)
bm.set_platform('cpu')
print('Brainpy version: {}'.format(bp.__version__))

matplotlib.rc('font', family='Arial', weight='normal', size=16)

#%% HH model
Iext=10.;   ENa=50.;   EK=-77.;   EL=-54.387
C=1.0;      gNa=120.;  gK=36.;    gL=0.03

def dm(m, t, V):
    alpha = 0.1 * (V + 40) / (1 - bm.exp(-(V + 40) / 10))
    beta = 4.0 * bm.exp(-(V + 65) / 18)
    dmdt = alpha * (1 - m) - beta * m
    return dmdt

def dh(h, t, V):
    alpha = 0.07 * bm.exp(-(V + 65) / 20.)
    beta = 1 / (1 + bm.exp(-(V + 35) / 10))
    dhdt = alpha * (1 - h) - beta * h
    return dhdt

def dn(n, t, V):
    alpha = 0.01 * (V + 55) / (1 - bm.exp(-(V + 55) / 10))
    beta = 0.125 * bm.exp(-(V + 65) / 80)
    dndt = alpha * (1 - n) - beta * n
    return dndt

def dV(V, t, m, h, n, Iext):
    I_Na = (gNa * m ** 3.0 * h) * (V - ENa)
    I_K = (gK * n ** 4.0) * (V - EK)
    I_leak = gL * (V - EL)
    dVdt = (- I_Na - I_K - I_leak + Iext) / C
    return dVdt

hh_derivative = bp.JointEq([dV, dm, dh, dn])

#%% Run the simulation
dt = [0.005, 0.01, 0.02, 0.04]
integrate_method = ['rk2', 'exp_euler']
monitor = {'ts':[], 'V1':[], 'V2':[]}

for i in range(len(dt)):
    bm.set_dt(dt[i])
    
    Cell1 = bp.odeint(hh_derivative, method=integrate_method[0])
    Cell2 = bp.odeint(hh_derivative, method=integrate_method[1])
    runner1 = bp.IntegratorRunner(Cell1, monitors=['V'], inits=[0., 0., 0., 0.], args=dict(Iext=Iext), dt=dt[i])
    runner2 = bp.IntegratorRunner(Cell2, monitors=['V'], inits=[0., 0., 0., 0.], args=dict(Iext=Iext), dt=dt[i])
    
    runner1.run(300.0)
    runner2.run(300.0)
    
    monitor['ts'].append(runner1.mon['ts'])
    monitor['V1'].append(runner1.mon['V'].squeeze())
    monitor['V2'].append(runner2.mon['V'].squeeze())

#%% Plot the results
plt.figure(figsize=(12,8))
plt.subplot(2,1,1)
for (t,V,delta_t) in zip(monitor['ts'], monitor['V1'], dt):
    plt.plot(t, V, label='dt={} ms'.format(delta_t))
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (mV)')
plt.grid(True, linestyle='--')
plt.legend(frameon=False)
plt.title(integrate_method[0])

plt.subplot(2,1,2)
for (t,V,delta_t) in zip(monitor['ts'], monitor['V2'], dt):
    plt.plot(t, V, label='dt={} ms'.format(delta_t))
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (mV)')
plt.grid(True, linestyle='--')
plt.legend(frameon=False)
plt.title(integrate_method[1])

plt.tight_layout()
plt.show()

CloudyDory · Oct 07 '23

This is a very good comparison. It demonstrates that we need more accurate exponential numerical methods for integrating such complex dynamics.

Contributions are welcome.

As for the question "can this be a reason that favors Runge-Kutta methods over other methods?": RK methods are usually more computationally expensive than the Exponential Euler method, so we still recommend exp_euler.

chaoming0625 · Oct 08 '23

Here are the recommendations for numerical solvers on the NEURON forum. Does BrainPy have something similar to NEURON's "cnexp" solver?

CloudyDory · Oct 08 '23

Thanks for this information. I have googled and did not find any information about what the cnexp and derivimplicit methods are.

But I found two tutorials which may be useful to us in the future for developing these two numerical methods (see the sketch after the links):

  • https://bluebrain.github.io/nmodl/html/notebooks/nmodl-sympy-solver-derivimplicit.html
  • https://bluebrain.github.io/nmodl/html/notebooks/nmodl-sympy-solver-cnexp.html

chaoming0625 · Oct 08 '23