
[WIP] subtract exponential to remove offset (a.k.a. baseline_renormalize)

Open cjayb opened this issue 4 years ago • 14 comments

The PR is based on the gist shown here

The purpose is to demonstrate that the examples that set postproc=True (the default) in simulate_dipole do not change in any meaningful way, i.e., that qualitatively this method does the job.

Tests will fail, however, as they compare against the old piecewise-linear normalisation method.
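For reference, a minimal sketch of the comparison being made here, assuming the hnn-core API of this era (jones_2009_model and the postproc flag may differ in other versions):

```python
from hnn_core import jones_2009_model, simulate_dipole

# Build the default network twice and simulate with and without the
# in-place post-processing; the claim is that the two traces differ
# only by a smooth offset, not qualitatively.
net_raw = jones_2009_model()
dpl_raw = simulate_dipole(net_raw, tstop=170., n_trials=1, postproc=False)[0]

net_post = jones_2009_model()
dpl_post = simulate_dipole(net_post, tstop=170., n_trials=1, postproc=True)[0]

dpl_raw.plot()
dpl_post.plot()
```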

Closes #220

cjayb avatar Feb 11 '21 16:02 cjayb

Ping #265

cjayb avatar Feb 11 '21 16:02 cjayb

The gist is updated to show that the exponential fits for both L2 and L5 dipole moments scale with network size, i.e., it makes sense to use the fit values multiplied by the number of cells in the network.

I think the origin of the (perfectly) exponential decay is membrane capacitance charging from an initial zero value. It makes sense that this would take a lot longer for the large L5 vs. L2 pyramidals. Note however that this will be input-dependent: as soon as synapses start activating, the time constants will change. So it's not feasible to remove the effect of capacitance by a simple exponential fit, see below.
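As a sketch of the kind of fit described here (synthetic data stands in for a drive-free dipole trace; with hnn-core one would pass dpl.times and dpl.data['L5'] instead):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amp, tau, offset):
    # single exponential relaxing toward a constant offset
    return amp * np.exp(-t / tau) + offset

# synthetic stand-in for a drive-free L5 dipole moment (nAm vs. ms)
t = np.linspace(0., 200., 2001)
y = -0.5 * np.exp(-t / 70.) + 0.02 + 1e-4 * np.random.randn(t.size)

(amp, tau, offset), _ = curve_fit(exp_decay, t, y, p0=(-1., 50., 0.))
# amp and offset scale with network size, so per-cell fit values
# multiplied by the number of cells can be reused across networks
```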

In d70209c I've used values for the initial Vm of all pyramidal cell segments calculated as shown in this gist. The dipole moment in L2 cells (right plot) jumps immediately to the steady-state value, but L5 cells (left plot) still require considerable time to stabilise (due to their larger membrane area, and hence capacitance).

[Figure better_Vm: dipole moments with the improved initial Vm values; L5 (left) and L2 (right)]

The exponential fit for the L5 cells now has slightly different values, but it tells basically the same story as before.

[Figure: updated exponential fit for the L5 cells]

To illustrate my point about synaptic activity invalidating the steady-state values, look at the alpha-example output of the latest commit:

[Screenshot: alpha-example output with the latest commit]

compared with master:

[Screenshot: alpha-example output on master]

Sure enough, the baseline is now lifted to 0, but the exponential-based correction over-compensates, presumably because open synapses due to the drives shunt current along the dendrites and soma.

In summary, I think this PR explains both the initial "ringing" of the dipole moment and the slow low-amplitude drift. However, it also speaks against doing any form of implicit 'baseline renormalisation'. Note that even a constant removal will depend strongly on the length of the simulation.

cjayb avatar Feb 12 '21 11:02 cjayb

@cjayb could we perhaps zoom to discuss all that you have explored here so far? I think this is all very interesting and valuable. Has it been discussed at any of the group meetings so far?

jasmainak avatar Feb 13 '21 05:02 jasmainak

My conclusion based on this "experiment" is that no form of 'baseline renormalisation' can be physiologically justified. The leak current down (and through) the L5 pyramidal cell dendrites is always going to depend on the input drives, so no constant, linear or exponential function is going to be appropriate.

This calls for a principled use of "baselining": on the short time scales of a simulation, simply removing a (50 ms-or-so) baseline would likely do wonders. A linear detrending could optionally be applied (an exponential with a 1/2.62e-3 ≈ 380 ms time constant is very close to linear over short simulations).
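Something along these lines, with hypothetical helper names (times in ms):

```python
import numpy as np

def subtract_baseline(times, data, tmax=50.):
    # remove the mean over an initial baseline window
    return data - data[times < tmax].mean()

def detrend_linear(times, data):
    # optionally remove a linear trend fitted to the whole trace
    slope, intercept = np.polyfit(times, data, deg=1)
    return data - (slope * times + intercept)
```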

I'm labelling this as a bug until a principled decision can be made on how to proceed.

cjayb avatar Feb 17 '21 22:02 cjayb

Thinking more about it, perhaps "burn-in" is indeed a better name than "baseline" if we want to discard the first x ms.
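i.e., something like this hypothetical helper (a Dipole-aware version would trim dpl.times and dpl.data together):

```python
import numpy as np

def trim_burn_in(times, data, burn_in=50.):
    # discard the first burn_in ms rather than renormalizing
    keep = times >= burn_in
    return times[keep] - burn_in, data[keep]
```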

jasmainak avatar Feb 18 '21 04:02 jasmainak

@cjayb I'm looking at this PR again and also network_builder.py. It seems the values in state_init are currently hardcoded, and that's a point of contention. Do you think it would make sense to add a state_init parameter to the Cell object that initializes the state based on a dictionary/list? That way, if you want to change the initialization based on more principled methods, it should be easy to do so.
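A purely hypothetical sketch of what that could look like (apply_state_init and the dict format are not existing hnn-core API):

```python
from neuron import h

def apply_state_init(sections, state_init, v_default=-65.0):
    # sections: dict mapping section names to h.Section objects
    # state_init: dict mapping section names to initial voltages (mV)
    for name, sec in sections.items():
        v0 = state_init.get(name, v_default)
        for seg in sec:
            seg.v = v0
```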

jasmainak avatar May 07 '21 14:05 jasmainak

I'm beginning to think that state_init is pretty useless: the problem isn't that the seg.v values are too far from resting potential. Rather, the several overlapping mechanisms (hh2, ca, cat, ...) all have built-in state variables (m, n, ..., tau_this and tau_that). We need to figure out how to pass sane values for those states. I think it's possible, just haven't quite figured out how yet.

cjayb avatar May 11 '21 17:05 cjayb

That you can do already ... well, almost:

p_secs[sec_name]['mechs']['ca']['gbar_ca'] = 1e-3

Anything that's in the PARAMETER block should be possible to change in this way.

jasmainak avatar May 11 '21 19:05 jasmainak

I think not; I'm pretty sure we need to use this with type=1. We want to set the internal state variables, not just the external parameter values.
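For reference, a minimal NEURON sketch of the type=1 idea; the built-in hh mechanism and the value 0.05 here stand in for hnn-core's actual mechanisms and saved steady-state values:

```python
from neuron import h

soma = h.Section(name='soma')
soma.insert('hh')  # stand-in for hnn-core's hh2, ca, cat, ...

def set_states():
    # runs after the INITIAL blocks, so assignments here overwrite
    # the mechanism's own state initialization (m, h, n, ...)
    for seg in soma:
        seg.hh.m = 0.05  # hypothetical saved steady-state value

fih = h.FInitializeHandler(1, set_states)  # type=1: after INITIAL blocks
h.finitialize(-65.)
```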

cjayb avatar May 12 '21 15:05 cjayb

@cjayb I'm looking at this with fresh eyes. I agree with you that "baseline normalization" is not really justified. Instead we need to explain to users the concept of "steady state" in a dynamical system in a tutorial. With an optional "burn in" parameter somewhere.

And then in our tutorials, use an initialization through a method but mention the caveat that we have empirically observed this is similar to the "steady state" and are using this only because it "works" and to keep the simulation short. What do you think?

jasmainak avatar Dec 09 '21 20:12 jasmainak

> @cjayb I'm looking at this with fresh eyes. I agree with you that "baseline normalization" is not really justified. Instead we need to explain to users the concept of "steady state" in a dynamical system in a tutorial. With an optional "burn in" parameter somewhere.

I'm not Chris, but I vote +1 for this!

> And then in our tutorials, use an initialization through a method but mention the caveat that we have empirically observed this is similar to the "steady state" and are using this only because it "works" and to keep the simulation short. What do you think?

When you say "initialization through a method", do you mean just rebranding the baseline normalization code and clarifying that it's a hack we use in the examples?

rythorpe avatar Dec 10 '21 06:12 rythorpe

I like both suggestions! I understood Mainak's comment as Ryan did. I feel a simpler ad hoc 'normalisation', such as an exponential, might be more appropriate than the piecewise-linear solution that's in there now. As long as it's documented, it shouldn't really matter.

I think it still makes sense to investigate whether the internal state variables could be saved and set. If that turns out to be problematic, though, multi-trial simulations should use a more M/EEG-like epoching approach (one long simulation, with event markers). The latter might be preferable semantically anyway...
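For the save-and-set idea, NEURON's SaveState could be a starting point; a sketch (restoring across sessions and re-scheduling events would need extra care):

```python
from neuron import h
h.load_file('stdrun.hoc')

# run a burn-in to an approximate steady state, then snapshot all
# voltages and mechanism state variables
h.finitialize(-65.)
h.continuerun(200.)  # hypothetical burn-in duration (ms)
state = h.SaveState()
state.save()

# per trial: restore the snapshot and restart the clock
state.restore()
h.t = 0.  # drives/events must be (re)scheduled after this
```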

cjayb avatar Dec 10 '21 06:12 cjayb

I like the idea of subtracting a fitted exponential function as well. Are we cool with updating our ground truth default dpl.txt file?

rythorpe avatar Dec 10 '21 06:12 rythorpe

I'm fine with it. I'd say hnn-core has kind of become the standard now.

jasmainak avatar Dec 24 '21 05:12 jasmainak