
look into functions for uncertainty quantification and sensitivity analysis

Open jasmainak opened this issue 3 years ago • 5 comments

Leaving an open issue so we can get to it

jasmainak avatar May 12 '21 12:05 jasmainak

I am always surprised by how much applied math is influenced by meteorology. I've been thinking about implementing this sort of analysis by calculating the gradient of model output w.r.t. parameter changes. With some light digging, this seems to be formalized under "adjoint models": http://twister.caps.ou.edu/OBAN2019/Errico_BAMS_1997.pdf
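
As a rough illustration of the gradient idea (a forward finite-difference approximation, not a true adjoint method), something like the sketch below could estimate the sensitivity of a scalar dipole feature to each parameter. `simulate_summary` is a hypothetical stand-in for whatever function maps a parameter vector to a scalar summary of the simulated dipole:

```python
import numpy as np

def finite_diff_sensitivity(simulate_summary, params, rel_step=1e-2):
    """Forward-difference estimate of d(output)/d(param_i).

    simulate_summary : hypothetical callable mapping a 1D parameter vector
        to a scalar feature of the simulated dipole (e.g. peak amplitude).
    params : 1D array of baseline parameter values.
    """
    params = np.asarray(params, dtype=float)
    base = simulate_summary(params)
    grad = np.empty_like(params)
    for i in range(params.size):
        step = rel_step * max(abs(params[i]), 1e-12)
        perturbed = params.copy()
        perturbed[i] += step
        grad[i] = (simulate_summary(perturbed) - base) / step
    return grad
```

Note the cost: one extra simulation per parameter per gradient evaluation. The appeal of adjoint methods is that all sensitivities come from roughly a single extra adjoint run, independent of the number of parameters.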

ntolley avatar May 14 '21 15:05 ntolley

Didn't read the paper yet, but gradient-based sensitivity/uncertainty analysis was also done by Matti in some old papers on dipole estimation. What would be really killer is if you could propagate the uncertainties in the dipole estimate to uncertainties in the HNN parameters.

Do you know how much computation these "adjoint models" require?
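
As a hedged sketch of what that propagation could look like under a local linearization: if `J` is the Jacobian of the simulated dipole with respect to the parameters (e.g. from finite differences as above) and `Sigma_d` is the covariance of the dipole estimate, a Gauss-Newton style approximation of the parameter covariance is `(J.T @ inv(Sigma_d) @ J)^-1`. None of this is existing hnn-core functionality, and it is only valid close to the fitted parameters:

```python
import numpy as np

def propagate_dipole_uncertainty(jacobian, dipole_cov):
    """Linearized propagation of dipole uncertainty to parameter uncertainty.

    jacobian : (n_times, n_params) array, d(dipole)/d(params) at the fit point.
    dipole_cov : (n_times, n_times) covariance of the dipole estimate.

    Returns the (n_params, n_params) approximate parameter covariance
    (J^T Sigma_d^-1 J)^-1.
    """
    w = np.linalg.solve(dipole_cov, jacobian)   # Sigma_d^-1 @ J
    fisher = jacobian.T @ w                     # J^T Sigma_d^-1 J
    return np.linalg.inv(fisher)
```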

jasmainak avatar May 14 '21 15:05 jasmainak

This is tangential to the present discussion of how to do sensitivity analysis on our connected networks. This preprint is super-exciting because it would allow us to do a detailed analysis of our cells, and perhaps to tune them according to the brain region they're supposed to represent. It should drastically reduce the parameter space for network-level sensitivity analysis (although there's still going to be plenty left in the connectivity structure).

cjayb avatar May 15 '21 08:05 cjayb

It looks like they focus on single cells in this paper, since the Blue Brain team has already tried their hand at GPU-accelerated networks: https://github.com/BlueBrain/CoreNeuron

I do like the idea of constraining the parameter space to physiologically plausible single-cell behavior.

ntolley avatar May 15 '21 14:05 ntolley

Also I think the computation (and the quality of the adjoint model) will entirely depend on how many parameters are being explored, the desired resolution, and of course the length of the simulation. At the end of the day I think the technique boils down to running batch simulations over a range of parameters and characterizing the impact on simulated activity.
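
A minimal sketch of that batch view, with `run_simulation` as a hypothetical wrapper around an hnn-core simulation that returns a scalar feature of the simulated activity, and joblib used only to run the grid in parallel:

```python
import numpy as np
from joblib import Parallel, delayed

def sweep(run_simulation, param_grid, n_jobs=4):
    """Run simulations over a grid of parameter vectors.

    run_simulation : hypothetical callable, parameter vector -> scalar
        feature of the simulated activity (e.g. peak aggregate dipole).
    param_grid : (n_points, n_params) array of parameter settings to try.
    """
    outputs = Parallel(n_jobs=n_jobs)(
        delayed(run_simulation)(p) for p in param_grid)
    return np.asarray(outputs)
```

The resulting table of outputs against parameters is what the gradient estimates above would be computed from.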

One way this might be sped up/optimized is creating checkpoints where gradients are calculated on a small set of simulations, and the next batch of simulations is concentrated on regions of the parameter space with the largest gradients.
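
Continuing the sketch above, one way the checkpointing idea could look: estimate a crude local gradient magnitude from the current batch (nearest-neighbour differences here) and sample the next batch preferentially near the points with the largest gradients. This is illustrative only, not an established hnn-core workflow:

```python
import numpy as np

def next_batch(param_grid, outputs, n_new, scale=0.05, rng=None):
    """Propose the next batch of simulations near high-gradient regions.

    param_grid : (n_points, n_params) parameters already simulated.
    outputs : (n_points,) scalar outputs from those simulations.
    n_new : number of new parameter vectors to propose.
    """
    rng = np.random.default_rng(rng)
    # Crude gradient proxy: output change to the nearest neighbour divided
    # by the distance between them in parameter space.
    dists = np.linalg.norm(param_grid[:, None, :] - param_grid[None, :, :],
                           axis=-1)
    np.fill_diagonal(dists, np.inf)
    nn = dists.argmin(axis=1)
    nn_dist = dists[np.arange(len(outputs)), nn]
    grad_mag = np.abs(outputs - outputs[nn]) / nn_dist
    # Pick anchor points with probability proportional to gradient magnitude,
    # then jitter them to get new candidate parameter vectors.
    probs = grad_mag / grad_mag.sum()
    anchors = rng.choice(len(param_grid), size=n_new, p=probs)
    jitter = rng.normal(scale=scale, size=(n_new, param_grid.shape[1]))
    return param_grid[anchors] + jitter * param_grid.std(axis=0)
```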

ntolley avatar May 15 '21 14:05 ntolley