mne-python
Q: Using OpenMEEG
I'd like to use OpenMEEG to generate forward solutions, especially for sEEG (#1623), under the assumption that MNE's forward module doesn't provide the same support for sEEG/internal potential as OpenMEEG (?).
Assuming I have the normal BEM surfaces, and additionally the positions of the sEEG electrodes, I'm able to get s/M/EEG lead fields, more or less by hand using the command line OpenMEEG tools.
Is this already available in MNE? I could find no reference to OpenMEEG in MNE. If I make a PR for this, which is preferred: a) calling the command-line utils, or b) using the Python module?
There is no integration of OpenMEEG into MNE yet but this is planned for 2015. There will be some help coming soon. But any progress you make already will make things go faster.
You should do it via python bindings.
thanks @maedoc !
ping @souravamishra
I've had some trouble constructing OpenMEEG objects directly with the SWIG based API (sometimes segfaults as result), and the examples all seem to use the file-based constructors. Is this normal?
> I've had some trouble constructing OpenMEEG objects directly with the SWIG based API (sometimes segfaults as result),
oops :-/
> and the examples all seem to use the file-based constructors. Is this normal?
have a look at:
https://raw.githubusercontent.com/openmeeg/openmeeg_sample_data/master/compute_leadfields.py
> have a look at:
This is what I meant: objects like `HeadModel` are constructed by passing the name of a file, where I would have expected a Python API allowing them to be constructed explicitly via `__init__` & regular methods.
OTOH this is probably the most well tested way of using OpenMEEG no?
> python bindings
I won't belabor this a third time, but won't this introduce a binary dependency among platform, Python & numpy versions? It could be a headache for distribution, compared to just "make sure OpenMEEG is in $PATH".
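For the `$PATH`-based approach, the distribution check is indeed simple; a minimal sketch (the tool names are the OpenMEEG CLI utilities discussed in this thread):

```python
import shutil

def find_openmeeg_tools(tools=("om_assemble", "om_minverser", "om_gain")):
    """Map each OpenMEEG CLI tool name to its full path on $PATH,
    or None if the tool is not installed / not on $PATH."""
    return {tool: shutil.which(tool) for tool in tools}
```

A missing entry (value `None`) would then produce a clear "please install OpenMEEG" error instead of a cryptic binary-compatibility failure.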
I'm rounding out a first version, assuming the 3 layer BEM, and will send a PR to know what is preferred for API etc.
Also, capturing the stdout of a subprocess is easier than dealing with a shared library that reports errors via `cout << "error"`. But I don't know how OpenMEEG handles that.
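For the subprocess route, capturing output is a one-liner with the stdlib; a generic sketch (only stdlib here, nothing OpenMEEG-specific):

```python
import subprocess
import sys

def run_tool(argv):
    """Run a command-line tool, capturing stdout/stderr as text.
    Raises CalledProcessError (with stderr attached) on a non-zero exit."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

# e.g. run_tool(["om_assemble", ...]) once OpenMEEG is on $PATH
```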
point taken although it's a bit disappointing to have a file based approach. If you can have everything in memory it's nicer and more efficient.
I did not think about the std::cout usage... no experience with that.
@souravamishra should soon be able to help improve the bindings.
don't hesitate to type the few lines you would like to be able to write to do the job.
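In that spirit, here is one possible shape for an in-memory API. This is a purely hypothetical mock, implemented as a stub over numpy arrays; nothing in it calls the actual OpenMEEG bindings, it only illustrates the constructor-based style being asked for:

```python
import numpy as np

class HeadModel:
    """Hypothetical in-memory head model: surfaces passed as arrays,
    not as file names (mock, not the real OpenMEEG API)."""
    def __init__(self):
        self.surfaces = []  # list of (vertices, triangles, conductivity)

    def add_surface(self, vertices, triangles, conductivity):
        vertices = np.asarray(vertices, float)   # (n_vertices, 3)
        triangles = np.asarray(triangles, int)   # (n_triangles, 3)
        self.surfaces.append((vertices, triangles, float(conductivity)))
        return self

    @property
    def n_surfaces(self):
        return len(self.surfaces)

# the "few lines" one would like to be able to write:
# hm = (HeadModel()
#       .add_surface(v_brain, t_brain, 0.33)
#       .add_surface(v_skull, t_skull, 0.0042)
#       .add_surface(v_scalp, t_scalp, 0.33))
```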
@maedoc let me know if you have some data I could treat with
Test, not treat (silly mobile)
@Eric89GXL there are several epilepsy s/M/EEG data sets but would require clinical release. Either I could run tests myself or ask the right person about how to pass the data along. What do you have in mind?
Ultimately we need to have some unit tests, and ideally an example showing the functionality. Maybe we can use the sample dataset's structurals to fake it so we don't have to have real data -- and then you could validate it actually works for your restricted datasets. But at least I'd need some channel definitions. Not sure how the locations are stored in your data or how they will interface with OpenMEEG yet, so we'll have to figure it out.
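For fake channel definitions, something as simple as a synthetic sEEG shaft could do; a sketch with made-up geometry (entry/target points and contact count are illustrative, not tied to any dataset):

```python
import numpy as np

def fake_seeg_shaft(entry, target, n_contacts=8):
    """Linearly spaced contact positions (in meters) along a straight
    electrode shaft from an entry point to a target point."""
    entry = np.asarray(entry, float)
    target = np.asarray(target, float)
    frac = np.linspace(0.0, 1.0, n_contacts)[:, np.newaxis]
    return entry + frac * (target - entry)  # shape (n_contacts, 3)
```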
If simulated data would work, then I'll be able to put it together quickly.
@agramfort wrt binding OpenMEEG I think a Cython module which provides a minimal in-process API would be nicer than the SWIG approach which assumes head model & surfaces are stored in files. It would statically link OpenMEEG libs and be packed into per platform wheels. wdyt?
I'd love this but question is who will do this? do you have time to give this a try?
otherwise it will have to wait I hire someone to do this... might be in a few months... or more.
I'm at a point where I need this to work, and I can implement what I mentioned above over the next few weeks, but I think it's worth some discussion of use cases. My current workflow is something like
- format data from FreeSurfer recon
  - generate interfaces with `mne_watershed_bem`
  - reduce vtx/face count with `mris_downsample -d 0.1`
  - load each BEM surface, `lh.pial` and `rh.pial`, w/ nibabel, write out brainvisa format
  - generate subcortical grid sources `subcortical.dip` from `aseg.mgz`
- align sensors with T1 space, move surfaces & subcort grid sources to sensor coord sys
- make volume conduction model
  - `om_assemble` head model
  - `om_minverser` head matrix
- make source models
  - `om_assemble -DipSourceMat head_model.{geom,cond} subcortical.{dip,dsm}`
  - `om_assemble -SurfSourceMat head_model.{geom,cond} cortical-$hemi.{tri,ssm}`
- make sensor models
  - `om_assemble -h2em head_model.{geom,cond} EEG.sensors EEG.h2em`
  - `om_assemble -h2mm head_model.{geom,cond} MEG.sensors MEG.h2mm`
  - `om_assemble -h2ipm head_model.{geom,cond} $seeg_sensor_file seeg.h2ipm`
- make seeg gain matrices
  - `om_assemble -ds2ipm head_model.{geom,cond} cortical-$hemi.dip $seeg_sensor_file seeg-$hemi.ds2ipm`
  - `om_gain -InternalPotential head-inv.mat cortical-$hemi.ssm seeg.h2ipm seeg-$hemi.ds2ipm seeg-$hemi.gain.mat`
- make meg/eeg gain matrices
  - `om_gain -$modality head-inv.mat $source $sensor_model.* $sensor_model.gain.mat`
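If the command-line route were chosen, the steps above reduce to building argv lists for a subprocess wrapper; a sketch for the final gain step (the file names are only illustrations of the pattern above, not a fixed convention):

```python
def om_gain_cmd(modality, head_inv, source_mat, sensor_model, out_gain):
    """Build the argv list for an om_gain call, ready for subprocess.run,
    e.g. modality='EEG' -> ['om_gain', '-EEG', head_inv, ...]."""
    return ["om_gain", f"-{modality}", head_inv, source_mat, sensor_model, out_gain]
```

Building argv lists (rather than shell strings) keeps paths with spaces safe and makes the calls easy to unit test without OpenMEEG installed.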
This all assumes that FreeSurfer recon is done and mne_watershed_bem is available.
Is that reasonable in general for MNE? A more minimal assumption is that sensor aligned
BEM surfaces can be provided.
At least, I can start on wrapping some of the C++ APIs which would be required to replace the om_* calls.
wdyt?
thx @maedoc for the update.
the first section, formatting data from the FreeSurfer recon, should ideally be done using only MNE tools, and we should not rely on brainvisa formats, instead passing numpy arrays to openmeeg directly in memory in python.
cc @papadop @eolivi @mclerc
> we should not rely on brainvisa formats by passing numpy arrays
I don't want to replicate that here, it's just what I was doing in the past. Ideally we'd use nibabel to load the formats, etc.
let's make it work first then we'll make it nice :)
I've not done anything here since we're focused on sEEG currently, with lots of subcortical structures where we don't have source orientation info, so we use a simple 1/r^2 rule 🙉.
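For reference, the 1/r^2 fallback mentioned here amounts to a distance-only gain between orientation-free sources and sEEG contacts; a numpy sketch of that crude rule (not what OpenMEEG computes):

```python
import numpy as np

def inverse_square_gain(sources, contacts):
    """gain[i, j] = 1 / ||contact_i - source_j||^2, an orientation-free
    attenuation rule; a crude heuristic, not a proper forward model."""
    sources = np.asarray(sources, float)    # (n_sources, 3)
    contacts = np.asarray(contacts, float)  # (n_contacts, 3)
    diff = contacts[:, np.newaxis, :] - sources[np.newaxis, :, :]
    dist2 = np.sum(diff ** 2, axis=-1)
    return 1.0 / dist2
```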
OpenMEEG for mne-python is on our agenda, as soon as we get the resources...
Maureen
@mclerc @agramfort friendly question: have there been any updates on MNE-Python <-> OpenMEEG?
no :( :(
@papadop and @mclerc it would really be nice to move forward here. We've had some engineering budgets to do this for some months now. Any progress on this?
FYI there is some progress being pushed here in https://github.com/openmeeg/openmeeg/pull/443 and https://github.com/conda-forge/openmeeg-feedstock/pull/18
... the TL;DR of those is basically that support will hopefully be added to MNE-Python in the next couple of months, starting with:
1. Just macOS + Linux via conda-forge
2. Add support for Windows via conda-forge
3. Add support for PyPI
As part of (1) we'll make a PR to MNE-Python to hopefully wrap OpenMEEG nicely, such that you can pass it standard MNE-Python objects and get back an instance of Forward just like with make_forward_solution (and maybe even via that function, we'll see!).
it's now possible thanks to the hard work of @larsoner to install openmeeg with conda or pip on all 3 platforms 🎉
it means that the https://github.com/mne-tools/mne-openmeeg project can be resurrected to offer alternative forward models for MEG/EEG and, importantly, iEEG.
@maedoc do you have a bit of bandwidth to help here? Otherwise I'll restart the ball after my summer break.
Thanks to all of you who put a lot of effort into putting OpenMEEG into better shape, much appreciated! (my role was limited to pinging @papadop when his notifications had broken down...) The MNE perspective sounds appealing!
iEEG is an important use case for our team. I could look at this after the summer break also.
Thanks for this great effort - excited about it
However, I get this error when I follow the instructions on https://github.com/mne-tools/mne-openmeeg:

mne-openmeeg still needs to be written against the new openmeeg binaries.
I'll have a look later this month
there is nothing to see here at this point