mne-python

interactive sensor-level field patterns browser

Open wmvanvliet opened this issue 5 years ago • 6 comments

Our interactive source estimate viewer is so awesome that it makes me wish for an equally awesome sensor-level viewer. For this, the venerable proprietary "Xfit" program that ships with MEGIN scanners may serve as inspiration.

With MEG, the raw sensor timecourses are quite difficult to interpret: gradiometers and magnetometers measure different aspects of the magnetic field along different orientations. Xfit's choice to make the field patterns the central visualization is a good one. See a short screencast of Xfit in action here:

https://gfycat.com/jointaromatichorsemouse

For MNE-Python, I envision an interface similar to the source viewer, but with the field patterns in place of the PySurfer brain. At the bottom, we have the GFP timecourse.

What would be super nice is the ability to "drop a dipole": similar to clicking a vertex in the source viewer, have the ability to fit a dipole at the currently selected time and see the dipole timecourse.

As an extra bonus, it would be great if we could select a subset of channels, view the field patterns generated by only those sensors, and fit a dipole using only those sensors.

wmvanvliet avatar Oct 19 '20 08:10 wmvanvliet

maybe a challenge for @GuillaumeFavelier ?

really cool video. It helps a lot to demo.

agramfort avatar Oct 19 '20 08:10 agramfort

> really cool video. It helps a lot to demo.

It looks cool! :+1:

When I have the cycles this week, I will prototype something but I might need help to guide me though :sweat_smile:

GuillaumeFavelier avatar Oct 19 '20 10:10 GuillaumeFavelier

Some aspects of this are a special / enhanced case of "showing sensor data with the brain" -- we discussed this before at some point but I can't remember where. Along those lines I would propose (probably in PR-by-PR order):

  1. A `brain.add_sensor_data` method, where MEG sensors and EEG sensors are shown as in `plot_alignment`, but get colors according to their activity level. This is basically "just" parts of `plot_alignment` with some color mapping and time-point updating built on top.
  2. Some sort of surface mapping option that allows you to map MEG sensor data to the MEG helmet (what @wmvanvliet is talking about) and EEG sensor data to the scalp, each with field map lines. This is "just" the functionality of `plot_evoked_field` mapped into the `Brain` object (plus interactivity).
  3. GUI options for toggling these sensors and surface mappings on the fly.

I'd expect each of these to be just 200-300 lines of code. If this makes sense to everyone, @GuillaumeFavelier feel free to move these points to #7162, and then we can discuss what I think is the trickier part:

  • Some way to "place a dipole" and run `mne.fit_dipole` with it

What interaction allows us to define it, clicking? Do we constrain to a surface if a surface is shown? Is it allowed in volume mode? How do we add/delete, etc.? @wmvanvliet hopefully you have some ideas here. It's a great use case but not as straightforward to implement in my mind compared to the other things above.

The idea of using `Brain` to do some sort of interactive dipole fitting is cool, but it almost seems like a sufficiently different use case to justify a separate class or GUI (that uses `Brain`), like we have for coreg. Doing a single (or multi?) interactive dipole fit on the data involves different choices from visualizing already-localized activations, such as BEM/cov/trans/src (whether you want one or not), etc., so it might make sense to separate out the functionality a bit. EDIT: the same applies to sensor subselection for fitting; this is a great case for a different class and UI.

larsoner avatar Oct 19 '20 14:10 larsoner

cc @adam2392 the above three points at least are relevant for sEEG + ECoG as well

larsoner avatar Oct 19 '20 15:10 larsoner

Dipole fitting certainly is a tricky use case (and the main reason Xfit exists) that could use a dedicated GUI.

I don't see how coloring sensors according to their value is very useful. Sure, it is "easy" to implement, but if it's not super useful it's not worth doing. The field maps are the really useful thing. It's an interesting idea to add this to the `Brain` object.

wmvanvliet avatar Oct 19 '20 16:10 wmvanvliet

> I don't see how coloring sensors according to their value is very useful. Sure, it is "easy" to implement, but if it's not super useful it's not worth doing.

It's been our ECoG example for quite some time

larsoner avatar Oct 19 '20 16:10 larsoner

@wmvanvliet I think you added this so closing!

larsoner avatar Sep 25 '23 13:09 larsoner

pretty much. "just" the dipole fitting left :)

wmvanvliet avatar Sep 25 '23 19:09 wmvanvliet