mne-python
ENH: sEEG and ECoG sensor sizes are too large by default, should be configurable
Describe the new feature or enhancement
The `mne.viz.Brain` functionality has improved a lot lately, and I have a request to make it even better. The current size of the sensors when you use `brain.add_sensors()` on an ECoG or sEEG visualization is quite a bit bigger than the actual sensors are in reality, which makes it difficult to appreciate where they are in space and makes them overlap in a way that is not aesthetically pleasing. I have two suggestions, one really easy, and one maybe more complex.
- I would like to suggest an overall change to the global default size for sEEG and ECoG electrodes in this visualization.
- I suggest another keyword argument to `brain.add_sensors()` that would allow adjusting the electrode size on an individual basis instead of for the whole montage. This can be a nice way of visually showing the size of different effects on each electrode (for example, making the electrode bigger if its response is larger).
Describe your proposed implementation
- For 1, I propose to change the default size in `defaults.py`: for `defaults['coreg']['ecog_scale']` and `['seeg_scale']`, I suggest `2e-3` instead of `5e-3`. Typically ECoG electrodes might be anywhere from 1-2.5 mm in diameter, and sEEG electrodes closer to 0.8-1 mm. The spacing between contacts can vary from a few millimeters to a centimeter, so the scale of `5e-3` (5 mm) often will make them overlap for a denser montage. I've already made this change in my own fork.
- For 2, I would add something to `add_sensors()`, but I want to make sure it wouldn't break anything else, so I'm open to alternatives.
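For reference, the (1) change amounts to editing two values. Here is a runnable sketch of the before/after, using a plain dict to stand in for the `DEFAULTS['coreg']` entries in `mne/defaults.py` (only the two relevant keys are mocked up; this is not the full dict):

```python
# Sketch of the proposed defaults change; values are sphere radii in
# meters. The real entries live in DEFAULTS['coreg'] in mne/defaults.py.
coreg_scales = {
    "ecog_scale": 5e-3,  # current default: 5 mm spheres
    "seeg_scale": 5e-3,
}

# Proposed: 2 mm, closer to real contact dimensions (~1-2.5 mm ECoG,
# ~0.8-1 mm sEEG), so dense montages no longer overlap.
coreg_scales.update(ecog_scale=2e-3, seeg_scale=2e-3)

for key, val in coreg_scales.items():
    print(f"{key}: {val * 1e3:g} mm")
```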
Picture attached of what the new (smaller) size looks like:
Describe possible alternatives
N/A
Additional context
No response
Both points sound good to me. FWIW I don't think (2) will actually be too bad since:
- Internally we should have the viz mechanics for it since we scale dig points individually during coreg to show how far from the scalp they are, and
- We already have a public API for setting channel colors individually using the `sensor_colors` arg, so we can probably just have `sensor_sizes` follow the same mechanics, hopefully.
Feel free to jump right to (2) if you want, but (1) could be a separate PR if you want an easier one to start :)
I made the (1) change already so happy to send a PR for that, but you can assign me to (2) if you want!
@alexrockhill including you here if you have any thoughts so I don't break anything. :)
And then this is just a related plug for adding `sensor_sizes` for visualizations, in case folks want to see what that would look like. Even if it's not realistic, you can do cool stuff like this where electrode size is related to its selectivity. This is from a different web viewer I've worked on, where the color represents audiovisual selectivity and size is how predictable the electrodes are from an encoding model. If they're small and black, they are electrodes that were recorded but not very responsive to our task. Here I also have small lines linking electrode channels belonging to the same "device".
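An effect-to-size mapping like the one described above could be sketched as follows (a hypothetical helper for illustration, not existing MNE API; sizes are in meters to match the coreg scales):

```python
import numpy as np

def effect_to_size(effect, min_size=0.5e-3, max_size=3e-3):
    """Linearly map per-electrode effect magnitudes to marker radii.

    Electrodes with the smallest effect get min_size; the largest get
    max_size. Flat (all-equal) effects fall back to the midpoint size.
    """
    effect = np.asarray(effect, dtype=float)
    span = effect.max() - effect.min()
    if span == 0:
        return np.full(effect.shape, (min_size + max_size) / 2)
    norm = (effect - effect.min()) / span
    return min_size + norm * (max_size - min_size)

sizes = effect_to_size([0.1, 0.5, 2.0])
print(sizes)  # smallest effect -> 0.5 mm radius, largest -> 3 mm
```

The resulting array could then be passed as a per-channel `sensor_sizes` argument once that keyword exists.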
Regarding part (2), making sensor sizes configurable, I have a question. I have code working on my fork that adds `sensor_sizes` as a new keyword argument for `brain.add_sensors()`, used the same way that `sensor_colors` is. I was trying to make sure this won't break anything else, but I don't see any other tests for the brain viewer functionality in `mne/tests` or `mne/viz/tests`. It is used in `tutorials/clinical`, however. What's best practice here for this contribution -- should I write a new tutorial, append to an existing one, etc.? @larsoner
Other people might have opinions, but one option could be to add a section to `examples/visualization/brain.py`.
EDIT: Or you could just add it naturally in the `seeg` tutorial; that might be more visited and consequently more helpful. Just add a section that explains how to use this new functionality, like the inflated + flatmaps section that I added when adding that feature with Dora.
> I don't see any other tests for the functionality for the brain viewer stuff in `mne/tests` or `mne/viz/tests`
They're nested a bit deeper, and `git grep` is your friend here:
```
$ git grep "\.add_sensors"
...
mne/viz/_brain/tests/test_brain.py:    brain.add_sensors(info, trans=fname_trans)
mne/viz/_brain/tests/test_brain.py:    brain.add_sensors(info, trans=fname_trans)
mne/viz/_brain/tests/test_brain.py:    brain.add_sensors(info, trans=fname_trans)
mne/viz/_brain/tests/test_brain.py:    brain.add_sensors(proj_info, trans=fname_trans)
...
```
> Other people might have opinions but one option could be to add a section to `examples/visualization/brain.py` ... Or you could just add it naturally in the seeg tutorial, that might be more visited and consequently more helpful.
Yes, if it fits naturally and shows something cool in an existing tutorial or example, please add it there. For example, perhaps this section:
https://mne.tools/stable/auto_tutorials/clinical/30_ecog.html#plot-gamma-power-on-cortical-sensors
Maybe you could show how size could be used instead of or in addition to colors.
> For example perhaps this section:
> https://mne.tools/stable/auto_tutorials/clinical/30_ecog.html#plot-gamma-power-on-cortical-sensors
> Maybe you could show how size could be used instead of or in addition to colors.
+1, that seems like a good place to demo this!
Well shoot, I did run the grep command, but apparently not deep enough; thanks @larsoner! And @alexrockhill and @drammock, thanks for the other suggestions as well. Agree that that looks like a good spot!
If you want to pair code for an hour or so to put together a PR that ports over the changes you already have working locally and plans future changes, let me know; happy to help!