mne-python
Adds MRI gradient removal to preprocessing
Reference issue
Fixes #10466
What does this implement/fix?
Adds a preprocessing GradientRemover class that can be used to perform MRI-EEG gradient artifact correction with templates.
Additional information
This just ports my local, working code into MNE, so I'm sure there are things you'd like changed to better integrate it into the code base; I'm all ears on other improvements. I'm also not sure of the best way to provide data with example corrections. One note: I left the get_tr_* functions in so that you can quickly view a few sample corrections without correcting the whole data set; a normal user would almost certainly never use them. The tests check argument validity and certain failure modes rather than validating template behavior.
Developed in collaboration with @pmolfese
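For reviewers unfamiliar with the technique, template-based gradient correction can be sketched in plain NumPy. This is only an illustration of the approach, not the PR's GradientRemover API; the function name and signature here are hypothetical:

```python
import numpy as np

def remove_gradient_template(data, tr_onsets, tr_len):
    """Sketch of template-subtraction gradient correction.

    data      : (n_channels, n_samples) array of EEG recorded in the scanner
    tr_onsets : sample indices where each TR's gradient burst starts
    tr_len    : number of samples covered by one TR
    """
    # Stack one epoch per TR: shape (n_trs, n_channels, tr_len)
    epochs = np.stack([data[:, s:s + tr_len] for s in tr_onsets])
    # The artifact repeats every TR, so the across-TR average is a template
    template = epochs.mean(axis=0)
    cleaned = data.copy()
    for s in tr_onsets:
        # Subtract the template at each TR onset
        cleaned[:, s:s + tr_len] -= template
    return cleaned
```

In practice implementations use sliding or windowed templates so that slow drifts in the artifact are tracked, but the core idea is the epoch-average subtraction above.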
Let's try to avoid the object then, thanks.
We actually really need it to integrate for real-time. Would you want two implementations? One for processing offline and one for online?
Let's avoid two implementations then. We have other classes in preprocessing, e.g. ICA, so why not. But can you make sure the example you will add to demo the feature documents the API of the new object? :pray:
Sure, we can add examples. At the moment they're missing because they require data, and I'm not sure of the best way to package and add data to the project. But once the data is available we will include some demos and more detail; it will make more sense once there are plots and such.
We can put the data on osf like we do for other datasets
It's even better if you @jbteves put it on OSF under your own org/group account. Ours (MNE) is getting close to their free storage limit...
@jbteves can you share some data or simulate some data so we can test and iterate on the proposed code? I can find some time to help but I need to start to work on a working example.
@agramfort - We are close to sharing data in a permanent way. We are currently trying to resolve an issue where the EGI raw import seems to get the TR timing incorrect compared to the EGI event-export information. Our code currently works around this by asking the user to provide a list of TR times, but the ideal solution would obviously be to read the TR timing directly from the EEG file.
Sorry, I was out for a while. I took a look, and I believe what is happening is a floating-point rounding error. The gradient should not show up on the "wrong" sample unless the gradients are drifting in time (which could be the case, but shouldn't be). In my experience with the EGI format this happens if you use the events from an mff file, because events appear to be converted from a datetime into total seconds (floating-point) rather than tracked as separate integer seconds and microseconds. I've confirmed this by manually examining the "evt" files that NetStation can generate and looking through some of the XML. @agramfort and @larsoner, is there a way we could modify the event-reading behavior for EGI files to store events as a number of samples, or perhaps as an integer number of seconds plus an integer number of microseconds? It looks as if brainvision.py stores these as an integer number of samples, which is preferable from my POV (though I could be wrong). mffpy appears to store event times as datetimes, which should allow fetching the microseconds as an integer and thus exact sample calculation, and it is already a dependency, so perhaps that method of recording events would work as a drop-in substitute?
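The drop-in idea can be sketched like this (a hypothetical helper under assumed names; this is not mffpy's or MNE's API, just the integer-arithmetic conversion being proposed):

```python
from datetime import datetime, timedelta

def event_sample(meas_date, event_time, sfreq):
    """Convert an event datetime to an exact sample index.

    Works from the integer day/second/microsecond fields that a
    timedelta carries, instead of one floating-point total-seconds
    value that can round on long recordings.
    """
    delta = event_time - meas_date  # timedelta keeps integer fields
    micros = (delta.days * 86_400 + delta.seconds) * 1_000_000 + delta.microseconds
    # Integer microseconds -> nearest sample at sampling rate sfreq
    return round(micros * sfreq / 1_000_000)
```

For example, an event 12.345 s after the start of a 1000 Hz recording lands exactly on sample 12345, with no dependence on how large the absolute timestamps are.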
> @agramfort and @larsoner is there a way we could modify the event-reading behavior for egi files to store as number of samples, or perhaps integer numbers of microseconds + integer number of seconds?
It sounds like you're saying that our current way of storing events (annotation onsets?) for EGI loses precision. If I'm reading that correctly, then yes, we should ideally fix it to keep an int number of seconds plus an int number of microseconds.
I'm not sure how nicely that will play with the Annotations class, since onset is a float, but since onsets are relative to the meas_date of the recording, in practice they may avoid these problems (the rounding issues come from the integer number of seconds being quite large in absolute time).
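A quick sanity check of float64 resolution (my numbers, not from the thread) supports the idea that small, meas_date-relative onsets are safe:

```python
import numpy as np

# np.spacing gives the gap to the next representable float64. Interpreted
# as seconds, microsecond precision survives while that gap is < 1e-6 s.
for seconds in (3600.0, 86_400.0, 1.7e9):  # 1 hour, 1 day, ~Unix-epoch scale
    print(f"{seconds:>14.1f} s -> ULP {np.spacing(seconds):.2e} s")
```

Even at epoch scale (~1.7e9 s) the spacing is about 2.4e-7 s, so float64 technically keeps microsecond precision there too, but with only a small margin; onsets of minutes-to-hours relative to meas_date have orders of magnitude more headroom.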
@jbteves can I have a file and a script to replicate the problem?
This new functionality is probably related to the fix_stim_artifact function, which uses a similar technique to get rid of the nerve-stimulation artifact.
@agramfort sorry for the delay. I can get you a file but due to NIH policy I must individually add your email address for you to view it. Please feel free to send this to me privately via [email protected]
Please see this notebook which pairs with the existing file: https://gist.github.com/jbteves/ccb67171322413b3e76663ecc99f0602
@jbteves I was expecting the gist to contain a full code snippet that demos the use of this PR, i.e. with GradientRemover etc.
Can you update your gist? Basically, I am offering to do a pass on this PR, but I need a working gist to make sure I don't break anything.
@agramfort sorry for the confusion. I'll put together a more complete demo for you next week. I thought you were asking for a demo of the mis-timings.
Actually, I lied: the notebook is updated to use data that isn't bad. It turns out I had mistakenly uploaded a botched data set; sorry about that. You can see at the end that the corrected data is returned as a NumPy array.