EEG-Clean-Tools
Bad-by-dropout channels ignored during re-referencing?
Was reading through the PREP re-referencing code while working on PyPREP, and noticed something that looks to be a bug in `updateBadChannels.m`:

https://github.com/VisLab/EEG-Clean-Tools/blob/3ed337e83dfaaad2c8e3ecb671a44a21c5b288c0/PrepPipeline/utilities/updateBadChannels.m#L1-L23
Specifically, dropout channels aren't included in the `ref.all` union at the end, meaning that they're never excluded/interpolated during re-referencing. Additionally, `ref.badChannelsFromRansac` is included in the union twice, which makes it look like bad-by-dropout channels were meant to be included but got copy/pasted incorrectly.
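To make the suspected slip concrete, here's a minimal sketch; apart from `ref.all` and `ref.badChannelsFromRansac`, the field names (e.g. `ref.badChannelsFromDropOuts`) are my assumptions based on PREP's `badChannelsFrom*` naming convention, not the actual code:

```matlab
% Toy struct of channel-index vectors (illustrative values only)
ref.all = [1 3];
ref.badChannelsFromRansac = [3 7];
ref.badChannelsFromDropOuts = 12;  % assumed field name for dropout channels

% Current behaviour (simplified): Ransac channels are unioned in twice,
% while dropout channels never make it into ref.all at all.
ref.all = union(ref.all, union(ref.badChannelsFromRansac, ref.badChannelsFromRansac));

% Presumed intent: the second term should be the dropout set.
ref.all = union(ref.all, union(ref.badChannelsFromRansac, ref.badChannelsFromDropOuts));
```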
Thanks in advance for taking a look!
As an addendum to this, I also noticed that bad-by-dropout channels aren't added to the initial set of "bad" channels during re-referencing along with bad-by-NaN and bad-by-flat channels:
https://github.com/VisLab/EEG-Clean-Tools/blob/3ed337e83dfaaad2c8e3ecb671a44a21c5b288c0/PrepPipeline/utilities/robustReference.m#L37-L42
Since the initial average referencing would add signal to the previously flat "dropout" regions, however, wouldn't excluding them from that first pre-reference noisy-channel detection pass prevent them from ever being detected later, except perhaps indirectly by the bad-by-correlation or bad-by-RANSAC detectors?
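A toy demonstration of the concern (plain MATLAB, not PREP code): a fully dropped-out channel stops looking flat once an average reference is subtracted.

```matlab
% 4 channels x 1000 samples; channel 2 has dropped out completely
data = randn(4, 1000);
data(2, :) = 0;

% Subtract the naive average reference (implicit expansion, R2016b+)
referenced = data - mean(data, 1);

std(data(2, :))        % 0 before referencing: trivially detectable as flat
std(referenced(2, :))  % nonzero afterwards: amplitude-based checks can miss it
```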
Oh, and related to these: in the same `updateBadChannels` code above, bad-by-SNR channels aren't added to `ref.all` every loop either.
On its face this wouldn't be an issue, since bad-by-SNR channels are just channels bad by both HF noise and low correlation, but a potentially unexpected consequence of this is that bad-by-SNR channels are added to the initial `referenceOut.badChannels` state along with bad-by-NaN and bad-by-flat channels, but are then immediately removed from the set of all bad channels on the first pass of `updateBadChannels`.
In one of our test files, there's a channel that's initially bad-by-SNR but no longer bad by any metric after initial referencing. As a result, it ends up being considered good enough to use for estimating the signals of bad channels during interpolation, while still being considered too bad to include in the average reference calculation! I'm not sure what the correct behavior should be here, but this seems unintentional.
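Reduced to toy set arithmetic, the sequence of states looks something like this (channel 5 stands in for the bad-by-SNR channel; the variable names are illustrative, not PREP's):

```matlab
% Channel 5 is bad-by-SNR before referencing, clean by every metric after
unusableChannels = 5;                       % interpolated unconditionally
badChannels = union([], unusableChannels);  % initial bad set includes it

% First update pass: SNR channels aren't re-added to the union, so
% channel 5 silently drops out of the "all bad" set...
badChannels = setdiff(badChannels, 5);

% ...leaving an inconsistent state: absent from the reference-exclusion
% set, yet still slated for interpolation via unusableChannels.
disp(badChannels)       % empty
disp(unusableChannels)  % 5
```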
The `updateBadChannels` problem mentioned at the beginning of this thread is a real issue and will be fixed in version 0.56.0. (Thanks for finding it!)
I don't think the second issue mentioned here is an actual bug; it's more of a "feature". Robust referencing is a "chicken-and-egg" situation: you can't remove additive noise without a good "true" average reference, but you can't get a "true" average reference without removing additive noise. The problem is how to get the process started.
The second thing to notice here is that identifying "bad" channels is not exactly the same thing as getting the channels needed to compute a good reference. The phenomenon you mentioned is one of the things that PREP is trying to accomplish --- not rejecting channels that shouldn't necessarily be rejected.
The initial set only rejects channels that are "globally" bad --- that is, channels with globally large amplitudes, poor overall correlation, or signals containing NaNs or zeros. Drop-out detection, by contrast, looks for low amplitude in a sufficient number of windows, and that can be sensitive to the removal of additive noise. RANSAC is also very unstable in the presence of noise, so it is not used in the initial setup either.
On each iteration, a completely new set of bad channels is computed once the process has started. It is not uncommon for channels that were initially bad to be "okay" once the robust reference is subtracted out.
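As a rough illustration of that iteration, here's a minimal, self-contained sketch (not the actual `robustReference.m` logic; the stand-in detector just flags high-variance channels):

```matlab
% 8 channels x 1000 samples; channel 3 is ~10x noisier than the rest
data = randn(8, 1000);
data(3, :) = 10 * randn(1, 1000);

badChannels = [];  % seed with the "globally bad" set (empty in this toy)
for iter = 1:10
    good = setdiff(1:size(data, 1), badChannels);
    referenced = data - mean(data(good, :), 1);  % reference from good channels only
    chanStd = std(referenced, 0, 2);
    newBad = find(chanStd > 3 * median(chanStd))';  % crude stand-in detector
    if isequal(newBad, badChannels), break; end     % converged
    badChannels = newBad;  % recompute the bad set from scratch each pass
end
```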
> The second thing to notice here is that identifying "bad" channels is not exactly the same thing as getting the channels needed to compute a good reference. The phenomenon you mentioned is one of the things that PREP is trying to accomplish --- not rejecting channels that shouldn't necessarily be rejected.
If this is in reference to my third post about bad-by-SNR, I should clarify: what we observed was PREP rejecting a channel that shouldn't necessarily be rejected. The bad-by-SNR channel was no longer bad by any metric after the initial re-reference, but it was still interpolated at the end because it was included in the list of "unusable" channels (which get interpolated no matter what). If the channel remains "good" after initial average referencing, and it's not inherently unusable in the same way that flat and NaN-containing channels are, wouldn't we want to retain it during final interpolation?
> The initial set only rejects channels that are "globally" bad --- that is, channels with globally large amplitudes, poor overall correlation, or signals containing NaNs or zeros.
When we reimplemented this code for PyPREP, the MATLAB PREP code seemed to only exclude bad-by-NaN, bad-by-flat-signal, and bad-by-SNR channels (i.e., bad by both high-frequency noise ratio and by having more than 1% low-correlation windows). Were bad-by-deviation channels meant to be included in that initial set as well?
Austin, Thanks for the clarification and your patience. I will look at it again, but I must apologize in advance that it won't be an immediate response. Kay