
New Code Proposal: Add `ARMBR` blink artifact removal method to `mne.preprocessing`

Open ludvikalkhoury opened this issue 6 months ago • 3 comments

Describe the new feature or enhancement

I propose adding a new blink artifact removal method called Artifact-Reference Multivariate Backward Regression (ARMBR) to the mne.preprocessing module.

Key features:

  • Removes blinks via multivariate linear regression using a binarized blink reference
  • Works with minimal training data
  • Requires no EOG channels
  • Supports both offline and real-time/BCI pipelines
  • Fully integrated with MNE: .fit(), .apply(), and .plot() methods using Raw objects
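For readers unfamiliar with the core idea, here is a rough numpy sketch of removing a blink component by regressing a binarized reference out of multichannel EEG. This is an illustrative placeholder only, not the published ARMBR algorithm (which involves additional steps such as constructing the reference itself); the function name and signature are invented for this sketch.

```python
import numpy as np

def remove_blinks_by_regression(eeg, blink_ref):
    """Regress a binarized blink reference out of multichannel EEG.

    Illustrative sketch only: estimate, per channel, the least-squares
    weight of the blink reference and subtract its contribution.
    eeg: (n_channels, n_samples); blink_ref: (n_samples,) of 0/1.
    """
    ref = blink_ref - blink_ref.mean()            # center the reference
    data = eeg - eeg.mean(axis=1, keepdims=True)  # center each channel
    # Per-channel least-squares weight: w_c = <x_c, r> / <r, r>
    weights = data @ ref / (ref @ ref)
    # Subtract each channel's estimated blink contribution
    return eeg - np.outer(weights, ref)
```

After this step, the centered residual of every channel is exactly orthogonal to the centered reference, which is what a single-regressor least-squares fit guarantees.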

This method is described in the paper: Alkhoury, L., Scanavini, G., Louviot, S., Radanovic, A., Shah, S. A., & Hill, N. J. (2025). Artifact-reference multivariate backward regression (ARMBR): A novel method for EEG blink artifact removal with minimal data requirements. Journal of Neural Engineering, 22(3), 036048. https://doi.org/10.1088/1741-2552/ade566

The code is MNE-compatible and includes:

  • A class ARMBR for MNE pipelines
  • Full test coverage using mne.datasets.sample
  • Documentation and plotting utilities
  • Example scripts for integration

I am happy to submit this toolbox as a PR for inclusion in mne.preprocessing.

Thanks for considering this contribution!

Describe your proposed implementation

The proposed feature would be implemented as a new class called ARMBR, located in the mne.preprocessing module (e.g., mne/preprocessing/armbr.py), similar to the structure used for ICA and rASR.

The class would expose the following methods:

  • .fit(raw, ...): trains the projection matrix using EEG segments from raw data
  • .apply(raw, ...): applies blink artifact suppression to the raw data in-place
  • .plot(): visualizes EEG before/after suppression
  • .plot_blink_patterns(): visualizes spatial blink components via topomap

In addition:

  • The class is fully compatible with MNE Raw objects and uses Annotations (e.g., armbr_fit) to define training segments.
  • The underlying core function run_armbr() will be included in the same module for modular use.
  • A dedicated test file (test_armbr.py) using MNE's sample dataset is included.
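The fit/apply interface described above could look roughly like the skeleton below. Numpy arrays stand in for mne.io.Raw objects so the sketch stays self-contained; the method names come from the proposal, but the class name ARMBRSketch and the plain-regression internals are hypothetical placeholders, not the published algorithm.

```python
import numpy as np

class ARMBRSketch:
    """Hypothetical skeleton mirroring the proposed fit/apply interface.

    The internals are a single-regressor least-squares placeholder;
    the real ARMBR method derives its own blink reference and spatial
    patterns. Here fit() and apply() operate on the same recording.
    """

    def fit(self, eeg, blink_ref):
        # Learn per-channel weights of the (centered) blink reference.
        ref = blink_ref - blink_ref.mean()
        data = eeg - eeg.mean(axis=1, keepdims=True)
        self.ref_ = ref
        self.weights_ = data @ ref / (ref @ ref)
        return self

    def apply(self, eeg):
        # Subtract the learned blink contribution from the data.
        return eeg - np.outer(self.weights_, self.ref_)
```

In an actual MNE integration, fit() would instead learn a channel-space projection so that apply() can run on new data (e.g., in a real-time pipeline) without needing the training-time reference signal.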

Let me know if the maintainers prefer a different naming convention or integration strategy.

Describe possible alternatives

A similar implementation is available in the GitHub repository (https://github.com/S-Shah-Lab/ARMBR) that accompanied the original publication of the paper. The proposed integration refactors that version to align with MNE’s coding standards, interface conventions, and documentation style.

Additional context

No response

ludvikalkhoury avatar Jul 15 '25 21:07 ludvikalkhoury

Hello! 👋 Thanks for opening your first issue here! ❤️ We will try to get back to you soon. 🚴

welcome[bot] avatar Jul 15 '25 21:07 welcome[bot]

Hi, thanks for offering to add your work to MNE-Python! You may have already seen the discussion from when we considered adding ASR to MNE-Python, but in case not: https://github.com/mne-tools/mne-python/pull/9302#issuecomment-1287323951. There was also a similar discussion around adding LOF; see especially this, this, and this.

TL;DR: we have a pretty high threshold for adding new preprocessing algorithms, because:

  1. MNE is already a big package with more work to do than available maintainer time
  2. to some degree, inclusion in MNE acts as a "stamp of approval" that gives users confidence in a method, so we want to be certain that this is warranted. Otherwise, we risk users applying new algorithms with whatever the default settings are, even when those defaults are not scientifically appropriate for their data/context.

The upshot is that we tend to require at least some of the following:

  • the method is published in a peer-reviewed context (this one is a "hard" requirement)
  • the method is "widely used"
  • the method is demonstrably better than what is already available in MNE, at least in certain contexts (e.g., "works really well for infant data" or "works even when there is very little data, where other methods would fail catastrophically" or "works on raw data, doesn't require epoching"... hopefully you get the idea)
  • one of our maintainers understands the method thoroughly, and commits to maintaining the code and examples for it
  • whatever parameterizations exist for an algorithm are clearly explained and demonstrated; sensible defaults exist that will work for most data; notable deviations from the default are mentioned (for example, maxwell filter's internal order defaults to 8, but we recommend in the docs to use 6 as a starting point for infant data).

This doesn't mean we can't add ARMBR to MNE! But since it's newly published, the "widely used" criterion won't be met (yet), so that means we'll want to see very clear evidence of superior performance in at least some contexts/data types. As far as maintenance, someone on our maintainer team will need to read the paper and the code, but a very clear example/tutorial (e.g. in a notebook/colab/blog post) that explains how the method works can go a long way toward speeding that up.

Finally: adding to MNE-Python is not the only option! The method could start in MNE-Incubator (where we relax some of the criteria above) and maybe migrate to MNE-Python later if it proves to be popular / widely used. Sorry for the long post; happy to continue the discussion here if you have any questions/comments/replies.

drammock avatar Jul 16 '25 16:07 drammock

In our ARMBR paper, we showed that the proposed method outperforms both the SSP and regression-based approaches currently available in MNE on the datasets we tested, especially in low-data scenarios. That said, we fully understand and respect MNE's policies, and will keep our implementation of ARMBR MNE-compatible in our repository: https://github.com/S-Shah-Lab/ARMBR.

ludvikalkhoury avatar Jul 16 '25 18:07 ludvikalkhoury