EEG Motor Imagery Deep Learning
English 中文
A research repository on deep learning for electroencephalography (EEG)-based motor imagery (MI), including EEG data processing (visualization & analysis), papers (research and summaries), and deep learning models (reproduction and experiments).
The experiments in this repository are based on MNE-Python, MOABB, Braindecode, and skorch.
You can explore the contents of this repository through the following sections:
- Paper Research
- EEG Data Analysis and Processing
- Experiments
Paper Research
Paper Research currently includes the following papers and datasets:
- Awesome Papers
- Public Datasets
Awesome Papers
2017 Schirrmeister et al. Deep learning with convolutional neural networks for EEG decoding and visualization [paper link] [source code] [reproduce1]
2018 Lawhern et al. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces [paper link] [source code] [reproduce1] [reproduce2]
2018 Sakhavi et al. Learning Temporal Information for Brain-Computer Interface Using Convolutional Neural Networks [paper link]
2019 Dose et al. An end-to-end deep learning approach to MI-EEG signal classification for BCIs [paper link] [source code]
2020 Wang et al. An Accurate EEGNet-based Motor-Imagery Brain Computer Interface for Low-Power Edge Computing [paper link] [source code]
2020 Ingolfsson et al. EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces [paper link] [source code] [reproduce1]
2021 Mane et al. A Multi-view CNN with Novel Variance Layer for Motor Imagery Brain Computer Interface [paper link] [source code]
2022 Altaheri et al. Physics-Informed Attention Temporal Convolutional Network for EEG-Based Motor Imagery Classification [paper link] [source code]
Public Datasets
A list of the public datasets most frequently used in the papers above:
- BCI IV 2a (BCI Competition IV)
Dataset description: BCI Competition 2008 – Graz data set A, 4 classes
Download link: .gdf format or .mat format
- Physionet (Physionet Dataset)
Dataset description: Physionet Database EEG Motor Movement/Imagery Dataset, 2/3/4 classes
Download link: .edf format
For deep learning experiments, it is recommended to use the MOABB or Braindecode dataset interfaces, which make the datasets easier to download and faster to process.
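For example, a minimal sketch of loading BCI IV 2a through Braindecode's MOABB wrapper (the dataset name "BNCI2014001" and subject 1 are chosen here just for illustration):

```python
# Load BCI IV 2a (known in MOABB as "BNCI2014001") for subject 1
# via Braindecode's MOABB wrapper; the data is downloaded on first use.
from braindecode.datasets import MOABBDataset

dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1])
print(dataset.description)  # one row per recording run
```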
EEG Data Analysis and Processing
- Data Load and Analysis
Using the MNE-Python library with Jupyter Notebook to analyze demo EEG data from BCI IV 2a, including loading data, plotting signals, extracting events, etc.
For details and code, see data_load_visualization.ipynb; for more examples, see the MNE-Python tutorials.
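As a minimal sketch (assuming a local BCI IV 2a recording such as A01T.gdf; the file name is only for illustration), loading and inspecting the data with MNE-Python might look like:

```python
import mne

# Read one BCI IV 2a recording (.gdf format) into memory
raw = mne.io.read_raw_gdf("A01T.gdf", preload=True)
print(raw.info)  # channels, sampling rate, etc.

# Extract events from the annotations stored in the file
events, event_id = mne.events_from_annotations(raw)

# Plot the raw signal and the event structure
raw.plot(n_channels=22, duration=10)
mne.viz.plot_events(events, sfreq=raw.info["sfreq"])
```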
- Data Processing
Using the MNE-Python library with Jupyter Notebook to process demo EEG data from BCI IV 2a, including filtering, resampling, segmenting data, etc.
For details and code, see data_processing.ipynb; for more examples, see the MNE-Python tutorials.
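A minimal sketch of these steps (the 4–38 Hz band, the 128 Hz target rate, and the epoch window are illustrative choices, not necessarily the ones used in the notebook):

```python
import mne

raw = mne.io.read_raw_gdf("A01T.gdf", preload=True)

# Band-pass filter to a frequency range typically used for motor imagery
raw.filter(l_freq=4.0, h_freq=38.0)

# Downsample to reduce data size
raw.resample(128)

# Segment the continuous signal into trials around the cue events
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.5, tmax=4.0, baseline=None, preload=True)
print(epochs)
```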
Experiments
This repo is based on Python 3.10. Before you run experiments, install the dependencies first:
$ pip install -r requirements.txt
Then you can use -h to get the usage:
$ python .\main.py -h
usage: main.py [-h] [--dataset {bci2a,physionet}] [--model {EEGNet,EEGConformer,ATCNet,EEGInception,EEGITNet}] [--config CONFIG] [--strategy {cross-subject,within-subject}] [--save]
optional arguments:
-h, --help show this help message and exit
--dataset {bci2a,physionet}
data set used of the experiments
--model {EEGNet,EEGConformer,ATCNet,EEGInception,EEGITNet}
model used of the experiments
--config CONFIG config file name(.yaml format)
--strategy {cross-subject,within-subject}
experiments strategy on subjects
--save save the pytorch model and history (follow skorch)
If you want to run experiments on the BCI IV 2a dataset using the EEGNet model, simply run:
$ python .\main.py --dataset bci2a --model EEGNet
It will use the default config in bci2a_EEGNet_default.yaml and the default within-subject strategy; you can use --config to specify a different config file. You will then get the output accuracy and result.log in the ./save folder:
[2024-07-30 17:30:51] Subject1 test accuracy: 70.4861%
[2024-07-30 17:32:17] Subject2 test accuracy: 53.8194%
[2024-07-30 17:33:40] Subject3 test accuracy: 79.1667%
[2024-07-30 17:35:02] Subject4 test accuracy: 62.8472%
[2024-07-30 17:36:24] Subject5 test accuracy: 68.4028%
[2024-07-30 17:39:13] Subject6 test accuracy: 50.6944%
[2024-07-30 17:40:35] Subject7 test accuracy: 72.2222%
[2024-07-30 17:42:00] Subject8 test accuracy: 64.5833%
[2024-07-30 17:43:23] Subject9 test accuracy: 70.1389%
[2024-07-30 17:43:23] Average test accuracy: 65.8179%
If you want to run experiments on the Physionet dataset using the cross-subject strategy:
$ python .\main.py --dataset physionet --model EEGNet --strategy cross-subject
You can also modify the config YAML file to adjust parameters, or add your own models to run experiments.
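As a purely hypothetical illustration of what such a YAML config might contain (the key names below are placeholders, not the actual schema; see bci2a_EEGNet_default.yaml for the real keys), you could adjust training hyperparameters and pass the file with --config:

```yaml
# Hypothetical config sketch -- key names are illustrative only;
# check bci2a_EEGNet_default.yaml for the actual schema.
dataset: bci2a
model: EEGNet
strategy: within-subject
train:
  batch_size: 64
  lr: 0.001
  max_epochs: 500
```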