
Investigate transfer learning

Open ErikBjare opened this issue 3 years ago • 9 comments

I'm working on my MSc thesis (https://github.com/ErikBjare/thesis) and investigating the possibility of training models on public datasets, then using the learned model to train for a different task on my smaller dataset collected with different equipment (I have a Muse S and an OpenBCI Cyton).

I was curious if anyone has investigated this type of thing before (I searched the issues for 'transfer learning', but got no results). Someone (maybe @sylvchev?) mentioned being interested in it at a NeuroTechX Paris hacknight, and I'm creating this issue to follow up on that interest.

For reference, here's a recent review on transfer learning in EEG: http://www.sciencedirect.com/science/article/pii/S0925231220314223

ErikBjare avatar Nov 30 '20 18:11 ErikBjare

Thank you! Yes, we are currently working on an extension of MOABB to handle transfer learning, and we could use this issue to sort things out and exchange ideas. There are many works on transfer learning for EEG at the subject level within a given dataset. I started to work on the following idea.

  1. Separate one subject (target) from the other subjects (source) in a dataset (leave-one-subject-out).
  2. Separate the samples from the target into n_target_train and n_target_test.
  3. Train a model on all the data available from the source subjects + the n_target_train samples from the target subject.
  4. Test the model on the n_target_test samples from the target subject.

If n_target_train = 0, this is like CrossSubjectEvaluation. If n_target_train = n_sample (all available data for a given subject), it is almost like WithinSessionEvaluation (but with a model trained on all subjects).

The n_target_train samples could be picked to ensure balanced classes. A "gold standard" could be given by the results of a WithinSessionEvaluation for the target subject. For one value of n_target_train, the analysis should be repeated for all possible subsets of samples (in cross-validation fashion). Indeed, various n_target_train values should be tested, and this should be done for all subjects; a rough sketch of the scheme follows.
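A minimal sketch of steps 1-4 (not the eventual MOABB API), assuming the data are already epoched into NumPy arrays `X`, `y` plus a per-trial `subjects` vector; the balanced per-class draw and the scikit-learn-style classifier are illustrative assumptions:

```python
import numpy as np

def transfer_split_score(X, y, subjects, target, n_target_train, clf, seed=0):
    """Train on all source subjects + n_target_train samples from the
    target subject, then test on the remaining target samples."""
    rng = np.random.default_rng(seed)
    target_idx = np.flatnonzero(subjects == target)  # step 1
    source_idx = np.flatnonzero(subjects != target)

    # Step 2: draw n_target_train target samples, balanced across classes.
    classes = np.unique(y[target_idx])
    picked = []
    for label in classes:
        cls_idx = target_idx[y[target_idx] == label]
        picked.extend(rng.choice(cls_idx, n_target_train // len(classes),
                                 replace=False))
    train_t = np.asarray(picked, dtype=int)
    test_t = np.setdiff1d(target_idx, train_t)

    # Step 3: fit on source + target-train; step 4: score on target-test.
    train_idx = np.concatenate([source_idx, train_t])
    clf.fit(X[train_idx], y[train_idx])
    return clf.score(X[test_t], y[test_t])
```

Repeated over several random seeds, over a grid of n_target_train values (0 reduces to the cross-subject case) and over every subject as the target, this gives the full analysis described above.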

sylvchev avatar Dec 03 '20 17:12 sylvchev

The cross-subject stuff is interesting, but my primary interest is still cross-device/setup (which seems like a much harder/less understood problem, and this is admittedly way over my head).

I heard @hubertjb mention his work on self-supervised learning (https://iopscience.iop.org/article/10.1088/1741-2552/abca18/meta), and I've found other interesting avenues like embeddings (through variational autoencoders and the like) and meta-learning.

A meta-learning approach was mentioned by someone else at the Paris Hacknight meetup (I don't remember his name or GitHub handle). Searching for "EEG meta learning" turned up https://arxiv.org/abs/2003.06113, which seems interesting, but I'm not sure how well such an approach would work for learning across different equipment, channel counts, electrode placements, etc.

What I want to do could probably be described as two steps:

  1. Train a model on multiple public datasets with different tasks, equipment/channel count/electrode placements, etc.
  2. Retrain the model with my own limited sample data on my own task.

Step 2 is really a cross-subject and cross-task transfer, which the literature suggests can be done, but to get there I'd need to solve step 1 first, which seems like a tougher nut to crack.
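As a rough illustration of step 2 only (a hypothetical sketch: the backbone, the input shape of 8 channels x 256 samples, and the two-class head all stand in for whatever step 1 would produce), fine-tuning in PyTorch could look like:

```python
import torch
import torch.nn as nn

# Stand-in backbone for a model pretrained on public datasets (step 1).
backbone = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=7, padding=3),  # 8 EEG channels in
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)

# Freeze the pretrained weights; only the new head gets trained.
for p in backbone.parameters():
    p.requires_grad = False

# Step 2: attach a fresh head for my own (here: two-class) task.
model = nn.Sequential(backbone, nn.Linear(16, 2))
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One toy update on fake target data: 4 trials x 8 channels x 256 samples.
X = torch.randn(4, 8, 256)
y = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(X), y)
loss.backward()
optimizer.step()
```

The hard part is still step 1: obtaining a backbone whose features stay meaningful across datasets with different channel counts and electrode placements.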

However, if the problem were solved (assuming it can be to a reasonable degree), I think it could be a major improvement in what we can learn by combining all the public data out there (but again, I'm in way over my head).


Sidetrack: An approach I've considered, which seems common in some other ML domains, is to have a different model feed its classification to the model being trained. Say, let a focus/emotion/arousal classifier pass its results to my device-activity classifier. This is somewhat different from what I had in mind, but it seems a lot easier for me to attempt; see the sketch below.
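A minimal sketch of that sidetrack, assuming a scikit-learn-style upstream classifier; `arousal_clf` and the feature shapes are hypothetical names for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_features(X_feat, arousal_clf):
    """Append the upstream model's class probabilities as extra features."""
    extra = arousal_clf.predict_proba(X_feat)  # (n_trials, n_classes)
    return np.hstack([X_feat, extra])

# Downstream device-activity classifier trained on augmented features, e.g.:
# activity_clf = LogisticRegression().fit(stack_features(X_tr, arousal_clf), y_tr)
```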

ErikBjare avatar Dec 03 '20 22:12 ErikBjare

I have a team working on the transfer objectives you mention. We're working with the physionet motor imagery dataset and some SSVEP datasets. Depending on your specific goals and skills, it might make sense for us to discuss further offline.

MHRosenberg avatar Dec 04 '20 07:12 MHRosenberg

@ErikBjare I've seen a paper that does what you propose there. One difficult part is that most of the canonical EEG tasks use very different feature sets. The paper I saw just concatenated all the features that the different tasks tend to use, but it would be interesting to see whether one could find a generic feature set that's useful across ERP, MI, and SSVEP tasks. A sketch of the concatenation baseline is below.
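For concreteness, the "concatenate everything" baseline could be sketched with pyriemann blocks of the kind MOABB pipelines already use (the specific MI-style and ERP-style feature choices here are assumptions, not the paper's recipe):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, make_pipeline
from pyriemann.estimation import Covariances, XdawnCovariances
from pyriemann.tangentspace import TangentSpace

# Concatenate paradigm-typical features into a single vector per trial.
features = FeatureUnion([
    # MI-style: plain covariance matrices mapped to the tangent space.
    ("mi", make_pipeline(Covariances(estimator="oas"), TangentSpace())),
    # ERP-style: Xdawn-enhanced covariances mapped to the tangent space.
    ("erp", make_pipeline(XdawnCovariances(nfilter=4), TangentSpace())),
])

# clf.fit(X, y) with X of shape (n_trials, n_channels, n_times).
clf = make_pipeline(features, LogisticRegression())
```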

vinay-jayaram avatar Dec 04 '20 15:12 vinay-jayaram

I agree with @ErikBjare, cross-dataset or cross-paradigm transfer is a major game changer for BCI. There are already working approaches on the subject: there is Pedro Rodrigues & Marco Congedo's work [1], and we have developed an approach for handling different datasets [2]. As @vinay-jayaram said, building features for cross-paradigm tasks is very challenging!

For MOABB, I think it makes sense to add distinct evaluation functions for cross-dataset evaluation and for cross-paradigm evaluation. @ErikBjare, do you want to open a PR on the subject?

[1] Rodrigues, P., Congedo, M., & Jutten, C. (2020). Dimensionality transcending: a method for merging BCI datasets with different dimensionalities. IEEE Transactions on Biomedical Engineering.
[2] Yger, F., Chevallier, S., Barthélemy, Q., & Sra, S. (2020, September). Geodesically-convex optimization for averaging partially observed covariance matrices. In Asian Conference on Machine Learning (pp. 417-432). PMLR.
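As a much simpler baseline than [1] or [2], and assuming epoched NumPy arrays with known channel labels (the montages below are made up for illustration), datasets can be merged on their shared channels:

```python
import numpy as np

def select_channels(X, ch_names, common):
    """Select and reorder channels so all datasets share one montage.
    X: (n_trials, n_channels, n_times); ch_names: labels for X's channels."""
    idx = [ch_names.index(ch) for ch in common]
    return X[:, idx, :]

# Hypothetical montages of two datasets; keep their intersection in a
# fixed order, then stack the trials for joint training.
chs_a = ["C3", "Cz", "C4", "Pz"]
chs_b = ["C3", "C4", "F3", "F4"]
common = sorted(set(chs_a) & set(chs_b))  # ["C3", "C4"]
# X_merged = np.concatenate([select_channels(X_a, chs_a, common),
#                            select_channels(X_b, chs_b, common)])
```

Dimensionality transcending [1] goes further by matching the non-shared dimensions instead of simply discarding them.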

sylvchev avatar Dec 05 '20 15:12 sylvchev

I just found your somewhat recent paper on Riemannian Transfer Learning @sylvchev, very nice! Will come in handy for my MSc thesis writing :)

ErikBjare avatar Oct 07 '21 17:10 ErikBjare

The concepts explained in this video and mentioned by @Div12345 are very relevant:

  • Task-incremental learning: an algorithm must incrementally learn a set of clearly distinct tasks (the tasks are clearly distinct because the algorithm is always told which task it must perform).
  • Domain-incremental learning: an algorithm must learn the same kind of task but in different contexts or domains.
  • Class-incremental learning: an algorithm must incrementally learn to distinguish between an increasing number of classes.

sylvchev avatar Jan 21 '22 09:01 sylvchev

[image: diagram of the three incremental learning scenarios]

I think evaluation schemes that allow doing these, as explained in the picture above, would be a good addition to MOABB.

Div12345 avatar Jan 21 '22 09:01 Div12345

Hi @ErikBjare and @sylvchev,

Checking the status of old issues! If I understood correctly, the idea of this issue is to implement a cross-dataset evaluation?

bruAristimunha avatar Apr 18 '23 22:04 bruAristimunha