
basic eyetracker functionality

dominikwelke opened this pull request 2 years ago • 26 comments

New Raw class for eyetracking data, with reader functions for various data formats. Alignment and merge functionality with other MNE objects is planned.

Development and testing use data recorded with an SR Research EyeLink 1000 Plus.

closes #10751

See also https://github.com/mne-tools/fiff-constants/pull/39

dominikwelke avatar Jun 28 '22 14:06 dominikwelke

Hi all (@larsoner, @drammock),

Here are some initial commits. I will contact you on Discord.

dominikwelke avatar Jun 28 '22 14:06 dominikwelke

@dominikwelke FYI I've just tested the current state of the PR with an environment that uses scipy==1.8.0 and it worked fine. So I think we can chalk up the previous failure to "something went wrong with your environment" and not worry about what exactly it was.

Also, here is what I see when plotting (screenshot): Screenshot_2022-06-30_09-26-16

I think we need to work on scalings next, so that the traces are a bit more intelligible. See here:

https://github.com/mne-tools/mne-python/blob/e49525c470b01b04c7270dc054bbfa7600021262/mne/defaults.py#L28-L42
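For reference, a hedged sketch of what such entries might look like there; the channel-type names 'eyegaze' and 'pupil' and the values are assumptions for illustration, not settled names in this PR:

    # hypothetical additions to the scalings dict in mne/defaults.py
    scalings = dict(
        # ... existing entries such as mag=1e15, grad=1e13, eeg=1e6 ...
        eyegaze=1.0,  # gaze position already in screen pixels, no rescaling
        pupil=1.0,    # pupil size in arbitrary units, no rescaling
    )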

drammock avatar Jun 30 '22 14:06 drammock

I think we need to work on scalings next, so that the traces are a bit more intelligible. See here:

Sounds good! Note that I still use the misc channel type for now, as the fiff-constants PR https://github.com/mne-tools/fiff-constants/pull/39 is not merged yet. Plus, I don't know what to change in the MNE codebase to add the new channel type. Any pointers @drammock @larsoner?

The x/y coordinates are in screen pixels (in my case), but they might also come as cm or degrees of visual angle with other systems. Pupil area is in arbitrary units. Not sure what other manufacturers use.

dominikwelke avatar Jun 30 '22 16:06 dominikwelke

I don't know what to change in the MNE codebase to add the new channel type.

Once https://github.com/mne-tools/fiff-constants/pull/39 is merged, then here:

https://github.com/mne-tools/mne-python/blob/386d4cf56551a248637c5291d5b1568dbcd26d1e/mne/io/constants.py#L182-L226
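That block defines the FIFFV_..._CH channel-kind constants, so the addition would be a single line along these lines; the numeric code below is a placeholder for the sketch, not a value from fiff-constants:

    # placeholder sketch; the real code must match fiff-constants PR #39
    FIFF.FIFFV_EYETRACK_CH = 902  # assumed numeric code for the new channel kind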

drammock avatar Jun 30 '22 16:06 drammock

Don't wait for fiff-constants. Change the lines at the top of test_constants.py to point to your fork + branch that is the same as the branch that you opened in the PR to fiff-constants
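A sketch of the kind of change meant here; the variable names are inferred from the download URL in the test output further down and may differ from what test_constants.py actually uses:

    # at the top of mne/io/tests/test_constants.py (names assumed)
    REPO = 'dominikwelke'  # your fork instead of 'mne-tools'
    COMMIT = '3da188c2e0d391bed1e4dd023eb07c909c273218'  # your fiff-constants branch tip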

larsoner avatar Jun 30 '22 17:06 larsoner

Don't wait for fiff-constants. Change the lines at the top of test_constants.py to point to your fork + branch that is the same as the branch that you opened in the PR to fiff-constants

Like this, @larsoner?

dominikwelke avatar Jun 30 '22 17:06 dominikwelke

Like this, @larsoner?

Pull the tiny commit I just made then locally do:

pytest mne/io/tests/test_constants.py

and see if it passes -- let pytest tell you if it's correct :)

larsoner avatar Jun 30 '22 18:06 larsoner

Nope, doesn't pass... something seems wrong with the coil location, but I don't know where it's set.

Here's the error log:

========================================================================================== FAILURES ===========================================================================================
_______________________________________________________________________________________ test_constants ________________________________________________________________________________________
mne/io/tests/test_constants.py:287: in test_constants
    assert val in fif[check], '%s: %s, %s' % (check, val, name)
E   AssertionError: coil: 402 (FIFFV_COIL_EYETRACK_PUPIL), FIFFV_COIL_EYETRACK_PUPIL
E   assert 402 (FIFFV_COIL_EYETRACK_PUPIL) in {0: ['none', 'The location info contains no data'], 1: ['eeg', 'EEG electrode position in r0'], 2: ['nm_122', 'Neuromag 122 coils'], 3: ['nm_24', 'Old 24 channel system in HUT'], ...}
------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------
Downloading data from 'https://codeload.github.com/dominikwelke/fiff-constants/zip/3da188c2e0d391bed1e4dd023eb07c909c273218' to file '/private/var/folders/fm/wcy0x7fs7f38_95zw5gj0r7073ffwq/T/pytest-of-dominik.welke/pytest-3/test_constants0/fiff.zip'.
SHA256 hash of downloaded file: 0c7f49a4d2194900390fbaa70ad80b0a3003fdc1b633db8593a673b8a9c44ffa
Use this value as the 'known_hash' argument of 'pooch.retrieve' to ensure that the file hasn't changed if it is downloaded again in the future.
------------------------------------------------ generated xml file: /Users/dominik.welke/Work/git_contributions/mne-python/junit-results.xml -------------------------------------------------
==================================================================================== slowest 20 durations =====================================================================================
1.07s setup    mne/io/tests/test_constants.py::test_constants
1.06s call     mne/io/tests/test_constants.py::test_constants

(18 durations < 0.005s hidden.  Use -vv to show these durations.)
=================================================================================== short test summary info ===================================================================================
FAILED mne/io/tests/test_constants.py::test_constants - AssertionError: coil: 402 (FIFFV_COIL_EYETRACK_PUPIL), FIFFV_COIL_EYETRACK_PUPIL
================================================================================= 1 failed, 6 passed in 2.58s =================================================================================

dominikwelke avatar Jun 30 '22 19:06 dominikwelke

In this PR you have:

FIFF.FIFFV_COIL_EYETRACK_POSX = 400      # Eye-tracking gaze X position
FIFF.FIFFV_COIL_EYETRACK_POSY = 401      # Eye-tracking gaze Y position
FIFF.FIFFV_COIL_EYETRACK_PUPIL = 402     # Eye-tracking pupil size

but you refer to commit 3da188c2e0d391bed1e4dd023eb07c909c273218:

https://github.com/dominikwelke/fiff-constants/commit/3da188c2e0d391bed1e4dd023eb07c909c273218

You need to keep these in sync, so changing to 0204967a529ae878ac730ead102a1967ef6c06f6 should fix it. But if you take my suggestion and just roll back to EYETRACK_POS rather than splitting x/y by coil type, then the test should pass as well.

larsoner avatar Jun 30 '22 20:06 larsoner

You need to keep these in sync

Too obvious :D The test passes now. I had to add the coil types to _missing_coil_def in test_constants.py; hope this was right?
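For reference, a sketch of that change, assuming _missing_coil_def is the collection of coil codes that intentionally have no entry in coil_def.dat:

    # in mne/io/tests/test_constants.py (sketch)
    _missing_coil_def = (
        # ... existing codes ...
        400,  # FIFFV_COIL_EYETRACK_POSX
        401,  # FIFFV_COIL_EYETRACK_POSY
        402,  # FIFFV_COIL_EYETRACK_PUPIL
    )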

dominikwelke avatar Jul 01 '22 07:07 dominikwelke

@larsoner - I could also add relevant measurement units to the fif constants (px and deg). Should I do that?
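Something along these lines, perhaps; the names and numeric codes below are placeholders, and the real values would have to be defined in fiff-constants first:

    # placeholder sketch; codes must come from fiff-constants
    FIFF.FIFF_UNIT_PX = 210   # screen pixels (placeholder code)
    FIFF.FIFF_UNIT_DEG = 211  # degrees of visual angle (placeholder code)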

dominikwelke avatar Jul 01 '22 07:07 dominikwelke

@drammock

I think we need to work on scalings next, so that the traces are a bit more intelligible.

See the last commit.

dominikwelke avatar Jul 01 '22 08:07 dominikwelke

Yes, adding units makes sense

larsoner avatar Jul 01 '22 11:07 larsoner

Hi all, hope you don't mind me popping in here.

@christian-oreilly and I are preparing for an EEG-Eyetracking study with an integrated SR Eyelink / EGI (EEG) system. It's really awesome (and timely) to see that support for Eyelink data has been started here.

We cloned this branch and have been testing some of our internal EyeLink piloting data with it. I would like to offer to help with the development and documentation for this PR, if that's welcome. We have hit a couple of snags so far when running our data through eyetrack.eyetrack.read_raw_eyelink, so we may have some suggestions.

scott-huberty avatar Jul 28 '22 20:07 scott-huberty

I would like to offer to help with the development and documentation for this PR, if that's welcome. We have hit a couple of snags so far when running our data through eyetrack.eyetrack.read_raw_eyelink, so we may have some suggestions.

@dominikwelke feel free to speak up if you're opposed, but to me this sounds like a great idea! Eventually if you both want to work simultaneously, you all could give each other write permissions to your forks, or just pull commits from each other's branches. @scott-huberty feel free to make a branch based on this branch and open a separate PR! (We can always close this one or the new one later once everything works in one branch or the other.)

larsoner avatar Jul 29 '22 22:07 larsoner

Hi @scott-huberty, great news, happy to hear you're interested in helping! I didn't find much time to work on this after the sprint, so any push is welcome.

The code so far is definitely a work in progress. Documentation is non-existent, as you realised :)

Did you run into any problems or errors so far?

dominikwelke avatar Aug 01 '22 09:08 dominikwelke

Hi @scott-huberty, great news, happy to hear you're interested in helping! I didn't find much time to work on this after the sprint, so any push is welcome.

The code so far is definitely a work in progress. Documentation is non-existent, as you realised :)

Did you run into any problems or errors so far?

No worries @dominikwelke, I knew this was a feature in development, so I didn't expect it to work for me "out of the box" :)

I hit a couple of small errors that were pretty easy to fix. It looks like the test data you used were recorded in binocular mode, while our test data were collected in monocular mode + remote mode. The ASC files look slightly different depending on which mode was used, so it will be good to test the code with these different types of data so we can catch the breakpoints and refactor the code where needed!

I'll create my own branch based on this branch and give you more details on what I've changed once it's a bit more organized!

scott-huberty avatar Aug 01 '22 15:08 scott-huberty

Hi, I'm also very interested in this development. I saw that you created a new type of object called RawEyelink. It may make more sense to just have your read_raw_eyelink() function and have a general class for eyetracking data (something like RawEyetracking), as there are many different eyetrackers out there and not all of them have a file format per se. For instance, I use a Tobii eyetracker, and that data can be saved from a Matlab or Python session to a .mat file, pickle file, HDF5 file (as in PsychoPy), or any number of other file formats. I can potentially offer help with this as well.

nmarkowitz avatar Aug 13 '22 02:08 nmarkowitz

and have a general class for eyetracking data (something like RawEyetracking), as there are many different eyetrackers out there and not all of them have a file format per se

Well, in theory (based on how BaseRaw works), RawEyelink should only override the __init__ and _read_segment_file methods. There shouldn't be anything else in the class, for example anything specific to eye tracking. That stuff should live in new helper functions, for example in mne/preprocessing/eyetracking/*.py. So to me, having RawEyelink for Eyelink data, then another class for some other manufacturer's data, etc., is a reasonable way to go.
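A minimal sketch of that structure, assuming the whole file is parsed up front and preloaded (in which case no custom _read_segment_file is needed at all); _parse_eyelink_asc is a hypothetical helper, not something that exists in this PR:

    import mne


    class RawEyelink(mne.io.BaseRaw):
        """Raw object for SR Research Eyelink data (sketch, preload-only)."""

        def __init__(self, fname):
            # hypothetical helper: returns a (n_channels, n_samples) array + sfreq
            data, sfreq = _parse_eyelink_asc(fname)
            info = mne.create_info(['xpos', 'ypos', 'pupil'], sfreq, 'misc')
            super().__init__(info, preload=data, filenames=[fname])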

larsoner avatar Aug 15 '22 13:08 larsoner

and have a general class for eyetracking data (something like RawEyetracking), as there are many different eyetrackers out there and not all of them have a file format per se

Well, in theory (based on how BaseRaw works), RawEyelink should only override the __init__ and _read_segment_file methods. There shouldn't be anything else in the class, for example anything specific to eye tracking. That stuff should live in new helper functions, for example in mne/preprocessing/eyetracking/*.py. So to me, having RawEyelink for Eyelink data, then another class for some other manufacturer's data, etc., is a reasonable way to go.

Agreed. We could also differentiate the reader functions by revising the mne/io/eyetrack/* directory to have a folder for each system (Eyelink, Tobii, etc.) where the reader functions for those systems live, just as MNE does for EEG/MEG data.

@sportnoah14, thanks for reaching out! Question: are you collecting Tobii eyetracking data simultaneously with EEG or MEG? IMO, part of the motivation for building this feature for Eyelink systems was to be able to co-register the eyetracking x/y gaze traces with EEG/MEG traces (SR Research Eyelink systems can be integrated with EEG or MEG systems, which some of us are using).

Anyway, I'm hoping we can have this branch more developed by the end of September. Maybe it makes more sense to start thinking about adding a reader for the Tobii system at that point, when we'll hopefully have a more solid template for the eyetracking class in MNE?

scott-huberty avatar Aug 15 '22 14:08 scott-huberty

So to me, having RawEyelink for Eyelink data, then another class for some other manufacturer's data, etc., is a reasonable way to go.

Yes @sportnoah14, I simply followed the MNE design for other data types and used manufacturer-specific Raw classes. As @larsoner already said, these classes aren't much more than differently labelled shells that package array data loaded with helper functions, in this case read_raw_eyelink.

...but actually, I shared your intuition and had even started the codebase using a generic class... you can still see it in the well-maintained docstring :D

    Returns
    -------
    raw : instance of RawEyetrack
        A Raw object containing eyetracker data.

We could also differentiate the reader functions by revising the mne/io/eyetrack/* directory to have a folder for each system (Eyelink, Tobii, etc.) where the reader functions for those systems live, just as MNE does for EEG/MEG data.

Yes, @scott-huberty - shifting code around might make sense as soon as things are a bit more settled and the functionality grows. For a start, I thought the amount of IO code would be small enough to stay in one file (io/eyetrack/eyetrack.py), even if we add additional manufacturer-specific functions and classes like read_raw_tobii / RawTobii.

For the API it doesn't matter, as users don't need to know where the code sits and already import the reader function as from mne.io import read_raw_eyelink. Eyetracking-specific preprocessing functions sit in mne.preprocessing.eyetracking; these are supposed to work with future additional eyetracker classes too.


So, tl;dr: happy to reorganize the code when it makes sense, but right now I think we can focus on the functionality :)

dominikwelke avatar Aug 16 '22 15:08 dominikwelke

Anyway, I'm hoping we can have this branch more developed by the end of September. Maybe it makes more sense to start thinking about adding a reader for the Tobii system at that point, when we'll hopefully have a more solid template for the eyetracking class in MNE?

@sportnoah14 - I think there's no need to wait; if you're motivated, you could already start writing a reader function for your files (read_raw_tobii).

The function would need to extract:

  • the sample data (x, y, pupil, ...) as a numpy.array
  • events/annotations (I guess there are some?) for synchronization etc.
  • relevant recording info from the header, such as the sampling frequency

These are the minimum we need to generate a Raw object from the data; see the sketch below :)
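A minimal sketch of those three pieces, using mne.io.RawArray for brevity (a real reader would subclass BaseRaw as discussed above). The reader name, channel names, and the plain-text parsing below are assumptions for illustration only:

    import numpy as np
    import mne


    def read_raw_tobii(fname):  # hypothetical reader, for illustration
        # 1) sample data as (n_channels, n_samples): x, y, pupil
        samples = np.loadtxt(fname).T  # stand-in: assumes a plain-text export
        sfreq = 600.0  # 3) would really be read from the file header
        info = mne.create_info(['xpos', 'ypos', 'pupil'], sfreq, 'misc')
        raw = mne.io.RawArray(samples, info)
        # 2) events/annotations for synchronization etc.
        raw.set_annotations(mne.Annotations(onset=[0.0], duration=[0.0],
                                            description=['recording_start']))
        return raw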

dominikwelke avatar Aug 16 '22 15:08 dominikwelke

@dominikwelke I could start writing a basic IO function, though I don't save Tobii data in its native format (if it has one) but rather save it through the Matlab or Python (PsychoPy) SDK. So I can start by creating a class to handle data from Tobii; many labs that run psychophysics experiments handle their Tobii data this way. Maybe one of the most useful things to do would be to create an IO function for eyetracking data saved via PsychoPy, as it saves the data in a standardized file format and handles many different types of eyetrackers.

I also think having a generic class would be great for eyetracking, and everything could be built from that, as there are many types of eyetrackers and manufacturers, and some record additional things besides x/y position on screen and pupil size.

@scott-huberty I'm recording eyetracking simultaneously with EEG data. However, it isn't recorded on the same acquisition system, so they have to be synced by TTL pulses or some other method first.

@larsoner, question for you. Eyetrackers sometimes encounter an error and may not record data for a few seconds, leading to unevenly sampled data. What's the best way to handle unevenly sampled data, i.e., data for which it's better to use timestamps rather than a start time and a sampling rate (which is assumed to be constant during the whole recording)?

I've been looking into eyetracking functions, and this package may be a good reference for the types of functions to incorporate and how to write them: https://github.com/titoghose/PyTrack

nmarkowitz avatar Aug 17 '22 18:08 nmarkowitz

Eyetrackers sometimes encounter an error and may not record data for a few seconds, leading to unevenly sampled data. What's the best way to handle unevenly sampled data, i.e., data for which it's better to use timestamps rather than a start time and a sampling rate (which is assumed to be constant during the whole recording)?

We have no capability to handle unevenly sampled data in our base classes, and adding it would probably be a pain. I would use some simple resampling (e.g., linear interpolation) to resample to an even sample rate.
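For instance, a sketch of that approach with plain numpy, assuming timestamps (in seconds) and samples of shape (n_channels, n_samples) come from the eyetracker file:

    import numpy as np


    def to_even_grid(timestamps, samples, sfreq):
        """Linearly interpolate unevenly sampled data onto a regular grid."""
        new_times = np.arange(timestamps[0], timestamps[-1], 1.0 / sfreq)
        resampled = np.vstack([np.interp(new_times, timestamps, ch)
                               for ch in samples])
        return resampled, new_times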

larsoner avatar Aug 17 '22 18:08 larsoner

Hi @scott-huberty et al.,

Sorry for being absent, I had conference travel and other commitments. I'll try to set aside some time to co-work on this feature in the future :)

dominikwelke avatar Sep 20 '22 10:09 dominikwelke

Hi @scott-huberty et al.,

Sorry for being absent, I had conference travel and other commitments. I'll try to set aside some time to co-work on this feature in the future :)

No worries, I hope you enjoyed your travels!

scott-huberty avatar Sep 20 '22 12:09 scott-huberty

Closing in favor of #11152

larsoner avatar Mar 16 '23 18:03 larsoner