
plots for brain

Open lucywowen opened this issue 7 years ago • 7 comments

It'd also be nice to have some way of knowing, for reconstructed brain objects (with nearest neighbor matching), which electrodes were matched with which locations in the model, and how many were used.

E.g., you could attach two additional locations matrices to the Brain object returned by predict (one for the original brain locations and one for the model locations) that get used by plot_locs().

Then plot_data could plot (see the sketch after this list):

  • black electrodes: locations in original dataset
  • red electrodes: nearest neighbor matches to the original locations (connect with lines?)
  • blue electrodes: locations in model
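
A rough sketch of what that could look like, assuming hypothetical `original_locs` and `model_locs` attributes get attached by predict (these names and the plotting helper are illustrative, not the current supereeg API):

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection on older matplotlib)

def plot_matched_locs(bo):
    # Hypothetical attributes: bo.original_locs and bo.model_locs are assumed to be
    # (n x 3) arrays of MNI coordinates attached by predict(); bo.get_locs() returns
    # the locations actually used for the reconstruction.
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')

    orig = np.asarray(bo.original_locs)   # black: locations in the original dataset
    nn = np.asarray(bo.get_locs())        # red: nearest neighbor matches
    model = np.asarray(bo.model_locs)     # blue: locations in the model

    ax.scatter(model[:, 0], model[:, 1], model[:, 2], c='b', alpha=0.2, label='model')
    ax.scatter(orig[:, 0], orig[:, 1], orig[:, 2], c='k', label='original')
    ax.scatter(nn[:, 0], nn[:, 1], nn[:, 2], c='r', label='nearest neighbor')

    # connect each original location to its matched model location
    for o, m in zip(orig, nn):
        ax.plot([o[0], m[0]], [o[1], m[1]], [o[2], m[2]], c='gray', linewidth=0.5)

    ax.legend()
    plt.show()
```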

lucywowen avatar Feb 09 '18 20:02 lucywowen

We could use an index array that records which model location each observed electrode/location was matched to.
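
A minimal illustration of the index-array idea (hypothetical names and values; not supereeg code):

```python
import numpy as np

# stand-in for the model's (n_locs x 3) coordinate matrix
model_locs = np.random.randn(100, 3)

# one entry per observed electrode: the row of model_locs it was matched to,
# with -1 flagging electrodes that have no match
matched_idx = np.array([12, 47, 3, -1, 98])

has_match = matched_idx >= 0
matched_locs = model_locs[matched_idx[has_match]]  # matched model location per electrode
```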

jeremymanning avatar Mar 21 '18 20:03 jeremymanning

Check out the mind tutorial.

andrewheusser avatar Mar 21 '18 20:03 andrewheusser

Currently, the observed locations are colored in black and the model locations in red. I haven't delineated between the nearest neighbor and original locations. I think that would require a second locations matrix initialized during the predict function. Should that second locations matrix be a parameter of the brain object?

lucywowen avatar Mar 26 '18 16:03 lucywowen

If you know the observed locations and the model locations, can't you determine the nearest neighbor locations? You'll need to store the threshold somewhere in the model object as well.
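
A hedged sketch of doing that calculation on the fly (the function name and `threshold` argument are assumptions, not the actual supereeg API):

```python
import numpy as np
from scipy.spatial.distance import cdist

def nearest_neighbor_locs(obs_locs, model_locs, threshold=None):
    """Map each observed location to its nearest model location.

    Rows whose nearest neighbor is farther than `threshold` are returned
    as NaNs, mirroring the thresholding behavior discussed above.
    """
    dists = cdist(obs_locs, model_locs)        # (n_obs, n_model) pairwise distances
    nn_idx = dists.argmin(axis=1)              # nearest model location per electrode
    mapped = model_locs[nn_idx].astype(float)
    if threshold is not None:
        too_far = dists[np.arange(len(obs_locs)), nn_idx] > threshold
        mapped[too_far] = np.nan               # drop matches beyond the threshold
    return mapped
```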

jeremymanning avatar Mar 26 '18 17:03 jeremymanning

Currently, the nearest neighbor flag finds the nearest neighbor and uses those coordinates as 'observed' locations. So the resulting brain object reflects the locations used for the prediction. That seemed like the cleanest way to do it, but it'll need to be refactored if you want the original brain object locations stored as well.

lucywowen avatar Mar 26 '18 17:03 lucywowen

How about we store the original locations + data in the brain object, along with enough info to do the nearest neighbor calculations on the fly? For example, if we add a mapped_locations field to the brain object (set to None by default and not accessible to the user), it could work as follows (sketched in code after this list):

  • In all of the toolbox functions, make sure we always use bo.get_locs() to retrieve the locations, rather than calling bo.locs directly. (To be safe, make bo.locs inaccessible to the user.)
  • Similarly, make sure we always use bo.get_data() to retrieve the data, rather than calling bo.data directly. (To be safe, make bo.data inaccessible to the user.)
  • If mapped_locations is None, then bo.get_locs() should return bo.locs and bo.get_data() should return bo.data, excluding any locations that didn't pass the kurtosis filtering.
  • When the nearest neighbor option(s) are used, that should set mapped_locations to a new matrix of size number-of-electrodes by 3. Each row should either be a location in the model object (i.e. the nearest neighbor of the original location) or, if thresholding is used, be set to NaNs.
  • When mapped_locations is not None, this has the following impacts on bo.get_locs() and bo.get_data():
    • bo.get_locs() should return the non-NaN rows of mapped_locations
    • bo.get_data() should return the columns of bo.data that correspond to non-NaN rows of bo.mapped_locations. In other words, the rows that are NaNs in bo.mapped_locations should be masked out (along with any locations that didn't pass the kurtosis filtering).
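
A minimal sketch of that flow (assumed names, simplified kurtosis handling; not the current supereeg implementation):

```python
import numpy as np

class BrainSketch:
    """Toy stand-in for a brain object illustrating the mapped_locations idea."""

    def __init__(self, data, locs, kurtosis_mask=None):
        self._data = np.asarray(data)                   # (n_samples, n_electrodes)
        self._locs = np.asarray(locs)                   # (n_electrodes, 3)
        self._kurtosis_mask = (np.ones(len(self._locs), dtype=bool)
                               if kurtosis_mask is None else np.asarray(kurtosis_mask))
        self._mapped_locations = None                   # set by nearest-neighbor predict

    def _keep(self):
        # electrodes that pass kurtosis filtering and, if a mapping exists,
        # have a non-NaN mapped location
        keep = self._kurtosis_mask.copy()
        if self._mapped_locations is not None:
            keep &= ~np.isnan(self._mapped_locations).any(axis=1)
        return keep

    def get_locs(self):
        locs = self._locs if self._mapped_locations is None else self._mapped_locations
        return locs[self._keep()]

    def get_data(self):
        return self._data[:, self._keep()]
```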

jeremymanning avatar Mar 27 '18 02:03 jeremymanning

Let's punt on the precise implementation of this for now; we have a hacked version in place that:

  • plots observed (or rounded to observed) locations in black
  • plots reconstructed locations in red

jeremymanning avatar Mar 28 '18 19:03 jeremymanning