spikeinterface
Allow `run_sorter` to accept dicts
Allows `run_sorter` to accept dictionaries of recordings and return a dict of sortings.
The longer-term aim is to make this workflow just work:
```python
pp_recs = si.common_reference(rec.split_by('group'))
sortings = si.run_sorter("kilosort4", pp_recs)
analyzer = si.create_sorting_analyzer(sortings, pp_recs)
```
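For context, `split_by` already returns a plain dict of sub-recordings keyed by the property value, which is what makes a dict-aware `run_sorter` fit naturally into this chain. A quick illustration with a toy recording (the generator call and property values are just for demonstration):

```python
import spikeinterface.full as si

# toy recording with a channel "group" property
rec, _ = si.generate_ground_truth_recording(num_channels=4, durations=[10.0])
rec.set_property("group", [0, 0, 1, 1])

grouped = rec.split_by("group")
print(list(grouped.keys()))  # [0, 1] -> one sub-recording per group value
```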
I think this PR is minimally invasive. The old `run_sorter` function can now accept a dict. If it gets a dict, it calls `run_sorter` for each recording and saves the outputs in a folder:
```
{folder}/
    spikeinterface_info.json
    splitting_key_0/
        {whatever run_sorter spits out}
    splitting_key_1/
        {whatever run_sorter spits out}
    …
```
And spikeinterface_info.json looks something like this:
```json
{
    "version": "0.102.4",
    "dev_mode": true,
    "object": "Group[SorterOutput]",
    "dict_keys": [
        "a",
        5
    ]
}
```
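A rough sketch of the dispatch logic described above, to make the idea concrete (the function name, the exact arguments, and the info-file writing are my paraphrase of the description, not the PR's actual code):

```python
import json
from pathlib import Path

from spikeinterface.sorters import run_sorter

def run_sorter_by_dict(sorter_name, recordings, folder, **sorter_params):
    """Sketch: call run_sorter once per dict entry and record the keys in an info file."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)

    info = {
        "version": "0.102.4",                  # spikeinterface version, for example
        "object": "Group[SorterOutput]",
        "dict_keys": list(recordings.keys()),  # keys can be str or int
    }
    (folder / "spikeinterface_info.json").write_text(json.dumps(info, indent=4))

    sortings = {}
    for key, recording in recordings.items():
        # each dict key becomes a sub-folder; keys are stringified for the path
        sortings[key] = run_sorter(
            sorter_name, recording, folder=folder / str(key), **sorter_params
        )
    return sortings
```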
I tried a bigger refactor here, https://github.com/chrishalcrow/spikeinterface/tree/sort-by-group, which avoids the recursive logic used here, but it was a mess. This approach seems a lot nicer.
@chrishalcrow maybe worth saving the `dict_keys` dtype in the spikeinterface_info, since this gets lost when dumping to JSON.
Thank you Chris, this is cool.
We also need to implement `spikeinterface.load` handling so it can open this new folder type.
We should also discuss the `"object": "dict of BaseSorting"` a bit more; I had in mind something like `"Group[Sorting]"`.
Also, I'm not sure I understand why we need this "split_by_property" in the dict, since that information is already dumped into every sub-folder's recording.json, no?
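As a minimal sketch of what such a loader could look like, assuming the folder layout and info file shown above (the helper name is hypothetical, and using `read_sorter_folder` for the per-key sub-folders is an assumption about what the loader would dispatch to):

```python
import json
from pathlib import Path

from spikeinterface.sorters import read_sorter_folder

def load_sorter_group_folder(folder):
    """Sketch: rebuild the dict of sortings from the group folder layout above."""
    folder = Path(folder)
    info = json.loads((folder / "spikeinterface_info.json").read_text())

    sortings = {}
    for key in info["dict_keys"]:
        # sub-folders are named after the (stringified) dict keys
        sortings[key] = read_sorter_folder(folder / str(key))
    return sortings
```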
> @chrishalcrow maybe worth saving the `dict_keys` dtype in the spikeinterface_info since this gets lost when dumping to json
I'm not sure I like this ugly hack in the spikeinterface_info.json format. The list of keys can be int or str, no? JSON handles both. The list is in the dict so we can open the sub-folders back in the right order, and the keys get converted to str for the folder names anyway. So I do not see why we need to save the dtype.
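For what it's worth, JSON does preserve int vs str for list elements (it is only dict keys that get coerced to strings), which seems to be the point being made here:

```python
import json

info = {"dict_keys": ["a", 5]}
roundtrip = json.loads(json.dumps(info))
print(roundtrip["dict_keys"])                     # ['a', 5]
print([type(k) for k in roundtrip["dict_keys"]])  # [<class 'str'>, <class 'int'>]

# by contrast, dict *keys* are always coerced to strings
print(json.loads(json.dumps({5: "x"})))           # {'5': 'x'}
```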
I would be happy to have a call.
Well played!