
spike sorter tridesclous error

Open Zoe0793 opened this issue 2 years ago • 3 comments

Hello, I am using SpikeInterface to run tridesclous and spyking circus on tetrode data. I have a .dat file in which the channels are stored in order of consecutive tetrodes [0 1 2 ... 127]. However, when loading it with BinaryRecordingExtractor the channels come out in the order [0 1 64 65 ... 62 63 126 127], i.e. after every two channels it jumps to the second half of the array and then loops back around.

I understood that I could simply use set_global_device_channel_indices to rewire my probe groups to account for this, but this has led to some confusion.
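
For reference, this is roughly the rewiring I have been trying. A minimal sketch only: the interleaved device indices below are my assumption based on the order I described above, not a verified map.

import numpy as np
from probeinterface import ProbeGroup, generate_tetrode

probegroup = ProbeGroup()
for k in range(32):
    tetrode = generate_tetrode()   # 4-contact probe
    tetrode.move([k * 50, 0])      # spread the tetrodes out along x
    probegroup.add_probe(tetrode)

# assumed wiring: tetrode k maps to device channels [2k, 2k+1, 2k+64, 2k+65]
device_indices = np.concatenate(
    [[2 * k, 2 * k + 1, 2 * k + 64, 2 * k + 65] for k in range(32)]
)
probegroup.set_global_device_channel_indices(device_indices)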

If I use a standard tetrode map, e.g. using generate_tetrode, and then rewire, channels 0 1 64 65 are successfully grouped into group 1, but their geometry is unchanged, so 0 and 1 are still far in space from 64 and 65. This is based on the probe view in phy; also, channel_map.npy and channel_map_si.npy in the phy output folder are still 0 1 2 ... 127 and channel_positions.npy does not change.
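
A quick way to sanity-check whether the geometry was actually applied after attaching the probegroup (a small sketch, assuming the recording and probegroup from my script):

rec = recording.set_probegroup(probegroup, group_mode='by_probe')
print(rec.get_channel_groups())      # group id per channel
print(rec.get_channel_locations())   # x/y contact position per channel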

I think this is affecting the sorting process, as I get errors when sorting by group on particular groups (e.g. group 6). The spikeinterface_log.json for such a group reports:

{ "sorter_name": "tridesclous", "sorter_version": "1.6.5", "datetime": "2022-08-25T13:13:19.320525", "runtime_trace": [], "error": true, "error_trace": "Traceback (most recent call last):\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\spikeinterface\sorters\basesorter.py", line 200, in run_from_folder\n SorterClass._run_from_folder(output_folder, sorter_params, verbose)\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\spikeinterface\sorters\tridesclous\tridesclous.py", line 149, in _run_from_folder\n tdc.apply_all_catalogue_steps(cc, catalogue_nested_params, verbose=verbose)\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\tridesclous\cataloguetools.py", line 125, in apply_all_catalogue_steps\n cc.cache_some_waveforms()\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\tridesclous\catalogueconstructor.py", line 979, in cache_some_waveforms\n selected_indexes = np.concatenate(selected_indexes)\n File "<array_function internals>", line 5, in concatenate\nValueError: need at least one array to concatenate\n", "run_time": null }

I can get around this either by reducing detect_threshold or by increasing the adjacency radius. However, if I do the latter I notice in phy that channels are associated with multiple group numbers, which should not be the case.

Can you help with this? Perhaps if I can read my original file in the appropriate channel order I can bypass this, but it would also be useful to know how to rewire successfully.
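
To illustrate what I mean by reading the file in the appropriate order, something like the sketch below is what I have in mind (the exact ordering list is just a placeholder reusing the interleaved pattern above, and I am assuming channel_slice preserves the order of the ids it is given):

# hypothetical reordering so that each tetrode's four wires end up contiguous
import numpy as np
new_order = np.concatenate(
    [[2 * k, 2 * k + 1, 2 * k + 64, 2 * k + 65] for k in range(32)]
)
reordered_recording = recording.channel_slice(channel_ids=new_order)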

Thank you for your time, Zoe

Zoe0793 avatar Aug 25 '22 18:08 Zoe0793

I have checked again and I am getting the error stated above regardless of which parameters I set. I therefore want to just ask specifically about this issue:

{ "sorter_name": "tridesclous", "sorter_version": "1.6.5", "datetime": "2022-08-25T13:13:19.320525", "runtime_trace": [], "error": true, "error_trace": "Traceback (most recent call last):\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\spikeinterface\sorters\basesorter.py", line 200, in run_from_folder\n SorterClass._run_from_folder(output_folder, sorter_params, verbose)\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\spikeinterface\sorters\tridesclous\tridesclous.py", line 149, in _run_from_folder\n tdc.apply_all_catalogue_steps(cc, catalogue_nested_params, verbose=verbose)\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\tridesclous\cataloguetools.py", line 125, in apply_all_catalogue_steps\n cc.cache_some_waveforms()\n File "C:\Users\Analysis\anaconda3\envs\si_env1\lib\site-packages\tridesclous\catalogueconstructor.py", line 979, in cache_some_waveforms\n selected_indexes = np.concatenate(selected_indexes)\n File "<array_function internals>", line 5, in concatenate\nValueError: need at least one array to concatenate\n", "run_time": null }

Zoe0793 avatar Aug 30 '22 21:08 Zoe0793

Hi Zoe, sorry for the delay.

My guess is that you need to properly sort out your channel mapping. set_global_device_channel_indices is an important step that you need to set correctly.

Could you send us your code for reading your dataset and setting the probe (aka the channel mapping), and also a Google sheet with the channel mapping? Then I could help you.
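
If it is easier, dumping the probegroup to a file would already be enough. Just a suggestion, assuming you build a ProbeGroup object:

df = probegroup.to_dataframe()
df.to_csv('channel_mapping.csv')   # attach this file or paste it as a table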

We are about to release, so we are not so responsive these days.

samuelgarcia avatar Aug 31 '22 06:08 samuelgarcia

Hi Samuel, no problem, I just wanted to give the most up-to-date info. Here is an example with data read via read_openephys, as this keeps the mapping I used during acquisition so no rewiring is needed, i.e. the channel map is [0...127]; however, tridesclous still fails on a particular group. The code is:

import spikeinterface as si  # import core only
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw

import spikeinterface.full as si

from probeinterface import Probe, ProbeGroup, generate_tetrode
from probeinterface.plotting import plot_probe_group, plot_probe

import numpy as np

from spikeinterface.exporters import export_to_phy

recording = se.read_openephys(r'E:\Rat117\03_15\2022-03-15_15-29-42', stream_id='2')
print(recording.get_channel_ids())                                                

probegroup = ProbeGroup()
for i in range(32):
    x = [i * 30, i * 30, i * 30, i * 30]
    y = [(i + 1) * 150, (i + 1) * 150 + 10, (i + 1) * 150 + 20, (i + 1) * 150 + 30]
    positions = np.zeros((4, 2))
    positions[:, 0] = x
    positions[:, 1] = y

    tetrode = Probe(ndim=2, si_units='um')
    tetrode.set_contacts(positions=positions, shapes='circle', shape_params={'radius': 12})
    probegroup.add_probe(tetrode)
    # plot_probe(tetrode)

probegroup.set_global_device_channel_indices(np.arange(128))
df = probegroup.to_dataframe()
print(df)

recording = recording.set_probegroup(probegroup, group_mode='by_probe')   
recording.annotate(is_filtered=True)
print(recording.get_channel_groups())

tdc_sorter_params = {"freq_min":300, 
                     "freq_max": 6000, 
                     "detect_threshold":5, 
                     "nested_params" : {'peak_detector': {'adjacency_radius_um': 50} } ,
                     "total_memory": '16G'}

folder= r'C:\Users\Analysis\Documents\Code\SpikeInterface\TestReadOEP\tdc_by_group2'

sorting_TDC = si.run_sorter_by_property("tridesclous", recording, grouping_property="group",
                                        working_folder=folder,  **tdc_sorter_params)

and the json log file states:

{
    "sorter_name": "tridesclous",
    "sorter_version": "1.6.5",
    "datetime": "2022-08-31T15:49:18.192959",
    "runtime_trace": [],
    "error": true,
    "error_trace": "Traceback (most recent call last):\n  File \"C:\\Users\\Analysis\\anaconda3\\envs\\si_env1\\lib\\site-packages\\spikeinterface\\sorters\\basesorter.py\", line 200, in run_from_folder\n    SorterClass._run_from_folder(output_folder, sorter_params, verbose)\n  File \"C:\\Users\\Analysis\\anaconda3\\envs\\si_env1\\lib\\site-packages\\spikeinterface\\sorters\\tridesclous\\tridesclous.py\", line 149, in _run_from_folder\n    tdc.apply_all_catalogue_steps(cc, catalogue_nested_params, verbose=verbose)\n  File \"C:\\Users\\Analysis\\anaconda3\\envs\\si_env1\\lib\\site-packages\\tridesclous\\cataloguetools.py\", line 146, in apply_all_catalogue_steps\n    cc.auto_split_cluster()\n  File \"C:\\Users\\Analysis\\anaconda3\\envs\\si_env1\\lib\\site-packages\\tridesclous\\catalogueconstructor.py\", line 1509, in auto_split_cluster\n    cleancluster.auto_split(self, n_spike_for_centroid=self.n_spike_for_centroid, n_jobs=self.n_jobs, **kargs)\n  File \"C:\\Users\\Analysis\\anaconda3\\envs\\si_env1\\lib\\site-packages\\tridesclous\\cleancluster.py\", line 181, in auto_split\n    inds, = np.nonzero(pvals<pval_thresh)\nTypeError: '<' not supported between instances of 'NoneType' and 'float'\n",
    "run_time": null
}

I have also split the recording to remove bad tetrodes (tetrodes with no spikes), but I still run into the same error. Thanks for your time, Zoe
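
PS: for completeness, the sketch below is roughly how I dropped the empty tetrodes (the group numbers here are placeholders, not my actual ones):

import numpy as np
bad_groups = [6]   # hypothetical: groups/tetrodes with no detected spikes
groups = recording.get_channel_groups()
channel_ids = np.asarray(recording.get_channel_ids())
bad_channel_ids = channel_ids[np.isin(groups, bad_groups)]
clean_recording = recording.remove_channels(bad_channel_ids)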

Zoe0793 avatar Aug 31 '22 22:08 Zoe0793

@Zoe0793 do you still have this issue?

alejoe91 avatar Jun 12 '23 14:06 alejoe91

No, I don't have this issue anymore.

Zoe0793 avatar Jun 12 '23 14:06 Zoe0793