"Index exceeds matrix dimension" kilosort2 failure + General question on directory/subdirectory setup
I'm trying to run Kilosort2 from a Jupyter notebook with SpikeInterface 0.99.1, but Kilosort fails with an "index exceeds matrix dimension" error.
Running the same dataset (Open Ephys .dat) through the Kilosort GUI directly, nothing fails.
This is the code that runs Kilosort:
import spikeinterface.full as si
import spikeinterface.extractors as se
import spikeinterface.sorters as ss

# preprocessing (note: both steps start from the raw recording, and neither output is actually passed to the sorter below)
recording_f = si.bandpass_filter(fullRecording_WT_LFP, freq_min=300, freq_max=1249)  # freqs for LFP
recording_cmr = si.common_reference(fullRecording_WT_LFP, reference='global', operator='median')

fs = fullRecording_WT_LFP.get_sampling_frequency()
num_frames = fullRecording_WT_LFP.get_num_frames()
channels = fullRecording_WT_LFP.get_num_channels()
print(channels)
print(num_frames)
print(fs)

# take the first 100 s of the raw (unprocessed) recording
# recordingSeconds = recording_cmr.frame_slice(start_frame=0*fs, end_frame=100*fs)
recordingSeconds = fullRecording_WT_LFP.frame_slice(start_frame=0*fs, end_frame=100*fs)
# recordingSeconds = fullRecording_WT_LFP

ss.Kilosort2Sorter.set_kilosort2_path(r"H:\transfer_raster\Kilosort2-master")
secondsSorted = si.run_sorter('kilosort2', recordingSeconds, remove_existing_folder=True,
                              output_folder=r'H:\results', verbose=True)
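For completeness: recording_f and recording_cmr above are never actually handed to run_sorter. The chained version I eventually intend to use would look roughly like this (a sketch of what I'm aiming for, not what currently runs; the output folder name is a placeholder):

# sketch: chain the preprocessing so the common reference is computed on the filtered data
recording_f = si.bandpass_filter(fullRecording_WT_LFP, freq_min=300, freq_max=1249)
recording_cmr = si.common_reference(recording_f, reference='global', operator='median')
recordingSeconds = recording_cmr.frame_slice(start_frame=0, end_frame=int(100 * fs))
secondsSorted = si.run_sorter('kilosort2', recordingSeconds,
                              remove_existing_folder=True,
                              output_folder=r'H:\results_preprocessed',  # placeholder folder
                              verbose=True)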
Looking online, I see this error can happen when Kilosort does not pick up the channels correctly, but printing get_num_channels() from SpikeInterface shows the expected channel count, so SpikeInterface does seem to be handing Kilosort the right dataset.
However, it could also be caused by the way I am extracting my Open Ephys data from the folder. Does anyone have a method to simply pick an individual continuous.dat file without going through all of this? (Roughly what I have in mind is sketched after the directory tree below.)
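If it helps narrow things down, this is the kind of sanity check I can run on the recording that gets passed to the sorter, since Kilosort2 builds its channel map from the channel locations (a sketch of the check, not output from the failing run):

# sanity check: does the recording handed to Kilosort2 carry a probe / channel locations?
print(recordingSeconds.get_num_channels())        # channel count handed to Kilosort2
print(recordingSeconds.get_channel_locations())   # Kilosort2 derives its channel map from these
probe = recordingSeconds.get_probe()              # errors if no probe is attached
print(probe)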
My standard directory tree looks like this: 2023 --> Record Node 101 --> (experiment 1 | experiment 2) --> (recording1 | recording2 | recording3) --> (continuous | events) --> (Neuropix-PXI-100.ProbeA-AP | Neuropix-PXI-100.ProbeA-LFP) --> continuous.dat
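What I had in mind is something like pointing SpikeInterface straight at one continuous.dat with the generic binary reader; the path, sampling rate, and channel count below are just placeholders for my setup, and I'm not sure it's a good idea since the probe information would be lost:

# sketch: load a single continuous.dat directly as a binary recording (parameters are placeholders)
dat_path = r"H:\2023\Record Node 101\experiment1\recording1\continuous\Neuropix-PXI-100.ProbeA-LFP\continuous.dat"
rec = si.read_binary(
    dat_path,
    sampling_frequency=2500.0,  # placeholder: LFP stream rate
    dtype="int16",              # Open Ephys binary continuous.dat is int16
    num_channels=384,           # placeholder: channels in this stream
)
# note: the raw binary has no probe attached, so channel locations and gains
# would still have to be set (e.g. via probeinterface) before sorting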
Here is how I am extracting the recordings with SpikeInterface:
# extract the LFP streams we need
recordingExtractsWT_LFP = se.read_openephys(recording_file, block_index=0, stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-LFP")
recordingExtractsHOM_LFP = se.read_openephys(recording_file, block_index=1, stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-LFP")
# action potentials (AP streams)
recordingExtractsWT_AP = se.read_openephys(recording_file, block_index=0, stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-AP")
recordingExtractsHOM_AP = se.read_openephys(recording_file, block_index=1, stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-AP")

# WT: pull out the individual recording segments
wtBaseline_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=0)
wtPTZ_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=1)
wtDeadPTZ_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=2)
wtBaseline_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=0)
wtPTZ_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=1)
wtDeadPTZ_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=2)

fullRecording_WT_LFP = si.concatenate_recordings([wtBaseline_LFP, wtPTZ_LFP])
fullRecording_WT_LFP = wtBaseline_LFP  # note: this overwrites the concatenation, so only the WT baseline segment is used
fullRecording_WT_AP = si.concatenate_recordings([wtBaseline_AP, wtPTZ_AP])

# HOM: pull out the individual recording segments
HOMBaseline_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=0)
HOMPTZ_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=1)
HOMPTZ2_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=2)
HOMBaseline_AP = si.select_segment_recording(recordingExtractsHOM_AP, segment_indices=0)
HOMPTZ_AP = si.select_segment_recording(recordingExtractsHOM_AP, segment_indices=1)
# HOMPTZ2_AP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=2)

fullRecording_HOM_LFP = si.concatenate_recordings([wtBaseline_LFP, wtPTZ_LFP, wtDeadPTZ_LFP, HOMBaseline_LFP, HOMPTZ_LFP])
fullRecording_HOM_AP = si.concatenate_recordings([wtBaseline_AP, wtPTZ_AP, wtDeadPTZ_AP, HOMBaseline_AP, HOMPTZ_AP])
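In case the problem is in the extraction itself, this is the kind of check whose output I can post if useful. It assumes the stream-listing helper in spikeinterface.extractors works like this in 0.99.1:

# sketch: confirm which streams and segments SpikeInterface sees in the Open Ephys folder
stream_names, stream_ids = se.get_neo_streams("openephys", recording_file)
print(stream_names)  # should include "Record Node 101#Neuropix-PXI-100.ProbeA-AP" and "...ProbeA-LFP"
print(recordingExtractsWT_LFP.get_num_segments())   # expect 3 segments (recording1-3) in block 0
print(recordingExtractsHOM_LFP.get_num_segments())  # expect 3 segments (recording1-3) in block 1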
Any thoughts would be great. Thank you!