
spykingcircus2 Insufficient system resources exist to complete the requested service

Open chiyu1203 opened this issue 7 months ago • 6 comments

Dear SpikeInterface community, I am using spykingcircus2 for spike sorting. After applying a bandpass filter, motion correction, and whitening to the raw data in SpikeInterface, I checked the result in phy and felt that the default settings of spykingcircus2 worked well with a single-shank probe (64 channels). However, when I switched to a multi-shank (96 channels) recording, I ran into this error:

OSError: [WinError 1450] Insufficient system resources exist to complete the requested service

It looks like the issue is with the computer. However, I can run kilosort4 on this dataset in SpikeInterface without any problem. Does anyone know what the issue with spykingcircus2 might be? Below are my code and the log.json.

    rec_of_interest = get_preprocessed_recording(oe_folder, analysis_methods)
    # note: the original snippet used `recording_saved` before assigning it;
    # it presumably should be cast from `rec_of_interest`
    recording_saved = spre.astype(rec_of_interest, np.float32)
    recording_corrected_dict = motion_correction_shankbyshank(recording_saved, oe_folder, analysis_methods)

    if len(recording_corrected_dict) > 1:
        recording_corrected = recording_corrected_dict
    else:
        recording_corrected = recording_corrected_dict[0]
    rec_for_sorting = spre.whiten(
        recording=recording_corrected,
        mode="local",
        radius_um=150,
        # int_scale=200,  # this can be added to replicate kilosort behaviour
    )
    print(f"run spike sorting with {this_sorter}")
    sorter_params = ss.get_default_sorter_params(this_sorter)

    sorter_params.update({"apply_motion_correction": False, "apply_preprocessing": False})
    sorter_params["general"].update({"radius_um": 150})
    if len(recording_corrected_dict) > 1:
        rec_for_sorting = si.aggregate_channels(rec_for_sorting)
        sorting_spikes = ss.run_sorter_by_property(
            sorter_name=this_sorter,
            recording=rec_for_sorting,
            remove_existing_folder=True,
            grouping_property="group",
            folder=oe_folder / result_folder_name,
            verbose=True,
            **sorter_params,
        )

spikeinterface_log.json

chiyu1203 avatar May 06 '25 11:05 chiyu1203

Can you launch the sorter with the option verbose=True, to get a better sense of where it is crashing? This is strange, because SC2 can now work with 4096 channels, so crashing at 96 is suspicious. Are you using the version from main (the latest git version), or a PyPI release? I would encourage you to use the latest git version. Also (though it cannot be the reason for the crash), I would not necessarily recommend changing radius_um to 150; I would leave it at its default of 100. If you could provide me with your probe layout, I could try some tests.

yger avatar May 06 '25 11:05 yger

Hi @yger, thanks for your quick response. I have turned on verbose=True and attached files from the crash. Could it be that there were no spikes in this multi-shank recording session, and that somehow caused the issue? In the previous single-shank session there were definitely spikes, and spykingcircus2 detected them. In this multi-shank session I am less sure (I have already applied kilosort4 to this recording, and it sorted out a lot of noise, including noise from computer monitors at around 144 Hz and electrical noise at 50 Hz).

Thanks also for the suggestion about parameters. I set radius_um to 150 simply because I used 150 um for whitening (I tested several radius_um values for whitening and reported the results in #3483, but I was never sure which radius works best). The single-shank session was recorded with a CambridgeNeurotech H5, which is what I will focus on in the future, whereas the multi-shank session was recorded with a stack of CambridgeNeurotech H10 probes. I am doing acute recordings on walking insects with these probes, and there is not much information about this type of recording (or about the probes), so I would greatly appreciate your advice on tuning the parameters of spykingcircus2.

results_SC2.zip

ASSY-77-H5.json

H10_stacked_probes_2D.json

chiyu1203 avatar May 06 '25 13:05 chiyu1203

The error means that the file cannot be copied into memory. Strange. Can you run with the option si.run_sorter("spykingcircus2", ..., cache_preprocessing={"mode": "no-cache"}, ...) to see if the code runs? I'll write the SC2 docs soon to document this option.
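Since this option lives inside the sorter parameter dict, it can also be set by updating the defaults returned by ss.get_default_sorter_params(). A minimal sketch of that update, with an illustrative `defaults` dict standing in for the sorter's real default parameters:

```python
# Sketch: switch SpyKing Circus 2's preprocessing cache to "no-cache" via the
# sorter parameter dict. The `defaults` dict below is illustrative only, not
# the sorter's real default parameters.

def disable_preprocessing_cache(sorter_params):
    """Return a copy of sorter_params with preprocessing caching turned off."""
    params = dict(sorter_params)
    cache = dict(params.get("cache_preprocessing", {}))
    cache["mode"] = "no-cache"
    params["cache_preprocessing"] = cache
    return params

# Stand-in for ss.get_default_sorter_params("spykingcircus2"):
defaults = {
    "general": {"radius_um": 100},
    "cache_preprocessing": {"mode": "memory"},
}
patched = disable_preprocessing_cache(defaults)
print(patched["cache_preprocessing"]["mode"])  # -> no-cache
```

Copying the nested dict before mutating it keeps the original defaults untouched, so the same defaults can be reused for other runs.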

yger avatar May 06 '25 20:05 yger

Hi @yger, I tried that with sorter_params['cache_preprocessing'].update({"mode": "no-cache"}) and it worked! Then I ran into another issue when spykingcircus2 analysed the second shank.

Error running spykingcircus2
Traceback (most recent call last):
  File "c:\Users\einat\Documents\GitHub\ephys\raw2si.py", line 516, in <module>
    raw2si(thisDir, json_file)
  File "c:\Users\einat\Documents\GitHub\ephys\raw2si.py", line 461, in raw2si
    sorting_spikes = ss.run_sorter_by_property(
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\launcher.py", line 308, in run_sorter_by_property
    sorting_list = run_sorter_jobs(job_list, engine=engine, engine_kwargs=engine_kwargs, return_output=True)
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\launcher.py", line 106, in run_sorter_jobs
    sorting = run_sorter(**kwargs)
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\runsorter.py", line 199, in run_sorter
    return run_sorter_local(**common_kwargs)
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\runsorter.py", line 261, in run_sorter_local
    SorterClass.run_from_folder(folder, raise_error, verbose)
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\basesorter.py", line 310, in run_from_folder
    raise SpikeSortingError(
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\basesorter.py", line 270, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\sorters\internal\spyking_circus2.py", line 293, in _run_from_folder
    templates_array = estimate_templates(
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\core\waveform_tools.py", line 753, in estimate_templates
    templates_array = estimate_templates_with_accumulator(
  File "C:\Users\einat\Documents\GitHub\spikeinterface\src\spikeinterface\core\waveform_tools.py", line 827, in estimate_templates_with_accumulator
    assert spikes.size > 0, "estimate_templates() need non empty sorting"
AssertionError: estimate_templates() need non empty sorting

I guess this is just because no spikes were detected in this shank?! If so, is there an option to ignore that error and keep applying the sorter to the rest of the shanks? Typically I do multi-shank recordings and sort spikes shank by shank with ss.run_sorter_by_property, so I do not mind if one or two shanks did not capture any spikes in the experiment. Yes, documentation would be great! I can guess what those parameters mean, but I do not know which ones you would recommend changing when things do not go well. Personally, I would love a sorter that is more conservative about picking up putative spikes, which is why I am moving away from kilosort.
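Until the empty-shank case is handled upstream, one workaround is to loop over the shanks manually and skip any shank where sorting fails. A minimal sketch of that pattern; `sort_one_shank`, the recording dicts, and the local `SpikeSortingError` class are hypothetical stand-ins, not the real spikeinterface API:

```python
# Sketch: run a sorter shank by shank, skipping shanks that raise instead of
# aborting the whole run. All names below are illustrative placeholders.

class SpikeSortingError(RuntimeError):
    """Stand-in for spikeinterface.sorters.utils.misc.SpikeSortingError."""

def sort_one_shank(group, recording):
    # Placeholder: a real implementation would call ss.run_sorter(...)
    # on the sub-recording for this channel group.
    if not recording["spikes"]:
        raise SpikeSortingError("estimate_templates() need non empty sorting")
    return sorted(recording["spikes"])

def sort_all_shanks(recordings_by_group):
    """Sort each shank; shanks with no spikes are reported and skipped."""
    results = {}
    for group, rec in recordings_by_group.items():
        try:
            results[group] = sort_one_shank(group, rec)
        except SpikeSortingError as err:
            print(f"shank {group} skipped: {err}")
    return results

# Toy data: shank 0 has spike times, shank 1 is empty.
recordings = {0: {"spikes": [3.1, 0.2]}, 1: {"spikes": []}}
out = sort_all_shanks(recordings)
print(out)  # only shank 0 appears in the results
```

The same try/except shape would wrap a real per-group ss.run_sorter call after splitting the recording by its "group" property.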

chiyu1203 avatar May 07 '25 16:05 chiyu1203

From your error message, it looks like you are not using the latest versions of SC2 and spikeinterface. Could you try updating from git (PyPI may also work, since there was a release today)? Hopefully this error will then disappear; otherwise I'll try to make a patch and/or better catch the case where no spikes are detected.

yger avatar May 07 '25 18:05 yger

That's true. Sorry, I forgot to mention that. This spikeinterface was built from source (version 0.102.2); the last time I pulled was a month ago.

I have just pulled the latest version and re-run the analysis (I have also updated spikeinterface via pip). I still see this error if I do not apply sorter_params['cache_preprocessing'].update({"mode": "no-cache"}):

0.zip

Exception in initializer:
Traceback (most recent call last):
  File "c:\Users\einat\anaconda3\envs\spike_interface\lib\concurrent\futures\process.py", line 233, in _process_worker
    initializer(*initargs)
  File "c:\Users\einat\anaconda3\envs\spike_interface\lib\site-packages\spikeinterface\core\job_tools.py", line 614, in process_worker_initializer
    worker_dict = init_func(*init_args)
  File "c:\Users\einat\anaconda3\envs\spike_interface\lib\site-packages\spikeinterface\core\recording_tools.py", line 266, in _init_memory_worker
    shm = SharedMemory(shm_names[i])
  File "c:\Users\einat\anaconda3\envs\spike_interface\lib\multiprocessing\shared_memory.py", line 180, in __init__
    self._mmap = mmap.mmap(-1, size, tagname=name)
OSError: [WinError 1450] Insufficient system resources exist to complete the requested service
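For context on why "no-cache" helps: the traceback fails inside multiprocessing.shared_memory.SharedMemory, which the in-memory caching path uses to share the preprocessed recording between worker processes. On Windows that memory is backed by the paging file, so allocating a block the size of the whole recording can fail with WinError 1450 even when RAM looks free. A tiny sketch of the mechanism itself (sizes here are arbitrary, and the attach-by-name step mirrors the SharedMemory(shm_names[i]) call in the traceback):

```python
from multiprocessing import shared_memory
import struct

# Create a small shared-memory block. The caching path that crashes above
# allocates one sized to the entire preprocessed recording, which is what
# can exhaust Windows' paging-file-backed resources.
shm = shared_memory.SharedMemory(create=True, size=4096)

# Write a float32 value into it, as a producer process would.
struct.pack_into("f", shm.buf, 0, 42.0)

# A second handle attaches to the same block by name, as each worker
# process does in its initializer.
shm2 = shared_memory.SharedMemory(name=shm.name)
(first,) = struct.unpack_from("f", shm2.buf, 0)
shm2.close()

shm.close()
shm.unlink()   # release the OS-level resource
print(first)   # -> 42.0
```

With "no-cache", no such block is allocated at all; each worker reads and preprocesses its own chunks on the fly, trading speed for memory.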

On the other hand, I no longer see the error

AssertionError: estimate_templates() need non empty sorting

when I apply sorter_params['cache_preprocessing'].update({"mode": "no-cache"}). Putative spikes were detected in every shank this time, so I am not sure what was going wrong before.

chiyu1203 avatar May 07 '25 20:05 chiyu1203