
export to phy output is different in the new version of spikeinterface (0.95); phy cannot open it.


I encountered this problem while exporting waveforms to phy for manual curation with the new version of spikeinterface (0.95). With the previous version (0.94), which I had installed a few months ago, this problem did not exist. The export to phy seemingly completes with no error, but phy cannot open the result.

More importantly, the output files of the export are different from the previous version of spikeinterface (see screenshots): in the previous version a separate 'phy' folder was created inside the output folder, containing some other items. In the current version the output does not contain this folder and instead contains other files.

[Screenshots 1 and 2: export folder contents from the previous version and from the current version]

This may not be related, but I also tried export QTWEBENGINE_CHROMIUM_FLAGS="--single-process" in the terminal.

The error report from phy:

(env_1) azare@nib03:~$ phy template-gui  /home/azare/groups/PrimNeu/Aryo/analysis/sort/wfe_phy_2/params.py

15:08:12.199 [E] __init__:62          An error has occurred (AssertionError): 
Traceback (most recent call last):
  File "/home/azare/anaconda3/envs/env_1/bin/phy", line 8, in <module>
    sys.exit(phycli())
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/phy/apps/__init__.py", line 159, in cli_template_gui
    template_gui(params_path, **kwargs)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/phy/apps/template/gui.py", line 209, in template_gui
    model = load_model(params_path)
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/phylib/io/model.py", line 1348, in load_model
    return TemplateModel(**get_template_params(params_path))
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/phylib/io/model.py", line 339, in __init__
    self._load_data()
  File "/home/azare/anaconda3/envs/env_1/lib/python3.10/site-packages/phylib/io/model.py", line 358, in _load_data
    assert self.amplitudes.shape == (ns,)
AssertionError


QWidget: Must construct a QApplication before a QWidget
Aborted

How I export to phy:

import spikeinterface.exporters as exp

exp.export_to_phy(
    wfe_c,
    r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/phy_6',
    compute_pc_features=True,
    compute_amplitudes=True,
    peak_sign='both',
    copy_binary=True,
    n_jobs=-1,
    chunk_size=30000,
    progress_bar=True,
    verbose=True,
)

Aryo-Zare avatar Oct 11 '22 09:10 Aryo-Zare

I have very little knowledge of phy.

The hidden .phy folder is generated by phy itself, I think.

As for the problem with the amplitudes shape, I'll let @alejoe91 have a look!

samuelgarcia avatar Oct 13 '22 07:10 samuelgarcia

Hi @Aryo-Zare

Could you share the dataset with me? It seems to be an issue on our side with the shape of the computed amplitudes.

alejoe91 avatar Oct 13 '22 07:10 alejoe91

Hi @alejoe91

I just tested another recording session today and this problem disappeared! Even though I used the exact same syntax as before. I'll contact you again if I see a consistent problem. Aside from this, I'm trying to use a cosine similarity index to manually check for very similar templates to merge. Together with automatic curation using quality metrics, this may remove the need for phy.
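A minimal sketch of that idea, assuming the template-similarity helper in spikeinterface's postprocessing module is available in 0.95 and that wfe_c is the WaveformExtractor used in the export above (the 0.95 threshold below is an arbitrary choice, not a recommendation):

import numpy as np
import spikeinterface.postprocessing as spost

# cosine similarity between all pairs of unit templates
similarity = spost.compute_template_similarity(wfe_c, method="cosine_similarity")

unit_ids = wfe_c.sorting.unit_ids
# candidate pairs with very similar templates (upper triangle only, threshold picked arbitrarily)
rows, cols = np.where(np.triu(similarity, k=1) > 0.95)
for i, j in zip(rows, cols):
    print(f"units {unit_ids[i]} and {unit_ids[j]}: similarity {similarity[i, j]:.3f}")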

Aryo-Zare avatar Oct 13 '22 12:10 Aryo-Zare

Thanks @Aryo-Zare

Could you still send me the problematic dataset? I suspect it could be an edge case in the spike amplitude computation.

alejoe91 avatar Oct 13 '22 13:10 alejoe91

You're welcome Alessio,

Sure. Please find attached the cloud link to the files: https://ncloud.lin-magdeburg.de/s/QKTigdbPgRkifeG

The main recording session is zipped and is inside a folder named 'recording'. It's about 3 minutes long. I also attached the probe JSON file in case you want to generate the probe. I also uploaded the output of export_to_phy in a folder with the same name.

Please do not hesitate to let me know if there is a problem with opening the link.

Best, Aryo

Aryo-Zare avatar Oct 13 '22 16:10 Aryo-Zare

Thank you Aryo, can you also upload the sorting? You can just do sorting.save(folder="sorting").

alejoe91 avatar Oct 13 '22 16:10 alejoe91

Actually, I don't need it ;)

alejoe91 avatar Oct 13 '22 16:10 alejoe91

I found the issue (I think):

Your recording has fewer samples than the largest spike sample index! That's why the amplitudes don't correspond: the amplitude extraction uses the recording samples.

Are you sure you are using the correct recording?

This is what I tried:

import spikeinterface.extractors as se

rec = se.read_openephys("2022-05-18_15-49-21/", stream_id="0")
sort = se.read_phy("export_to_phy")

num_samples = rec.get_num_samples()
max_spike_samples = sort.to_spike_vector()["sample_ind"].max()

# how far past the end of the recording the last spike falls
excess_seconds = (max_spike_samples - num_samples) / rec.sampling_frequency
print(excess_seconds)
# prints ~14.27s

So basically this means that the last spikes happen after the recording has finished. IMO we should check for this and raise an error in that case!
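Such a guard could look roughly like this (just a sketch of the idea, not something spikeinterface 0.95 currently does; it would operate on the recording/sorting objects from the snippet above):

def check_spikes_within_recording(recording, sorting):
    # raise if the sorting contains spikes past the end of the recording
    num_samples = recording.get_num_samples()
    max_spike_sample = sorting.to_spike_vector()["sample_ind"].max()
    if max_spike_sample >= num_samples:
        excess_s = (max_spike_sample - num_samples) / recording.sampling_frequency
        raise ValueError(
            f"Sorting has spikes up to {excess_s:.2f} s past the end of the recording "
            f"({max_spike_sample} >= {num_samples} samples); check that the recording "
            "matches the one used for sorting."
        )

# e.g. called at the start of export_to_phy / amplitude computation:
# check_spikes_within_recording(rec, sort)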

alejoe91 avatar Oct 13 '22 17:10 alejoe91

Thanks a lot Alessio for testing this.

I reran the analysis on the same recording, and this time I got an error. I sorted the recording again and extracted the waveforms. The only change I made was increasing the 'detect_threshold' of spyking_circus from 7 to 9.

exp.export_to_phy(wfe_test , r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/wfe_test_phy' ,   compute_pc_features=True, compute_amplitudes=True, peak_sign='both' , n_jobs=-1 , chunk_size=30000 , progress_bar=True , verbose=True )
Warning: empty units have been removed when being exported to Phy
write_binary_recording with n_jobs = 144 and chunk_size = 30000
write_binary_recording: 100%|██████████| 156/156 [00:20<00:00,  7.63it/s]
extract amplitudes: 100%|██████████| 156/156 [00:00<00:00, 274.32it/s]
extract PCs: 100%|██████████| 156/156 [00:01<00:00, 82.20it/s]
Traceback (most recent call last):

  File "/tmp/ipykernel_54904/1253200856.py", line 1, in <module>
    exp.export_to_phy(wfe_test , r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/wfe_test_phy' ,   compute_pc_features=True, compute_amplitudes=True, peak_sign='both' , n_jobs=-1 , chunk_size=30000 , progress_bar=True , verbose=True )

  File "/home/azare/anaconda3/envs/env_11/lib/python3.9/site-packages/spikeinterface/exporters/to_phy.py", line 206, in export_to_phy
    pc_feature_ind[u, :] = best_channels_index[unit_id]

IndexError: index 220 is out of bounds for axis 0 with size 220

The commands that preceded this:

srt_test = ss.run_sorter(
    sorter_name='spykingcircus',
    recording=rd_pps,
    detect_sign=0,
    detect_threshold=9,
    template_width_ms=2,
    filter=False,
    output_folder=r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/srt_test',
)

wfe_test = si.extract_waveforms(
    rd_pps, srt_test,
    ms_before=1, ms_after=1,
    n_jobs=-1, chunk_size=30000,
    verbose=True,
    folder=r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/wfe_test',
)

It's strange. I'll see if it happens with other files.
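One thing that might be worth trying before the next export (a sketch only; remove_empty_units exists on spikeinterface sortings, but whether it avoids this particular IndexError is just a guess) is dropping the empty units myself before extracting waveforms, rather than letting the exporter do it:

import spikeinterface as si

# drop units that have no spikes, then extract waveforms from the cleaned sorting
srt_clean = srt_test.remove_empty_units()
print("units kept:", len(srt_clean.unit_ids), "of", len(srt_test.unit_ids))

wfe_clean = si.extract_waveforms(
    rd_pps, srt_clean,
    ms_before=1, ms_after=1,
    n_jobs=-1, chunk_size=30000,
    verbose=True,
    folder=r'/home/azare/groups/PrimNeu/Aryo/analysis/sort/wfe_test_clean',  # hypothetical output folder
)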

Aryo-Zare avatar Oct 14 '22 13:10 Aryo-Zare

@Aryo-Zare closing this for inactivity. Let us know if you find other problems!

Note that we are refactoring the postprocessing module to better handle sparsity (see #1152), and this should solve several issues :)

alejoe91 avatar Dec 28 '22 11:12 alejoe91