
read_neuralynx_sorting difficulties


Describe the bug
When I use spikeinterface se.read_neuralynx_sorting to create a sorting and then plot the rasters, there are more segments than expected.

To Reproduce
Run the attached script. When I run the script this is what I get:

24 1
num_units: 21
num_segments: 92
num spikes per unit: {0: 23328, 1: 3, 2: 77, 3: 15, 4: 481, 5: 57, 6: 2233, 7: 212, 8: 5839, 9: 6870, 10: 2929, 11: 4, 12: 1, 13: 41, 14: 2, 15: 465, 16: 27, 17: 1832, 18: 4194, 19: 12088, 20: 1}
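For reference, this is roughly what the attached script does (a minimal sketch; nldirectory is a placeholder for the data folder, and the full script is in the zip attached below):

import spikeinterface.extractors as se

nldirectory = "path/to/neuralynx/session"  # placeholder path

# read the Neuralynx spike files as a sorting and report its structure
sorting = se.read_neuralynx_sorting(nldirectory, sampling_frequency=32000, stream_id='0')
print("num_units:", len(sorting.unit_ids))
print("num_segments:", sorting.get_num_segments())
print("num spikes per unit:", sorting.count_num_spikes_per_unit())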

Expected behaviour
I was hoping to be able to get a raster plot. There are no exceptions reported.

Environment:

  • OS: Windows
  • Python version 3.11.1
  • Neo version neo==0.13.2
  • NumPy version numpy==1.24.4

Additional context
NeuralynxSpikeInteface.zip

HesTheMan avatar Aug 09 '24 00:08 HesTheMan

Thanks @HesTheMan. I'm busy for a couple of days, but I'll try to check this out soon unless someone else on the team has time. Maybe @PeterNSteinmetz knows more about how Neo divides Neuralynx recordings into segments, versus the Neuralynx GUI, which seems to just ignore any hiccups in the recording times. Since I don't know the system as well, if Peter doesn't know then I'll dig into what you shared soon.

zm711 avatar Aug 09 '24 12:08 zm711

Yes, depending on the files, there may in fact be time gaps between the records, which cause there to technically be more segments. IIRC the new neo.rawio.NeuralynxRawIO constructor has an option to ignore these gaps. However, I don't know whether SpikeInterface accepts that option and passes it on.

One could try writing code to instantiate the RawIO directly and use that argument.

PeterNSteinmetz avatar Aug 10 '24 00:08 PeterNSteinmetz

That seems like an option to try. For spikeinterface we propagate all arguments (we miss one occasionally, but the machinery is there, so it should be trivial to add if we missed it; mostly we just have to do version checks for new kwargs). @HesTheMan, could you try what Peter said and see what happens if you ignore the gaps when doing the reading? If that fails we can explore more. What Peter is getting at is that, since Neo needs to account for all reading formats, we are a little limited in how we can handle record dropping, so using the Neuralynx software directly might behave slightly differently than we do. I just need to determine whether this is a difference enforced by the Neo model or something we could iterate on.

zm711 avatar Aug 12 '24 15:08 zm711

I just checked and the option would be strict_gap_mode=False.
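For example, if your SpikeInterface version forwards this keyword to Neo, something like the following should work (a sketch; whether the kwarg is accepted depends on the installed version):

import spikeinterface.extractors as se

# strict_gap_mode=False makes the Neo reader more tolerant of small timing
# gaps between records, so fewer segments are created
recording = se.read_neuralynx(nldirectory, stream_id='0', strict_gap_mode=False)
print("segments:", recording.get_num_segments())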

PeterNSteinmetz avatar Aug 12 '24 15:08 PeterNSteinmetz

Thanks for the suggestion. I just tried recording = se.NeuralynxRecordingExtractor(nldirectory, stream_id='0', strict_gap_mode=False) with identical results. Additional suggestions are greatly appreciated.

HesTheMan avatar Sep 08 '24 11:09 HesTheMan

I tried sorting = se.read_neuralynx_sorting(nldirectory, 32000, stream_id='0', strict_gap_mode=False) and get this error:

line 25, in <module>
    sorting = se.read_neuralynx_sorting(nldirectory, 32000, stream_id='0', strict_gap_mode=False)
TypeError: NeuralynxSortingExtractor.__init__() got an unexpected keyword argument 'strict_gap_mode'

HesTheMan avatar Sep 08 '24 11:09 HesTheMan

That means that your version of spikeinterface is too old to use that argument. What version of spikeinterface do you have? You would be better off updating. Or, if you want to see whether this could work first, I would do:

from neo.rawio import NeuralynxRawIO

# read the directory with the gap-tolerant mode enabled
reader = NeuralynxRawIO(nldirectory, strict_gap_mode=False)
reader.parse_header()

Since the spikeinterface wrapper takes care of a lot of stuff when you request stream_id='0', you may need to specify some features for testing listed here.

So if you want to test the Neo fix first before changing your spikeinterface, do the above. If you would rather have the ease of the spikeinterface wrapper, update your spikeinterface and run the exact same code you ran above again.
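Once parse_header() has run, the generic RawIO API lets you inspect the segment structure directly, e.g. (a sketch, assuming a single block):

num_seg = reader.segment_count(block_index=0)
print("number of segments:", num_seg)
for seg_index in range(num_seg):
    t_start = reader.segment_t_start(block_index=0, seg_index=seg_index)
    t_stop = reader.segment_t_stop(block_index=0, seg_index=seg_index)
    print(f"segment {seg_index}: {t_start:.3f} s to {t_stop:.3f} s")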

zm711 avatar Sep 08 '24 12:09 zm711

I am using spikeinterface version 0.101 and neo version 0.13.3

si.__version__ '0.101.0'
neo.__version__ '0.13.3'

Here is what I see when I use Neo directly. With strict_gap_mode=True:

reader
NeuralynxRawIO: C:\Users\The Man\Documents\Neural Signals\2024 Analysis\SpikeInterface\NeuralynxSpikeInteface\nldirectory
nb_block: 1
nb_segment: [92]
signal_streams: [Stream (rate,#packet,t0): (32000.0, 5791, 1692964675954435) (chans: 1)]
signal_channels: [CSC1]
spike_channels: [chSE1#0#0, chSE1#0#1, chSE1#0#2, chSE1#0#4 ... chSE1#0#22, chSE1#0#23, chSE1#0#24, chSE1#0#25]
event_channels: []

And with strict_gap_mode=False:

NeuralynxRawIO: C:\Users\The Man\Documents\Neural Signals\2024 Analysis\SpikeInterface\NeuralynxSpikeInteface\nldirectory
nb_block: 1
nb_segment: [24]
signal_streams: [Stream (rate,#packet,t0): (32000.0, 5791, 1692964675954435) (chans: 1)]
signal_channels: [CSC1]
spike_channels: [chSE1#0#0, chSE1#0#1, chSE1#0#2, chSE1#0#4 ... chSE1#0#22, chSE1#0#23, chSE1#0#24, chSE1#0#25]
event_channels: []

So the number of segments went down to 24 with strict_gap_mode=False.

HesTheMan avatar Sep 08 '24 13:09 HesTheMan

Hi @HesTheMan, is this data big? The problem of Neuralynx gaps being fake gaps or real gaps is tricky! If you are sure that you should have one unique segment without any gaps, then the gap detector is still too strict even with strict_gap_mode=False. So I would need this dataset to gather some statistics about the gap tolerance.

samuelgarcia avatar Sep 16 '24 15:09 samuelgarcia

Or at least one of the files. I actually have a Java JAR containing a program that will dump the lengths and times of each block between the gaps as detected in strict mode. Let me know if you would like a copy of that.

PeterNSteinmetz avatar Sep 16 '24 15:09 PeterNSteinmetz

Here is a recording of one of the channels: NeuralynxSpikeInteface.zip

HesTheMan avatar Sep 20 '24 10:09 HesTheMan

There is actually another issue with this file. It was produced by Cheetah revision '6.4.1 Development', which is not recognized by the current Neuralynx code.

I should be able to come up with a fix later today.

PeterNSteinmetz avatar Sep 20 '24 16:09 PeterNSteinmetz

So I had a further look at this. The current code apparently does tolerate version '6.4.1 Development', and there is now a test of it in PR https://github.com/NeuralEnsemble/python-neo/pull/1563 (though I see that it is not working yet).

This file will load, but it has a lot of gaps, some of them as long as 3.48 s. I attach here a spreadsheet (ncsBlocks.ods) listing the segments of contiguous records with their start and end times, as well as the gaps.

These gaps are so large that I am not sure one wants to try and automatically skip over them.

A workaround in cases like this is to use the neo.rawio.NeuralynxRawIO class to load the files and then look at its segment structure. One can then gather the samples for each segment and join them together as appropriate for the application.
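For example (a sketch of that workaround; the stream index and the decision to simply concatenate are assumptions to adapt for your data):

import numpy as np
from neo.rawio import NeuralynxRawIO

reader = NeuralynxRawIO(nldirectory, strict_gap_mode=False)
reader.parse_header()

# gather the samples of every segment of block 0, stream 0, and join them
chunks = []
for seg_index in range(reader.segment_count(block_index=0)):
    raw = reader.get_analogsignal_chunk(block_index=0, seg_index=seg_index, stream_index=0)
    chunks.append(reader.rescale_signal_raw_to_float(raw, dtype='float64', stream_index=0))
joined = np.concatenate(chunks, axis=0)

# note: simple concatenation drops the gap durations, so absolute timestamps
# will drift relative to the original recording (see below)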

It is up to @samuelgarcia, but I suppose one could have a design with a gap-skipping parameter: a value of 0 would mean the old-style behaviour, a finite value would join segments separated by gaps up to that length, and a value of Inf would mean joining everything together regardless. The time base will be off slightly whenever one skips gaps, and this error will become progressively larger with larger tolerances.
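As a rough illustration of that idea, implemented downstream of Neo rather than inside it (tolerance in seconds; the function name is just for this sketch):

def group_segments_by_gap(reader, tolerance, block_index=0):
    """Group consecutive segment indices whose separating gap is <= tolerance."""
    groups = [[0]]
    for seg_index in range(1, reader.segment_count(block_index)):
        gap = (reader.segment_t_start(block_index, seg_index)
               - reader.segment_t_stop(block_index, seg_index - 1))
        if gap <= tolerance:
            groups[-1].append(seg_index)
        else:
            groups.append([seg_index])
    return groups

# tolerance=0 keeps every segment separate, float('inf') joins everything
print(group_segments_by_gap(reader, tolerance=0.5))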

PeterNSteinmetz avatar Sep 21 '24 19:09 PeterNSteinmetz

Thanks for taking the time to look at the data. The ncs files were created while the signals were being processed to extract spikes. Maybe that is the reason for the choppy recordings. The threshold and cluster criteria were set really low, so the equipment is getting flooded with spikes. I see the same gaps as you found in the spreadsheet (attached image: Gaps). If the timestamps are used to resync, is there a progressive issue?

HesTheMan avatar Sep 22 '24 16:09 HesTheMan

I don't understand how those ncs files are being created by the spike extraction process. Normally this is the raw continuous data from which the spikes are extracted.

PeterNSteinmetz avatar Sep 23 '24 00:09 PeterNSteinmetz

I just mean the equipment is recording raw continuous data but at the same time it is extracting spikes. Just trying to find a reason why there are so many gaps.

HesTheMan avatar Sep 23 '24 00:09 HesTheMan

The gaps are usually due either to someone starting and stopping the recording (though that seems unlikely for the brief gaps present here) or to mis-configuration of the recording system. This can happen depending on how the system configuration files for the recordings are set up.

If you want to fix that, as a next step I would suggest you contact the technical person responsible for the system or Neuralynx technical support. Of course you could just ignore the problem and treat the segments as one continuous stream, though that will likely cause a timestamp mismatch between the extracted spikes and the samples.

PeterNSteinmetz avatar Sep 23 '24 00:09 PeterNSteinmetz

I agree. I am the technical person. I will do some more recordings and work with Neuralynx to improve the configuration settings. When I plot the extracted spikes and the continuous data, they line up correctly if I use the timestamps in the ncs and nse files.

HesTheMan avatar Sep 23 '24 01:09 HesTheMan

It occurs to me you might also have a look at the log file generated by Neuralynx during the recordings. There may be some errors being logged at the gap times. These can be the result of buffer overflows or hardware errors.

PeterNSteinmetz avatar Sep 23 '24 01:09 PeterNSteinmetz

Can we close? Was this solved?

zm711 avatar Oct 11 '24 13:10 zm711

OK to close. Thank you very much for your help. My takeaway is that the Neuralynx recordings I have been trying to analyze have gaps in the continuous data, and these are being interpreted as segments. The next step is to go back and clean up the recording process.

HesTheMan avatar Oct 13 '24 17:10 HesTheMan

Sounds good. Feel free to open new issues if things come up! :)

zm711 avatar Oct 13 '24 17:10 zm711