spikeinterface
Try "Handle motion/drift" documentation as a sphinx gallery
This PR adds a sphinx-gallery version of the current "Handle motion/drift with spikeinterface" page. It is an implementation of 'method 2' as described in further detail in #2881 . Very keen to get feedback on both the general long-docs build approach and the actual content of the page. Currently the new and old versions are both there (the new version is renamed with 'NEW' appended) for comparison's sake.
This PR explores how the current 'How To' files that are built manually could be automated, using the Handle Drift page as an example.
The changes are:
- The handle drift page was rewritten as a sphinx-gallery script. It is now linked from the 'How To' page alongside the existing page on which it is based, with 'NEW' appended to the name.
- Some code is added to `conf.py` so that the page is only built with plots when the tag `-t handle_drift` is passed. It outputs to a 'long_tutorials/handle_drift' folder.
So, similar to the previous approach, you build these long-running pages, they output to a specific folder, and the images etc. are shared on GitHub. However, rather than rebuilding manually, they are now generated with an optional tag on the sphinx-build command. To take a look at this documentation you can pull the PR branch and build the docs in the usual way (without `-t handle_drift`) and the page should be there. If you wanted to make a change to the documentation, you would build with `-t handle_drift` and push the results to git.
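For readers unfamiliar with Sphinx tags, the gating described above can be sketched roughly like this (a `conf.py` fragment; the option values and file name here are assumptions, the PR's actual implementation may differ). Sphinx injects a `tags` object into `conf.py`, and sphinx-gallery's `filename_pattern` controls which example scripts are actually executed:

```
# Sketch of conf.py gating (hypothetical values; see the PR diff for the real ones).
# `tags.has("handle_drift")` is True when building with `sphinx-build -t handle_drift ...`.
sphinx_gallery_conf = {
    "examples_dirs": ["tutorials"],
    "gallery_dirs": ["long_tutorials"],
    # By default execute nothing (pattern matches no file); with the tag,
    # execute the drift tutorial so its plots are regenerated.
    "filename_pattern": r"/plot_handle_drift\.py" if tags.has("handle_drift") else r"$^",
}
```

Without the tag, the previously committed images are reused, so a normal docs build stays fast.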
A note on reviewing
This PR keeps the old paragraph on interpreting the motion correction outputs from the previous iteration of these docs. That paragraph was based on the Kilosort outputs and will not make sense here. However, I thought it made sense to leave it in and first figure out the best parameters to use, before re-running and writing a new paragraph to interpret the outputs. The section I am referring to is '# A few comments on the figures'.
@samuelgarcia @cwindolf this PR swaps out the kilosort recording for the generate_drifting_recording() in this example. You can see the generated plots in the PR changelog. It would be great if you could advise on the best settings to use for the synthetic recording.
OK this is ready for review now.
I love this! The local workflow is great: by default you do what you always do, then only need to add the tag when you're working on that doc. Perfect! I also think that sphinx_gallery files are easier to read/edit than the rst files (i.e. plot_handle_drift.py is much easier to edit than handle_drift.rst).
By default the CI won't build the long_tutorials, right? And from the discussion in #2881, the plan would be to run all the long tutorials occasionally, maybe before a release? Is that right?
(I'll read the actual tutorial soon!)
@JoeZiminski it is in my to-do list to review this. Just a ping
Hello, I just did the tutorial alongside some of my own data - it was fun! One thing that stuck out: it's not obvious to me when you need to apply drift correction. At the workshop, we saw talks where you could really see the drift in the raw data. But when I look at my traces, I have no idea. And using my data, the three correction methods give pretty different results. The tutorial says "always plot the motion correction information for your recordings, to make sure the correction is behaving as expected!" but I have no idea what is expected. Anyone have any general tips? @cwindolf @alejoe91
Also, I remember speaking to @Ashkees , who said she spent a while learning about drift before realising her probe didn't need any correction. So maybe it's worth saying which technologies require it and which don't? I don't know which ones do and don't, though! Anyone know which probes are prone to drift? All the high density ones??
Thanks @h-mayorquin, no rush!
Cheers @chrishalcrow, this is a great point. I think an additional section, 'when to apply motion correction', would be useful before the summary. For example, I had a case recently (a Cambridge Neurotech shank with only 16 channels, which I guess these algorithms were not really designed for) where the motion correction was introducing a lot of artefacts. Also, there was not much motion to begin with, as all recording sessions were close together in time and not too long, so correction was not really necessary. The output looked like this:
On the top left, the peaks look quite stable. But on the top right after correction, they have been smeared across the probe. Also, there are corresponding jumps in the motion vector where something is obviously going wrong in the estimation.
A case of real data with motion in it is the Neuropixels data pictured in the original version of this tutorial (below). So maybe we can include both, to show cases of real data where there is and isn't drift. This will also show what the motion correction outputs look like when things work and when things go wrong.
Good question about the probes! I'm not sure on this; my guess is that, for the most part, if you have peaks drifting in the data you probably could use some kind of motion correction. But these algorithms as currently implemented may fail for probes with lower channel counts, as above? In the KS4 docs I saw "For probes with fewer channels (around 64 or less) or with sparser spacing (around 50um or more between contacts), drift estimates are not likely to be accurate, so drift correction should be skipped" (here).
Actually, I just properly looked at their drift correction section which has a very nice explainer, we can link to it here I think.
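If we do add a 'when to apply motion correction' section, the KS4 guidance quoted above could even be captured as a simple pre-flight check. A minimal sketch (the function name and the hard thresholds of 64 channels / 50 um pitch are my own reading of the quoted guidance, not a spikeinterface API):

```python
import numpy as np

def drift_correction_recommended(channel_positions_um, min_channels=64, max_pitch_um=50.0):
    """Rough suitability check based on the Kilosort guidance quoted above.

    Treating the quoted numbers (64 channels, 50 um contact spacing) as a
    hard rule is this sketch's own assumption.
    """
    positions = np.asarray(channel_positions_um, dtype=float)
    if len(positions) < min_channels:
        return False
    # Vertical pitch: smallest gap between distinct contact depths (y coordinate).
    depths = np.unique(positions[:, 1])
    if len(depths) < 2:
        return False
    pitch = np.min(np.diff(depths))
    return bool(pitch <= max_pitch_um)

# A dense Neuropixels-like probe passes; a 16-channel shank does not.
dense = np.column_stack([np.zeros(384), np.arange(384) * 20.0])
sparse = np.column_stack([np.zeros(16), np.arange(16) * 25.0])
```

Something like this would of course only be a heuristic; the real recommendation would still be to inspect the motion estimates.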
I think an additional section 'when to apply motion correction' would be useful before the summary....
Thanks @JoeZiminski . Great. The main point I'd missed was: to see if you have drift, you need to apply the drift correction and take a look at the results. And then give some suggestions for what "bad" results would look like: lots of discrete spikes in the motion vectors and a corrected peak depth plot that makes less sense than the original. Kilosort's page is great! Their discussion on timescales also shows why your motion vector shouldn't be trusted, since those spikes are not on timescales of seconds. I don't think you need to include lots of different datasets, just some advice would be good.
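The "discrete spikes in the motion vectors" sanity check could even be automated. A minimal sketch of the idea (the function name and the speed threshold are illustrative assumptions, not spikeinterface functionality): physical drift unfolds over seconds to minutes, so a large displacement between adjacent temporal bins is more likely an estimation artefact than real motion.

```python
import numpy as np

def flag_motion_jumps(motion_um, bin_s=1.0, max_speed_um_per_s=100.0):
    """Return indices of temporal bins where the estimated motion jumps
    implausibly fast between adjacent bins.

    The default speed threshold is an arbitrary illustration, not a
    validated value.
    """
    motion = np.asarray(motion_um, dtype=float)
    speed = np.abs(np.diff(motion)) / bin_s  # um per second between bins
    return np.flatnonzero(speed > max_speed_um_per_s)

# Example: a slow 5 um ramp over 100 bins with one discrete 40 um jump.
motion = np.linspace(0.0, 5.0, 100)
motion[50:] += 40.0
jumps = flag_motion_jumps(motion, bin_s=1.0, max_speed_um_per_s=10.0)
```

If a check like this flags many bins, that would be a cue to distrust the motion estimate and compare against the uncorrected peak depth plot.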
So Kilosort suggests that only high-density high-channel-count probes should be motion corrected. I would mention this right at the start.
Thanks @chrishalcrow, great points, have updated, let me know what you think (I'm not super happy with what I've written, not sure if it's clear, all feedback welcome!)