Charlie Windolf
Another enhancement would be to make `xcorr` n-dimensional. Right now it is restricted to vectors, but I think nothing in the implementation forces that. (Apart from the issue...
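For illustration, here is a minimal numpy sketch of what an n-dimensional (batched) cross-correlation could look like, correlating along the last axis and broadcasting over leading dimensions. This is just the idea, not the existing `xcorr` implementation:

```python
import numpy as np

def batched_xcorr(a, b):
    """Full cross-correlation along the last axis, broadcasting over any
    leading (batch) dimensions. FFT-based, illustration only."""
    n = a.shape[-1] + b.shape[-1] - 1
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b, n)
    out = np.fft.irfft(fa * np.conj(fb), n)
    # reorder circular lags so they run from -(len(b)-1) to len(a)-1,
    # matching np.correlate(a1d, b1d, mode="full") for each batch entry
    return np.roll(out, b.shape[-1] - 1, axis=-1)

# a batch of 32 signal pairs, correlated in one call
a = np.random.randn(32, 1000)
b = np.random.randn(32, 1000)
c = batched_xcorr(a, b)  # shape (32, 1999)
```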
Would love to use this as well -- it would enable, for instance, running the `_strong_wolfe` line search from `torch.optim.lbfgs` on a batch of inputs.
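For context, the strong-Wolfe line search is currently reached through `torch.optim.LBFGS(..., line_search_fn="strong_wolfe")`, which only handles a single parameter set, so a batch of independent problems ends up in a Python loop (toy objective below, for illustration only):

```python
import torch

def fit_one(x0):
    # one strong-Wolfe LBFGS solve per input -- the line search itself
    # is not batched, so a batch of problems means a Python loop
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([x], line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = ((x - 3.0) ** 2).sum()  # toy objective
        loss.backward()
        return loss

    opt.step(closure)
    return x.detach()

batch = torch.randn(16, 4)
results = torch.stack([fit_one(x0) for x0 in batch])  # sequential, not batched
```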
Relevant to this issue: another universal SpikeGLX reading project from @jenniferColonell, which looks great and handles lots of cases! https://github.com/jenniferColonell/SGLXMetaToCoords
(See https://stackoverflow.com/questions/55424095/error-pickling-a-matlab-object-in-joblib-parallel-context for context on the above)
Hi @ThetaS, I wonder if you might be able to avoid this warning by setting `radius_um=50` or even higher in `localize_peaks`? I think that 75 or 100um...
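A rough sketch of what that could look like (exact method and parameter names vary a bit across spikeinterface versions, and `recording` stands for your loaded recording):

```python
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
from spikeinterface.sortingcomponents.peak_localization import localize_peaks

peaks = detect_peaks(recording, method="locally_exclusive")
peak_locations = localize_peaks(
    recording,
    peaks,
    method="monopolar_triangulation",
    radius_um=50.0,  # try 50, 75, or even 100 um
)
```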
I think there are 3 things to think about:
- For @ThetaS, 600M sounds like too many spikes. I usually think of ~1M spikes/hr as a rule of thumb...
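A quick back-of-envelope check against that rule of thumb (hypothetical numbers; `recording` is assumed to be the loaded recording):

```python
# ~1M spikes/hr rule of thumb vs. what was actually detected
hours = recording.get_total_duration() / 3600.0
expected_peaks = 1e6 * hours
detected_peaks = 600e6  # hypothetical count from the report above
print(f"expected ~{expected_peaks:.2e} peaks, detected {detected_peaks:.2e}")
# a large excess usually means the detection threshold is too permissive
```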
Looks cool! @oliche's strategy could be implemented here now.
Yeah... in my experience, using more blocks helps to stabilize the estimate (say, if we want numbers within x% of each other across runs with different seeds). The data certainly is...
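A toy illustration of the effect (plain numpy, not the actual estimator): compare the spread of MAD noise estimates across seeds when few vs. many random blocks are used:

```python
import numpy as np

rng_data = np.random.default_rng(0)
trace = rng_data.normal(0.0, 10.0, size=3_000_000)  # stand-in for one channel

def mad_from_blocks(trace, n_blocks, block_size, seed):
    # noise estimate from randomly chosen blocks, MAD scaled to Gaussian sigma
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, trace.size - block_size, size=n_blocks)
    chunk = np.concatenate([trace[s:s + block_size] for s in starts])
    return np.median(np.abs(chunk - np.median(chunk))) / 0.6745

for n_blocks in (5, 20, 100):
    ests = [mad_from_blocks(trace, n_blocks, 10_000, seed) for seed in range(10)]
    spread = 100.0 * (max(ests) - min(ests)) / np.mean(ests)
    print(f"{n_blocks:4d} blocks: spread across seeds = {spread:.2f}%")
```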
Yeah, it's wrong! But I don't have any better ideas. Ideally one would be able to subtract away all of the spikes and then take the MAD of the residuals (which would ideally...
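A sketch of that idea on a single channel, with hypothetical `spike_times`, `spike_labels`, and `templates` arrays: subtract each spike's template and take the MAD of what is left:

```python
import numpy as np

def residual_mad(trace, spike_times, spike_labels, templates):
    """Subtract each spike's template from a single-channel trace, then
    return the MAD of the residual (scaled to Gaussian sigma)."""
    residual = trace.astype(float)
    n = templates.shape[1]
    for t, label in zip(spike_times, spike_labels):
        start = int(t) - n // 2
        if start < 0 or start + n > residual.size:
            continue  # skip spikes too close to the edges
        residual[start:start + n] -= templates[label]
    return np.median(np.abs(residual - np.median(residual))) / 0.6745
```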
A PR here could try to fix https://github.com/SpikeInterface/spikeinterface/issues/2515 as well