Baptiste Grimaud

Results 67 comments of Baptiste Grimaud

For now I just set the `n_jobs` sorter param to 1; I did not directly specify the number of workers for `estimate_templates`.

I checked it out; I'll let it run overnight to be sure, but it has already passed the point where it used to crash. It selected 5 workers as the...

Seems like there's another hangup further down the line: ``` Preprocessing the recording (bandpass filtering + CMR + whitening) noise_level (workers: 20 processes): 100%|██████████████████████████████████████████| 20/20 [00:17

I tried again with the newer version of the PR, but I'm having some trouble replicating what I had before: if I pass the number of cores directly to...

It looks like `get_optimal_n_jobs` was removed in [this](https://github.com/SpikeInterface/spikeinterface/pull/3721/commits/8c400f439a8395f294758db87853a65df5096b8c) commit; I'm not sure if this was intentional, since apparently some commits were split into a different PR.

> I get a (no parallelization) tag with each operation

I think I found the source of the issue: in `set_optimal_chunk_size`, `job_kwargs` is updated by `job_kwargs = fix_job_kwargs(dict(chunk_duration=f"{chunk_duration}s"))`, which overwrites...
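A minimal, self-contained sketch of the overwrite pattern described above, using simplified stand-ins for spikeinterface's `fix_job_kwargs` / `set_optimal_chunk_size` helpers (the defaults and signatures here are illustrative, not the library's actual ones):

```python
# Hypothetical simplified defaults, standing in for the library's global job kwargs.
DEFAULT_JOB_KWARGS = {"n_jobs": 1, "chunk_duration": "1s"}

def fix_job_kwargs(job_kwargs):
    # Fill in any missing keys with the defaults (simplified stand-in).
    fixed = dict(DEFAULT_JOB_KWARGS)
    fixed.update(job_kwargs)
    return fixed

def set_optimal_chunk_size_buggy(job_kwargs, chunk_duration=2.0):
    # Bug pattern: job_kwargs is rebuilt from only chunk_duration,
    # so the caller's n_jobs is silently dropped back to the default.
    return fix_job_kwargs(dict(chunk_duration=f"{chunk_duration}s"))

def set_optimal_chunk_size_fixed(job_kwargs, chunk_duration=2.0):
    # Fix pattern: merge the new chunk_duration into the caller's kwargs
    # so n_jobs (and anything else the caller set) survives.
    return fix_job_kwargs({**job_kwargs, "chunk_duration": f"{chunk_duration}s"})

caller_kwargs = {"n_jobs": 20}
print(set_optimal_chunk_size_buggy(caller_kwargs)["n_jobs"])  # 1 (dropped)
print(set_optimal_chunk_size_fixed(caller_kwargs)["n_jobs"])  # 20 (kept)
```

With the buggy variant, every downstream operation sees `n_jobs=1`, which would explain a "(no parallelization)" tag on each step.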

Thanks a lot for the details!

> There was an additional factor of 64 in the original code.

This number is the old default (20*64). From that and the...

> If the additional data is at spike level we would need an additional function at API level not do this extractor per extractor.

This is what I was wondering...

If this is too much for now, I think the PR can stay as is and just expose unit locations as a property. Regarding individual spikes, being able to load...
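A quick sketch of the "expose unit locations as a property" idea, on a toy sorting-like class (the class and attribute names are hypothetical, not spikeinterface's actual API):

```python
import numpy as np

class SortingWithLocations:
    """Toy sorting-like object that exposes precomputed unit locations
    as a read-only property (illustrative names, not the real API)."""

    def __init__(self, unit_ids, unit_locations):
        self._unit_ids = list(unit_ids)
        # One (x, y) location per unit, stored as a float array.
        self._unit_locations = np.asarray(unit_locations, dtype=float)

    @property
    def unit_locations(self):
        # Return a copy so callers cannot mutate the stored locations.
        return self._unit_locations.copy()

sorting = SortingWithLocations(["u0", "u1"], [[10.0, 20.0], [15.0, 40.0]])
print(sorting.unit_locations.shape)  # (2, 2)
```

Per-unit data fits a property like this naturally; per-spike data is larger and lazier access (the point raised above) would need a different mechanism.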

> One of the major goals of spikeinterface is creating reproducible pipelines. If we allow the extensions to be loaded then we can't guarantee/ensure that they could be reproduced in...