Alessio Buccino
@samuelgarcia @h-mayorquin let's move forward with this!
@h-mayorquin I cleaned up a bit and exposed the option at the comparison level. Can you check? In particular, the implementation of the `agreement_matrix` [here](https://github.com/SpikeInterface/spikeinterface/pull/2192/files#diff-f42aa695033196c34fdc7e5417fc494bf4be98300823fcf17fbbf166a4ba85cdR1301). Also added some simple tests...
Hi @yyyaaaaaaa, what spikeinterface version are you using? How large is your recording? If the `save` function is not printing anything, it means it didn't run successfully, so it's...
Can you try with `n_jobs=1`? Just to see if it runs :)
With 1 job it's supposed to be slow. Can you try to gradually increase it? Does it work with 2?
Looking at this again, I have no clue why this is maxing out the CPUs as `n_jobs` seems to be correctly propagated to the `ProcessPoolExecutor` [here](https://github.com/SpikeInterface/spikeinterface/blob/main/src/spikeinterface/postprocessing/principal_component.py#L398)...
@cheydrick Thanks for tracking this down!! Yeah let's keep this open so that we can investigate more
Good guess! So I should just add the `max_threads_per_process`.
Thanks @JulesLebert! Realistically, we won't have time to tackle this before mid-June. Maybe you'd like to take a stab at it?
Another issue here: https://github.com/SpikeInterface/spikeinterface/issues/2895