Pierre Yger
I've checked, and the patch is working for now, allowing us to keep the speed. Please use it while I keep working on a deeper patch
However, I must say that while it seems to run, it is not at the speed of light. I'll keep your 1 min-long file as an example to optimize everything
For concurrent writes to a shared array, we'll have to wait for @samuelgarcia's input. But what I did here is a simple hack: before estimate_template, I adapt the number of jobs...
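For illustration, here is a minimal sketch of that kind of hack; the helper name, the memory model, and the buffer shapes are all assumptions, not the actual code in this PR:

```python
import psutil

def adapted_n_jobs(num_channels, num_units, requested_n_jobs,
                   samples_per_template=120, bytes_per_sample=4,
                   max_ram_fraction=0.5):
    """Hypothetical helper (not the PR's code): cap n_jobs so that each
    job's float32 template buffers, of total shape
    (num_units, samples_per_template, num_channels), fit in RAM."""
    available = psutil.virtual_memory().available * max_ram_fraction
    per_job = num_units * samples_per_template * num_channels * bytes_per_sample
    max_jobs = max(1, int(available // per_job))
    return min(requested_n_jobs, max_jobs)

# e.g. before template estimation on a 4225-channel recording
n_jobs = adapted_n_jobs(num_channels=4225, num_units=1000, requested_n_jobs=32)
```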
Yes, I'll keep debugging and making the software work for such a number of channels. At least, in this PR, I've also reduced the memory footprint for the SVD, and...
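As one illustration of how such a reduction can work (not necessarily what this PR does), fitting the decomposition incrementally on float32 batches avoids holding the full stacked waveform matrix in memory:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

def fit_svd_low_memory(waveform_batches, n_components=5):
    """Fit the decomposition batch by batch: only one float32 batch of
    shape (n_spikes_in_batch, n_samples) is resident at a time, instead
    of the full (n_spikes, n_samples) matrix."""
    ipca = IncrementalPCA(n_components=n_components)
    for batch in waveform_batches:
        ipca.partial_fit(np.asarray(batch, dtype=np.float32))
    return ipca
```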
Indeed, sorry about that. But this branch will still not work on your data. I'll try to finish one with a new clustering that would avoid the cleaning of the...
At least, I brought back get_optimal_n_jobs (sorry for the mistake) and your fix for set_optimal_chunk_size. Thanks a lot! Let's make the code work (quickly) on 4225 channels!
You could try #3847, which should avoid both "template estimation" and "detect_mixture", making the code fast and memory efficient for large arrays. This is still a bit experimental, but it...
Is it normal that the behavior of get_random_data_chunks, in main, can allow overlapping chunks? This seems weird to me, and thus the tests are not passing because of some behavior...
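For comparison, a minimal sketch of how non-overlapping chunks could be drawn; the helper is hypothetical, not the existing get_random_data_chunks implementation. Start frames are sampled from disjoint chunk-sized slots, so two chunks can never overlap:

```python
import numpy as np

def random_nonoverlapping_chunk_starts(num_frames, chunk_size, num_chunks, seed=None):
    """Draw start frames from disjoint chunk-sized slots."""
    rng = np.random.default_rng(seed)
    num_slots = num_frames // chunk_size
    if num_chunks > num_slots:
        raise ValueError("recording too short for that many non-overlapping chunks")
    slots = rng.choice(num_slots, size=num_chunks, replace=False)
    return np.sort(slots) * chunk_size

# e.g. 20 disjoint 10000-frame chunks from one minute at 30 kHz
starts = random_nonoverlapping_chunk_starts(30000 * 60, 10000, 20, seed=0)
```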
Note that I'm well aware that the parallelism adds a rather large overhead, so such a process can be useful when I/O is slow, for example when getting data...
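A small sketch of the idea, assuming a SpikeInterface-style recording.get_traces(start_frame, end_frame) call; the helper itself is hypothetical. When each read is dominated by slow or remote I/O, threads overlap the waits, which is what makes the pool overhead worth paying:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_chunks_threaded(recording, starts, chunk_size, max_workers=8):
    """Fetch chunks concurrently: each thread blocks on a slow
    get_traces call while the others keep reading."""
    def fetch(start):
        return recording.get_traces(start_frame=start, end_frame=start + chunk_size)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, starts))
```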
I am open to any suggestions. I just wanted to highlight the potential speedup there, especially for slow/remote I/O. Thanks a lot for having a look into that!