Extracting traces in parallel to speed up get_noise_levels (or any other trace-related functions)
We can extend the pipeline machinery, as explained in #2380, in order to get data chunks in parallel. The list of chunks passed to the ChunkRecordingExecutor can be customized accordingly, to avoid looping over unnecessary chunks.
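To illustrate the idea (independently of the actual ChunkRecordingExecutor machinery), here is a minimal sketch of fetching an explicit list of chunks in parallel. Threads are a reasonable choice when the bottleneck is remote I/O; `read_chunk` and the chunk list are purely illustrative stand-ins, not SpikeInterface API.

```python
# Hypothetical sketch: read only the requested chunks, in parallel.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def read_chunk(bounds):
    start, stop = bounds
    # stand-in for recording.get_traces(start_frame=start, end_frame=stop)
    return np.arange(start, stop, dtype="float32")


# customized chunk list: only the chunks we actually need
chunks = [(0, 100), (500, 600), (1200, 1300)]

with ThreadPoolExecutor(max_workers=4) as pool:
    traces = list(pool.map(read_chunk, chunks))

print(len(traces), traces[0].shape)  # one array per requested chunk
```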
Is it normal that the behavior of get_random_data_chunks, in main, can allow overlapping chunks? This seems weird to me, and the tests are not passing because of this behavior I did not expect. Intuitively, when I want to select N random chunks from a small recording, I expect them to be non-overlapping (to avoid biasing the stats) @alejoe91 @samuelgarcia. But currently, this is not the case...
Note that I'm well aware that the parallelism adds a rather large overhead, so such a process is mainly useful when I/O is slow, for example when getting data from a remote location. Needs to be discussed.
I think that, when possible, the chunks should indeed be non-overlapping.
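One simple way to guarantee non-overlapping chunks is to sample slot indices without replacement on a grid of chunk-sized slots. This is just an illustrative helper, not the actual get_random_data_chunks implementation:

```python
import numpy as np


def random_nonoverlapping_chunks(num_frames, chunk_size, num_chunks, seed=None):
    """Pick random chunk start frames that are guaranteed not to overlap,
    by sampling without replacement from a grid of chunk_size-wide slots.
    (Hypothetical helper for illustration only.)"""
    rng = np.random.default_rng(seed)
    num_slots = num_frames // chunk_size
    if num_chunks > num_slots:
        raise ValueError("recording too short for that many non-overlapping chunks")
    slots = rng.choice(num_slots, size=num_chunks, replace=False)
    return np.sort(slots) * chunk_size


starts = random_nonoverlapping_chunks(10_000, 300, 5, seed=0)
# every pair of chunks [s, s + 300) is disjoint
assert all(b - a >= 300 for a, b in zip(starts[:-1], starts[1:]))
```

Sampling distinct slots (rather than rejection-sampling arbitrary starts) also makes the "recording too short" failure mode explicit instead of silently producing overlaps.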
Hi Pierre.
I am OK with the idea, but I am not sure I like the implementation.
The new run_traces_pipeline is more or less a ChunkRecordingExecutor that returns traces.
This is not a great idea, because the traces are pickled and transmitted to the main process, which consumes a lot of memory bandwidth.
I will try to make another implementation with shared memory and without the pipeline mechanism.
The pipeline module is for peaks or spikes; adding functionality for a random chunk getter makes the module more fuzzy, no?
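The shared-memory idea can be sketched roughly as follows: workers write their chunk directly into a pre-allocated shared buffer, so no traces are pickled back to the main process. All names here are illustrative; in a real worker you would attach to the segment by `shm.name` in the child process and fill it from `get_traces()`:

```python
from multiprocessing import shared_memory

import numpy as np

# pre-allocate one shared buffer for all the traces we want back
shape, dtype = (1000, 4), np.float32
shm = shared_memory.SharedMemory(create=True, size=int(np.prod(shape)) * 4)
buf = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
buf[:] = 0.0


def worker_fill(start, stop):
    # in a real worker: attach via SharedMemory(name=shm.name),
    # then write the chunk of traces into its slice of the buffer
    buf[start:stop] = 1.0  # stand-in for the actual traces


worker_fill(0, 500)
filled = float(buf[:500].sum())  # data is readable in place, nothing pickled

shm.close()
shm.unlink()
```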
I am open to any suggestion. I just wanted to highlight the potential speedup, especially for slow/remote I/O. Thanks a lot for having a look!
This has been done in #3359. Can be closed.