NixIO performance writing and reading spike trains

schmitts opened this issue Apr 07 '17 · 6 comments

My use case is writing and reading hundreds of spike trains with on the order of 10k spikes each. I observe that the time NixIO takes to write and read the HDF5 file grows faster than linearly with the number of spike trains: the per-train time itself increases (see the timings below). Is there anything I can tune? The CPU is an Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz.

$ for n in 10 25 50 100 150 200 ; do ./test_spiketrain.py $n 10; done
writing/reading  10 spike train(s) with 10 spike(s) each took  0.47s/ 0.35s ( 0.05s/ 0.04s per train)
writing/reading  25 spike train(s) with 10 spike(s) each took  1.42s/ 1.32s ( 0.06s/ 0.05s per train)
writing/reading  50 spike train(s) with 10 spike(s) each took  4.36s/ 4.52s ( 0.09s/ 0.09s per train)
writing/reading 100 spike train(s) with 10 spike(s) each took 15.00s/16.57s ( 0.15s/ 0.17s per train)
writing/reading 150 spike train(s) with 10 spike(s) each took 31.42s/36.65s ( 0.21s/ 0.24s per train)
writing/reading 200 spike train(s) with 10 spike(s) each took 53.90s/61.84s ( 0.27s/ 0.31s per train)
#!/usr/bin/env python

import argparse
import time

import numpy as np
from quantities import ms

import neo

parser = argparse.ArgumentParser()
parser.add_argument('number_of_trains', type=int)
parser.add_argument('number_of_spikes_per_train', type=int)
args = parser.parse_args()

# Build a Block with one Segment holding the requested number of spike trains.
blk = neo.Block()
seg = neo.Segment()
blk.segments.append(seg)

for _ in range(args.number_of_trains):
    train = neo.SpikeTrain(times=np.arange(args.number_of_spikes_per_train) * ms,
                           t_stop=args.number_of_spikes_per_train * ms)
    seg.spiketrains.append(train)

# Time writing the whole block ('ow' overwrites any existing file).
start_write = time.time()
neo.NixIO(filename="blk.h5", mode='ow').write(blk)
write_duration = time.time() - start_write

# Time reading the whole block back from disk.
start_read = time.time()
neo.NixIO(filename="blk.h5").read_block()
read_duration = time.time() - start_read

print("writing/reading {:3d} spike train(s) with {:2d} spike(s) each "
      "took {:5.2f}s/{:5.2f}s ({:5.2f}s/{:5.2f}s per train)".format(
          args.number_of_trains,
          args.number_of_spikes_per_train,
          write_duration,
          read_duration,
          write_duration / args.number_of_trains,
          read_duration / args.number_of_trains))

schmitts · Apr 07 '17

Thanks for the issue and the example.

I'm currently working on resolving the biggest offenders for this issue. Over the past few weeks I've identified a couple of NIX operations that don't scale nicely, and I should have a PR ready soon. With my current changes, saving and loading scale linearly with the number of objects, but I'd like to try a few more things before tidying up the code and submitting it.

achilleas-k · Apr 07 '17

@achilleas-k I think the non-linear scaling when writing spiketrains has been addressed in recent releases, right? Still, NixIO does not seem ideal for systematic use with simulation data containing thousands of spiketrains. Are there any more ideas for improving the handling of such large numbers of spiketrain objects?

JuliaSprenger · Dec 04 '20

@JuliaSprenger I think that the proposed SpikeTrainList class should alleviate this problem, don't you think?

apdavison · Dec 04 '20
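
To make the idea concrete: the point of a SpikeTrainList-style container is that all spikes can live in one flat array plus a unit index, with individual SpikeTrain objects materialised only when they are actually needed. The rough sketch below is illustrative only; the class and its methods are not neo's actual SpikeTrainList API.

import numpy as np
import quantities as pq
import neo

class FlatSpikeTrainList:
    """Illustrative container: one flat spike-time array for many units."""

    def __init__(self, spike_times, channel_ids, t_stop):
        self.spike_times = np.asarray(spike_times)   # all spikes of all units, in ms
        self.channel_ids = np.asarray(channel_ids)   # unit id per spike
        self.t_stop = t_stop

    def __iter__(self):
        # Materialise one neo.SpikeTrain per unit only on demand.
        for cid in np.unique(self.channel_ids):
            times = self.spike_times[self.channel_ids == cid]
            yield neo.SpikeTrain(times * pq.ms, t_stop=self.t_stop * pq.ms)

# Two arrays describe everything, so a file backend could store them as a
# single large dataset instead of hundreds of small objects.
stl = FlatSpikeTrainList(spike_times=[1.0, 2.5, 0.7, 3.1],
                         channel_ids=[0, 0, 1, 1],
                         t_stop=10.0)
for st in stl:
    print(len(st))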

Yes, I think SpikeTrainList is really important, not only for format IO but also for memory handling, and for the main logic of the Group object. Having one Group for a 512-channel AnalogSignal but 512 Groups for 512 SpikeTrains is a bad approach at the moment.

samuelgarcia · Dec 04 '20
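
The asymmetry is easy to see with plain neo objects: a 512-channel recording is a single AnalogSignal, whereas the spikes of 512 units currently require 512 separate SpikeTrain objects. The snippet below only illustrates the object counts, not NIX internals.

import numpy as np
import quantities as pq
import neo

seg = neo.Segment()

# One object covers all 512 analog channels...
sig = neo.AnalogSignal(np.zeros((1000, 512)), units='mV',
                       sampling_rate=30 * pq.kHz)
seg.analogsignals.append(sig)

# ...but 512 objects are needed for the spikes of 512 units.
for _ in range(512):
    st = neo.SpikeTrain(np.arange(10) * pq.ms, t_stop=1000 * pq.ms)
    seg.spiketrains.append(st)

print(len(seg.analogsignals), len(seg.spiketrains))   # 1 512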

That indeed sounds like it would speed up reads/writes in NIX. Writing a single object with a large amount of data is much faster than splitting the data across multiple objects.

A side note: we can also stop splitting AnalogSignals soon, which will speed that up as well.

achilleas-k · Dec 04 '20
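
The effect can be reproduced without NIX at all. The sketch below uses h5py directly, purely to illustrate the HDF5 pattern (not NixIO's internal layout), and times one large concatenated dataset against many small per-train datasets holding the same spike times.

import time
import numpy as np
import h5py

n_trains, n_spikes = 500, 10000
trains = [np.sort(np.random.rand(n_spikes)) for _ in range(n_trains)]

# Many small datasets: one HDF5 object per spike train.
t0 = time.time()
with h5py.File("many.h5", "w") as f:
    for i, t in enumerate(trains):
        f.create_dataset("train_{}".format(i), data=t)
print("many small datasets: {:.2f}s".format(time.time() - t0))

# One large dataset: all spike times concatenated, plus an offsets index
# recording where each train starts, so the trains can be rebuilt on read.
t0 = time.time()
with h5py.File("one.h5", "w") as f:
    f.create_dataset("all_spikes", data=np.concatenate(trains))
    f.create_dataset("offsets", data=np.cumsum([0] + [len(t) for t in trains]))
print("one large dataset:   {:.2f}s".format(time.time() - t0))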

Possibly solved by #1022.

samuelgarcia · Jun 14 '23