
[Feature]: Make `get_data_in_units` not load entire array into memory

Open rly opened this issue 10 months ago • 1 comment

What would you like to see added to PyNWB?

As mentioned in #1880, get_data_in_units() loads the entire dataset into memory. For large datasets, this is impractical and can silently exhaust a user's RAM.

Is your feature request related to a problem?

No response

What solution would you like?

What do you think about supporting the syntax timeseries.data_in_units[1000:2000, 5:10], i.e., adding a simple wrapper class WrappedArray that defines __getitem__ and delegates the slice argument to the underlying list / numpy array / h5py.Dataset / zarr.Array object?

We can reuse this wrapper class elsewhere to help with addressing slicing differences between different array backends (https://github.com/NeurodataWithoutBorders/pynwb/issues/1702) and improving performance in h5py slicing (https://github.com/h5py/h5py/issues/293). As mentioned in https://github.com/NeurodataWithoutBorders/pynwb/issues/1702, full unification of these libraries is outside the scope of this project, but I think providing this wrapper class with its few enhancements would only help.

If we do this, the wrapper class would probably live in HDMF.
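
A minimal sketch of what such a wrapper might look like. The conversion-callback design and the example values are assumptions for illustration, not a settled API:

```python
import numpy as np

class WrappedArray:
    """Applies a conversion lazily: only the requested slice is read."""

    def __init__(self, data, func):
        self.data = data  # list, numpy array, h5py.Dataset, or zarr.Array
        self.func = func  # conversion applied to each retrieved block

    def __getitem__(self, key):
        # Delegate the index expression to the backend, which reads only
        # the selected block, then convert just that block.
        return self.func(self.data[key])

    def __len__(self):
        return len(self.data)

# Example: emulate unit conversion on a small in-memory array.
data = np.arange(20, dtype=float).reshape(4, 5)
in_units = WrappedArray(data, lambda block: block * 0.001 + 0.5)
print(in_units[1:3, 2:4])  # reads and converts only a 2x2 block
```

With an h5py.Dataset or zarr.Array in place of the numpy array, the same __getitem__ call would translate into a partial read from disk rather than a full load.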

Do you have any interest in helping implement the feature?

Yes.

Code of Conduct

rly · Apr 01 '24 07:04

Interesting idea. I am personally curious about what the implementation of WrappedArray would look like.

Another alternative is to pass a slice as an argument to get_data_in_units, but that way the expressiveness of __getitem__ that most people know from numpy is lost.
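
For comparison, a rough sketch of that alternative with a hypothetical selection parameter (simplified to conversion and offset only):

```python
import numpy as np

# Hypothetical signature: the caller passes an index expression explicitly.
def get_data_in_units(timeseries, selection=slice(None)):
    block = timeseries.data[selection]  # read only the requested block
    return block * timeseries.conversion + timeseries.offset

# np.s_ builds the same index object that __getitem__ would receive, but
# the call is more verbose than timeseries.data_in_units[1000:2000, 5:10]:
# subset = get_data_in_units(timeseries, np.s_[1000:2000, 5:10])
```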

h-mayorquin · Apr 01 '24 23:04