
2D array_annotations for ImageSequence objects

Open rgutzen opened this issue 5 years ago • 2 comments

As already touched upon in issue #745, one step towards better representing the geometry and spatial information of data is to have array_annotations with the same dimensionality as the arrangement of channels. In particular, for 2D optical data in ImageSequence objects, the corresponding array_annotations need to be extended to 2D arrays to enable pixel-wise annotations.
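To make the request concrete, here is a minimal sketch of the intended relationship between the data and a pixel-wise annotation, using plain numpy (the variable names and the (frames, height, width) layout are assumptions for illustration, not the actual Neo internals):

```python
import numpy as np

# Hypothetical optical imaging data as a stack of frames,
# shape (n_frames, height, width), as an ImageSequence would hold it.
n_frames, height, width = 100, 4, 5
image_data = np.random.rand(n_frames, height, width)

# A pixel-wise array_annotation would carry the spatial shape
# (height, width): one value per pixel, rather than one value per
# channel as in the current 1D array_annotations.
roi_label = np.zeros((height, width), dtype=int)
roi_label[1:3, 2:4] = 1  # mark a small region of interest

# The annotation matches the spatial dimensions of the data,
# not its full shape (it is constant across frames).
assert roi_label.shape == image_data.shape[1:]
```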

rgutzen · Nov 20 '19 11:11

+1

samuelgarcia · Nov 20 '19 11:11

@rgutzen asked me what this would look like from an implementation perspective.

The current implementation of array_annotations explicitly checks that they are 1D. Changing this should be relatively simple by checking for shape instead of length: the method _get_arr_ann_length would be changed to return the required shape of the array_annotations (which would have to be defined somewhere, as it is not usually equal to the data shape), and _normalize_array_annotations would then check for shape instead of length.

Apart from that, I found only one place where the shape is currently required to be 1D: _merge_array_annotations, which is called when corresponding objects are merged. The desired behavior here needs to be defined analogously to the merging of the objects themselves. Since 2D array_annotations seem to apply mainly to BaseSignal and its subclasses, overriding this method might be required.

Slicing should work with arbitrary types of indexing, and thus also for multidimensional arrays. If I haven't missed anything, this means the rest of the implementation should not need to change.
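This is because numpy indexing already behaves uniformly for N-D arrays: applying the same spatial indices to the data and its 2D annotation keeps them aligned without any extra code. A small illustration (array names are made up for the example):

```python
import numpy as np

height, width = 4, 5
frame = np.random.rand(height, width)
# One annotation value per pixel, aligned with the frame.
annotation = np.arange(height * width).reshape(height, width)

# Slicing both with the same spatial indices preserves the
# pixel-to-annotation correspondence automatically.
sub_frame = frame[1:3, 2:4]
sub_annotation = annotation[1:3, 2:4]
assert sub_annotation.shape == sub_frame.shape == (2, 2)
```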

muellerbjoern · Feb 12 '20 15:02