python-neo
2D array_annotations for ImageSequence objects
As already touched upon in issue #745, one step towards better representing geometries and spatial information of data is to have array_annotations with the same dimensionality as the arrangement of channels. In particular, for the case of 2D optical data in ImageSequence objects, the corresponding array_annotations need to be extended to 2D arrays to enable pixel-wise annotations.
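To make the goal concrete, here is a minimal sketch in plain NumPy of what a pixel-wise annotation would look like. The names (`roi_labels`) are purely illustrative and not part of the current neo API; the point is only that the annotation shape matches the spatial dimensions of the data, not its full shape.

```python
import numpy as np

# An ImageSequence holds data of shape (frames, height, width).
# A 2D array_annotation would then have shape (height, width),
# i.e. one value per pixel.
frames, height, width = 10, 4, 5
data = np.random.rand(frames, height, width)

# e.g. a hypothetical per-pixel region-of-interest label
roi_labels = np.zeros((height, width), dtype=int)
roi_labels[1:3, 2:4] = 1  # mark a rectangular ROI

# the annotation shape matches the spatial dimensions, not the data shape
assert roi_labels.shape == data.shape[1:]
```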
+1
@rgutzen asked me about what this would look like from an implementation perspective.
The current implementation of array_annotations explicitly checks that they are 1D. Changing this should be relatively simple by checking the shape instead of the length. There is a method `_get_arr_ann_length`, which should be changed to return the required shape of the array_annotations (this shape would have to be defined somewhere, as it is not generally equal to the data shape). Then `_normalize_array_annotations` can check the shape instead of the length.
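A minimal sketch of what this shape-based check could look like. Both function names here are stand-ins for illustration (the real `_get_arr_ann_length` and `_normalize_array_annotations` are methods with more responsibilities); the assumption is that for an ImageSequence the required annotation shape is the spatial part of the data shape.

```python
import numpy as np

def get_arr_ann_shape(data_shape):
    # illustrative assumption: for (frames, height, width) data,
    # annotations cover the spatial dimensions (height, width)
    return data_shape[1:]

def normalize_array_annotation(value, data_shape):
    """Check an annotation against a required shape instead of a 1D length."""
    value = np.asarray(value)
    required = get_arr_ann_shape(data_shape)
    if value.shape != required:
        raise ValueError(
            f"Array annotation has shape {value.shape}, expected {required}")
    return value

ann = normalize_array_annotation(np.ones((4, 5)), (10, 4, 5))
assert ann.shape == (4, 5)
```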
Apart from that, I found only one place where the shape is currently required to be 1D: `_merge_array_annotations`, which is called when corresponding objects are merged. The desired behavior here needs to be defined analogously to the merging of the objects themselves. Since 2D array_annotations seem to apply mainly to `BaseSignal` and its subclasses, overriding this method there might be required.
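Assuming merging means stacking two objects side by side along one spatial axis, the annotation merge could reduce to a concatenation along that same axis. This is only a sketch of one possible definition, not the behavior of neo's `_merge_array_annotations`:

```python
import numpy as np

def merge_2d_annotations(ann_a, ann_b, axis=1):
    # hypothetical merge rule: annotations must agree on all axes
    # except the one along which the parent objects are merged
    return np.concatenate([ann_a, ann_b], axis=axis)

a = np.zeros((4, 3))
b = np.ones((4, 2))
merged = merge_2d_annotations(a, b, axis=1)
assert merged.shape == (4, 5)
```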
Slicing should work with arbitrary types of indexing, and thus also for multidimensional arrays. Unless I missed something, this means the rest of the implementation should not need to change.
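To illustrate why slicing needs no special handling: applying the spatial part of a data index directly to a 2D annotation already does the right thing with plain NumPy indexing.

```python
import numpy as np

# a 2D annotation over a 4x5 pixel grid
ann = np.arange(20).reshape(4, 5)

# the same spatial slice used on the data applies directly to the annotation
sliced = ann[1:3, 2:4]
assert sliced.shape == (2, 2)
assert (sliced == np.array([[7, 8], [12, 13]])).all()
```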