Coordinate systems and new coordinate transformations proposal
This PR has four main contributions:
- Collects the existing axis specification into `coordinateSystems` - named collections of axes.
  - `coordinateTransformations` now have "input" and "output" coordinate systems.
- Adds many new useful types of coordinate transformations.
  - Informative examples of each type are now given in the spec.
- Describes the array/pixel coordinate system (origin at pixel center)
  - as agreed in issue 89 (see below)
- Adds a `longName` field for `axes`
  - also nice for NetCDF interop
See also:
- https://github.com/ome/ngff/issues/84
- https://github.com/ome/ngff/issues/94
- https://github.com/ome/ngff/issues/101
- https://github.com/ome/ngff/issues/89
- https://github.com/ome/ngff/issues/142
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/ome-ngff-community-call-transforms-and-tables/71792/1
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/intermission-ome-ngff-0-4-1-bioformats2raw-0-5-0-et-al/72214/1
@sbesson
I really appreciate you looking over this, and thank you for the helpful comments and feedback.
> In this context, what is the goal of the transform-details.bs page included in the current PR, i.e. would we consider splitting the document into multiple sections?
Initially, I had in mind to put lots of content in the `transform-details.bs` page: mostly examples, discussions, and common use cases. Semi-recently, I had a change of heart, and felt that some (most?) of it could go in the main document. I could still be convinced otherwise.
So to summarize, right now I have in mind to throw away `transform-details`, but could be convinced to migrate some of the content there (e.g. all the examples), if we decide it's more appropriate there.
A couple of questions about coordinate spaces for non-image elements.
**Specifying a coordinate space for coordinate data**
For something like point data or polygons, the coordinates of the points will be defined for some coordinate space. Any suggestions on how that would be annotated? Right now there is just a list of coordinate spaces that could apply; how do we say "these coordinates are for this specific coordinate space"?
**Mapping coordinates into a space with channels**
If I have FISH-like data + histology, I could have:
- An image in a coordinate system of
{
"name":"image-space",
"axes": [
{"name": "x", "type": "space"},
{"name": "y", "type": "space"},
{"name": "c", "type": "channel"}
]
}
- Points in a coordinate system:
{
"name":"point-space",
"axes": [
{"name": "x", "type": "space"},
{"name": "y", "type": "space"},
]
}
How do you think we could indicate that "x", "y" axes are shared between these coordinate systems? How can we say that these points can be plotted on top of this image?
My proposal is to specify explicitly a space in which the various elements live, as opposed to saying that the points live in the space of the image, or vice versa. This could be done by extending the concept of "array" space (valid for images and labels) to all the other spatial elements (i.e. points, and in the future polygons, etc.).
The array space acts as a default "local" space for images and labels, and this makes it possible to specify a transformation to one (or more) shared spaces by listing the transformations from the local space to the shared one. Having a default local space for points would make this possible for those elements as well.
In practical terms, this could be a way to describe a dataset with two samples that don't share space information and that we therefore don't want to align in space.
{
"coordinateSystems":[
{
"name":"sample0",
"axes":[
{
"name":"x",
"type":"space"
},
{
"name":"y",
"type":"space"
},
{
"name":"c",
"type":"channel"
}
]
},
{
"name":"sample1",
"axes":[
{
"name":"x",
"type":"space"
},
{
"name":"y",
"type":"space"
},
{
"name":"c",
"type":"channel"
}
]
}
],
"coordinateTransformations":[
{
"name":"sample0_points",
"type":"identity",
"input":"my_storage/points0",
"output":"sample0"
},
{
"name":"sample1_points",
"type":"scale",
"scale":[
1.0,
1.2
],
"input":"my_storage/points1",
"output":"sample1"
},
{
"name":"sample0_image",
"type":"identity",
"input":"my_storage/image0",
"output":"sample0"
},
{
"name":"sample1_image",
"type":"translation",
"translation":[
10.0,
10.0,
1.0
],
"input":"my_storage/image1",
"output":"sample1"
}
]
}
@ivirshup
> If I have FISH-like data + histology
> How do you think we could indicate that "x", "y" axes are shared between these coordinate systems?
This is a nice example, and is largely what I see `identity` being used for - to indicate that some coordinate system (or subset thereof) is "the same as" another. But I've defined `identity` to be invertible (which I like), so to make it go 3d-2d it needs some help, and `byDimension` can do the job.
An alternative is to use an `affine`, which is not required to have equal input and output dimensions. In this case, we need a 2x4 matrix.
"coordinateSystems" : [
{
"name":"image-space",
"axes": [
{"name": "x", "type": "space"},
{"name": "y", "type": "space"},
{"name": "c", "type": "channel"}
]
},
{
"name":"point-space",
"axes": [
{"name": "x", "type": "space"},
{"name": "y", "type": "space"}
]
}
],
"coordinateTransformations" : [
{
"name" : "option 1 - using identity by dimension",
"type": "byDimension",
"input" : "image-space",
"output" : "point-space",
"transformations": [
{ "type" : "identity", "input" : ["x","y"], "output": ["x","y"] }
]
},
{
"name" : "option 2- using affine",
"type": "affine",
"input" : "image-space",
"output" : "point-space",
"affine" : [ 1,0,0,0,
0,1,0,0]
}
]
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/ome-ngff-community-call-transforms-and-tables/71792/11
I just wanted to note down my main idea during the OME community meetings (thanks again to the organisers!), which is that coordinate systems should have types ("space" vs "time" vs "channel" vs "space-frequency"), but not units. The units should go on the transforms. Viewers and other tools consuming this metadata can then work in whatever unit is convenient to them, as long as that unit is compatible with the type of the space.
@jni, I'm not sure I totally understood your argument for putting units on the transforms instead of the spaces during the call, but it was also quite late for me 😅. Is this the main argument for it? And is there prior art for putting units on the transforms?
> Viewers and other tools consuming this metadata can then work in whatever unit is convenient to them, as long as that unit is compatible with the type of the space.
To me, this seems like an easy solve from the viewer side. It can just define a new coordinate space and a scaling transform to get there. I would imagine you'd effectively also need this for your proposal?
Just to mention it here, I had brought up a point in favor of keeping the unit on the axes. While the base coordinate system of an image is implicit (e.g. pixel space), we will have data types where this isn't the case. With point and polygon data we'll just have coordinates, and we'll want to know what `{"x": 42, "y": 3.7}` means in physical space. This is easy if we can say these points exist in:
{
"axes": [
{"name": "x", "units": "micrometers"},
{"name": "y", "units": "micrometers"}
]
}
I also think that the units should not belong to the transformation, because I see affine transformations as unitless. For instance, if you scale by 2 it doesn't matter which units are involved; the transformation would act in the same way. Also, I don't find it convenient to attach units to those transformations that are just injecting a lower-dimensional space into a bigger one, like the transformation that maps a labels object (segmentation masks) into the cyx space of an image, that is, one that goes from the pixel space (axes with array type) to a shared cyx space:
x' = x
y' = y
c' = 0
These types of functions act like an identity on a restricted codomain, and having a unit specified here would feel unnatural.
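To make that concrete, here is a minimal numpy sketch (mine, not from the spec) of the injection above written as an affine matrix in homogeneous form:

```python
import numpy as np

# Injection of 2D label coordinates (x, y) into a 3D (x, y, c) space, c' = 0.
# Homogeneous form: 3 output rows, 2 input columns plus 1 translation column.
A = np.array([
    [1, 0, 0],  # x' = x
    [0, 1, 0],  # y' = y
    [0, 0, 0],  # c' = 0
])

p = np.array([4.0, 7.0, 1.0])  # the point (x=4, y=7) in homogeneous form
print(A @ p)                   # [4. 7. 0.]
```

Scaling the spatial part of `A` by 2 behaves identically whether the axes are in micrometers or millimeters, which is the sense in which the matrix itself is unitless.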
I have to correct myself: translations are an example of affine transformations that are not unitless, but many transformations are still defined without units (scale, rotations, transformations moving the axes, `byDimension`, etc.).
Thanks to everyone who attended the community call last week. A few critiques / questions arose in the discussions:
**mapAxis vs mapIndex**
@dzenanz suggested removing `mapIndex` because its purpose overlaps entirely with `mapAxis` (direct mapping of input to output axes - permutations and projections).
- Someone else agreed but I forgot to write down their name
- I prefer `mapAxis` to `mapIndex` but chose to keep both in the proposal because some people at a hackathon preferred `mapIndex`
**choosing between transformations**
What to do if more than one transformation has the same input and output spaces (which to use?)
- Choose the "first" one, or allow selection by name (if names are provided); a sketch of this selection logic follows below
- A similar approach for choosing among options is already described for multiscales
- I will add text describing this possibility
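A sketch of how a consumer might implement that selection rule (the function and field names are illustrative, not from the spec):

```python
def select_transformation(transformations, input_cs, output_cs, name=None):
    """Pick the transformation between two coordinate systems.

    Follows the rule above: filter by input/output, optionally by name,
    and fall back to the "first" matching entry.
    """
    candidates = [
        t for t in transformations
        if t.get("input") == input_cs and t.get("output") == output_cs
    ]
    if name is not None:
        candidates = [t for t in candidates if t.get("name") == name]
    if not candidates:
        raise ValueError(f"no transformation from {input_cs!r} to {output_cs!r}")
    return candidates[0]
```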
**representing paths**
This also came up with respect to the tables proposal, so discussion has been forked into its own issue https://github.com/ome/ngff/issues/144
**where do units belong?**
@jni asked why units belong with axes + coordinateSystems and not, say, with the transformations. The idea being that viewers and tools should be free to work in whatever units they want and not be forced into something by the coordinate system. I see value in making dimensions / axes "identical up to changes of units" (~=), e.g. a spatial axis of unit "mm" ~= a spatial axis of unit "nm". I also completely agree that consuming software and users should not be forced into working with any particular units.
That is a reasonable choice to me, but here's why I like the current approach (with coordinate systems having units).
**It semantically matches my idea of "coordinates"**
For me, a coordinate is the number of units a point lies away from the origin. So having a coordinate without a unit doesn't make much sense. Meaning that, if coordinate systems don't have units, then one will generally (always?) have to specify units.
// if coordinate systems DO NOT have units
getImageValueAt( coordinateSystem, point, units )
// if coordinate systems DO have units
getImageValueAt( coordinateSystem, point )
I expect it would also mean that point annotations / ROIs etc. would have to store their numeric coordinates, the coordinate system name, and their units. Whereas if coordinate systems have units attached, then any points in that coordinate system must be understood to have those units.
**It encourages being explicit about transformations**
Implementations will need to transform between units if they ever need to work with different units. If an implementation wants to work with "the same" coordinate system but with new units (and write it to a zarr store), the spec as written will force it to make a whole new coordinate system with a new name, and with the transformation between them. This is the price we pay for the "convenience" of not having to specify units above.
This relates to @ivirshup's point above that consuming software is free to work in whatever units it wants after scaling appropriately. To be extra clear: the spec is only strict about units on coordinate systems that are serialized, and obviously has nothing to say about how things are implemented. So a viewer is free to treat spatial axes of any unit type as "the same" if it so chooses; it can always derive a coordinate system in its preferred unit, as in the sketch below.
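A minimal sketch of that derivation (the coordinate-system names and point values are illustrative assumptions, not from the PR):

```python
# Derive a nanometer-unit coordinate system from a micrometer one by writing
# an explicit, named scale transform between the two coordinate systems.
um_to_nm = {
    "name": "um-to-nm",
    "type": "scale",
    "scale": [1000.0, 1000.0],  # 1 micrometer = 1000 nanometers, per axis
    "input": "physical-um",
    "output": "physical-nm",
}

point_um = [42.0, 3.7]
point_nm = [x * s for x, s in zip(point_um, um_to_nm["scale"])]
print(point_nm)  # ~[42000.0, 3700.0]
```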
**Aside**
We should consider inheriting more unit information from Unidata, and could set up some conventions / defaults to make being explicit unnecessary sometimes.
**mapIndex / mapAxis vs permuteIndex / permuteAxes**
Wanted to touch on this from the call. Either is fine, but I prefer the `permute*` variant. It's probably because I've used APIs where this kind of transformation was `permutedims`. I also think `map` is pretty overloaded, and by name alone one could think `mapAxes` does something like apply a function over an axis. Permute is pretty specific to what is being done.
Tbh, I would also prefer `permuteDimensions` / `permuteAxes` since it's a permutation of the axes, not of an axis.
@ivirshup
While I'm okay with `permute*`, as @andy-sweet mentioned in the call, "permutations" are 1-1, whereas these operations can map one input axis to several outputs. Maybe this is okay though.
(Naming is hard)
> (Naming is hard)
Yeah...
Probably why the wikipedia page for permutations has a section on "Other uses of the term permutation". It says the term we want is ordered arrangement, and `arrangeDims` / `arrangeAxes` isn't so bad either.
I also like `permuteAxes` better than `mapAxis`. It is not an issue if it can do a little bit more than a simple permutation.
Some comments on `sequence`. `sequence` seems to be the only place where transformations can appear without `input` and `output` specified. I think this is useful, because if I am creating, for instance, an affine transformation by combining `translation`s and `scale`s, then I am operating in the same coordinate system, so one can simply apply the transformations one after the other.
There are nevertheless some edge cases that should be addressed:
1. `mapAxis` alone, without `input` and `output` coordinate systems specified, is not enough to define how the transformation operates.
2. In the case in which the user specifies the `input` and `output` for the transformations composing a `sequence`, I think that they should match (the `output` of a transformation equal to the `input` of the next), and otherwise the transformation should be invalid.
3. Some comments related to point 2:
   - 3.1. If the user combines a `translation` and an `affine`, as long as the dimensions match, no problem.
   - 3.2. If the user combines a `translation` and a `mapAxis`, as long as the `mapAxis` has `input` and `output` specified, in theory there is no problem, but the implementation gets more complex. In fact, if `input` and `output` are not specified for the `translation`, then the implementation needs to deduce that the `output` coordinate system of the `translation` is the same as its `input`, which is the same as the `input` of the `sequence`. This type of reasoning should be implemented recursively, for the case in which a `sequence` contains a `sequence`.
   - 3.3. If the user combines a `translation` and an `affine`, both defined without specifying `input` and `output` (but valid because the dimensions match), and then also combines a `mapAxis` that defines both `input` and `output`, this is still not enough to define the transformation. In fact there is no way to deduce the output coordinate system of the `affine` transformation unless it is explicitly defined.
My thoughts on this, some possible approaches:
1. Leaving things as they are. As long as the implementer is aware of these cases and the implementation infers the output coordinate systems where needed, there is no problem, and the user would simply get an error if a transformation is defined ambiguously. Pro: maximum flexibility of the specs. Cons: harder implementation, and easier for a user to make mistakes since it's easy to pass an affine matrix to a sequence with the wrong order of axes.
2. Requiring input and output for some transformations (see the sketch after this list). Transformations that can operate with a combination of `input`/`output` where `input != output`, in particular `affine`, `mapAxis`, `sequence`, `byDimension`, would be required to be specified together with their `input` and `output` coordinate systems when used inside a `sequence`. Transformations for which it is assured that `input = output`, that is `identity`, `translation`, `scale`, `rotation`, can be specified inside a `sequence` without making `input` and `output` explicit. Pro: specs still very flexible and less ambiguous. Cons: less clean specs; implementation probably as difficult as for 1.
3. Requiring input and output for all the transformations. Pro: specs simple to understand, simple implementation, less error potential for users. Cons: verbosity; one has to specify all the coordinate systems composing a `sequence` (but it is also true that one could reuse the same coordinate system in all the steps).
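Here is a rough Python illustration of option 2's chaining rule (my own sketch under the assumptions above, not the linked implementation):

```python
# Transformations guaranteed to satisfy input == output may omit them inside
# a sequence; all others (affine, mapAxis, byDimension, nested sequence) must
# declare both, and consecutive steps must chain output -> input.
SAME_SPACE_TYPES = {"identity", "translation", "scale", "rotation"}

def validate_sequence(seq):
    current = seq["input"]
    for step in seq["transformations"]:
        implicit = step["type"] in SAME_SPACE_TYPES
        step_in = step.get("input", current if implicit else None)
        if step_in is None:
            raise ValueError(f"{step['type']!r} must declare input in a sequence")
        if step_in != current:
            raise ValueError(f"{step['type']!r} does not chain from {current!r}")
        current = step.get("output", step_in if implicit else None)
        if current is None:
            raise ValueError(f"{step['type']!r} must declare output in a sequence")
    if current != seq["output"]:
        raise ValueError("the last step does not end in the sequence's output")
```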
The logic for cases 1 or 2 is about ~100 lines of code (I have implemented something in between 1 and 2; you can find it here in the static functions that start with `inferring_cs...`), but maybe we want to go directly to case 3 and avoid complications.
EDIT: the code I linked is a WIP Python implementation of the transforms specs, at the moment still requiring some rounds of code review. After it is clean I will make some example notebooks, and it could be considered for detaching from the `spatialdata` repo or reusing in `ome-zarr-py`. I will follow up on this here: https://github.com/ome/ome-zarr-py/issues/229
For further review, http://api.csswg.org/bikeshed/?url=https://raw.githubusercontent.com/bogovicj/ngff/coord-transforms/latest/index.bs#trafo-md
A few comments (I will have more)
First, I think we can improve the type structure of the transformations. Instead of this (i.e., the current transformation types)
[
{
"type": "scale",
"scale": [1, 1]
},
{
"type": "translation",
"translation": [1, 1]
}
]
consider something like this:
[
{
"type": "scale",
"parameters": [1, 1]
},
{
"type": "translation",
"parameters": [1, 1]
}
]
Besides reducing redundancy (we know it's a `scale` transformation based on the `type` parameter, so naming an additional field `scale` isn't conveying information), this removes the need for a `path` field -- we can specify that if `parameters` is a string, then it is to be interpreted as a path.
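That string-vs-array dispatch would be cheap for parsers; a hypothetical sketch (the `store` lookup is an assumption, e.g. a zarr group):

```python
def resolve_parameters(transform, store):
    """Return numeric parameters, reading from the store when the
    `parameters` field is a string interpreted as a path."""
    params = transform["parameters"]
    if isinstance(params, str):
        return store[params][:]  # e.g. zarr group lookup + full array read
    return params
```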
Regarding the `path` field -- what's the logic for allowing that for any of the "simple" transforms (scale, translation)? JSON encoding issues?
Also, why is there a need for a `sequence` transformation type? Can't we just specify that a) in metadata, transformations are stored in a list, and b) transformations in a list are to be applied in order? Then we define that the empty list represents application of the identity transformation (and so we don't need an identity transformation either). A single transformation is stored in a list with one element, etc.
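Under that reading, application is a simple fold; a minimal sketch (illustrative only, treating transforms as callables):

```python
from functools import reduce

def apply_all(transformations, point):
    """Apply transforms in list order; the empty list acts as the identity."""
    return reduce(lambda p, t: t(p), transformations, point)

assert apply_all([], (1.0, 2.0)) == (1.0, 2.0)  # empty list == identity
```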
Thanks @d-v-b,
> Also, why is there a need for a sequence transformation type?
Think of coordinate systems (CS) as nodes, and coordinate transformations (CT) as edges between them. We have sequence transformations because it can be useful to treat a sequence as one "edge".
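That mental model maps directly onto graph search; a small sketch of it (illustrative, not part of the proposal):

```python
from collections import deque

def find_path(transformations, source_cs, target_cs):
    """Breadth-first search over coordinate systems (nodes) connected by
    transformations (edges); a sequence collapses such a path into one edge."""
    queue, seen = deque([(source_cs, [])]), {source_cs}
    while queue:
        node, path = queue.popleft()
        if node == target_cs:
            return path
        for t in transformations:
            if t.get("input") == node and t.get("output") not in seen:
                seen.add(t["output"])
                queue.append((t["output"], path + [t]))
    return None  # no route between the two coordinate systems
```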
My desiderata:
1. enable working with the same data in different coordinate systems
2. input / output coordinate systems are at the "top level" of a CT
3. make input / output coordinate systems generally explicit
4. transformations are always the same kind of thing (object or array)
Here are the two ways I can think of to deal with a realistic, not-simple but not very complicated (only three transformations) use case: either with a `sequence` (my preference), or with an array.
Asking an object about its input/outputs is simpler (I think) than using an array:
- using an object: `transform_object.input` / `transform_object.output`
- using an array: `transform_array[first].input` / `transform_array[last].output`, where `first` and `last` might be the same index - some logic would be necessary here
Using an array, we have an array containing one element (necessary for my desideratum 4), which is fine, but unnecessary. Most use cases will have one CT, and wrapping them all in an array feels weird to me.
See the examples below. A sequence of three transformations then behaves as a single edge between "in" and "out".
**Using a sequence**
"coordinateSystems" : [
{
"name" : "in",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out2",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
}
],
"coordinateTransformations" : [
{
"type": "sequence",
"input": "in",
"output": "out",
"transformations": [
{ "type": "affine", "affine": [ 1.1, 0.1, 0.1, 1.1, 0.1, 1.1, 0.1, 3.3, 0.1, 0.1, 1.1, 4.4] },
{ "type": "displacement", "path": "../automaticRegistration" },
{ "type": "displacement", "path": "../manualRegistration" }
]
},
{ "type": "affine", "affine": [ 2.1, 0.1, 0.1, 1.1, 0.1, 3.1, 0.1, 3.3, 0.1, 0.1, 4.1, 4.4], "input" : "in", "output" : "out2" },
]
**Using a JSON array, being implicit**
"coordinateSystems" : [
{
"name" : "in",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out2",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
}
],
"coordinateTransformations" : [
[
{ "type": "affine", "affine": [ 1.1, 0.1, 0.1, 1.1, 0.1, 1.1, 0.1, 3.3, 0.1, 0.1, 1.1, 4.4] "input" : "in" },
{ "type": "displacement", "path": "../automaticRegistration" },
{ "type": "displacement", "path": "../manualRegistration", "output" : "out" }
]
[
{ "type": "affine", "affine": [ 2.1, 0.1, 0.1, 1.1, 0.1, 3.1, 0.1, 3.3, 0.1, 0.1, 4.1, 4.4], "input" : "in", "output" : "out2" }
],
]
I want to share these two Python files for dealing with coordinate transformations and coordinate systems in Python. They implement the in-memory representation for a subset of the transformations (`identity`, `scale`, `translation`, `mapAxis`, `rotation`, `affine`, `sequence`, `byDimension`), and they also enable the following:
- converting the transformations to an affine matrix (n-dimensional)
- inverting transformations
- applying the transformations to points
- tests available here: coordinate transformations, coordinate systems
Code for IO is not included (we are dealing with IO in separate files), but the files above have methods to export to and read from the JSON representation.
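For flavor, here is a self-contained numpy sketch of those three capabilities for a scale followed by a translation (my illustration, not the linked code):

```python
import numpy as np

def scale_affine(s):
    """Homogeneous (n+1)x(n+1) matrix for a scale transformation."""
    return np.diag(list(s) + [1.0])

def translation_affine(t):
    """Homogeneous (n+1)x(n+1) matrix for a translation."""
    m = np.eye(len(t) + 1)
    m[:-1, -1] = t
    return m

# sequence -> one n-dimensional affine (scale applied first, then translation)
seq = translation_affine([10.0, 5.0]) @ scale_affine([2.0, 2.0])
inv = np.linalg.inv(seq)  # inverting the transformation

# applying the transformation to points (homogeneous coordinates)
points = np.array([[1.0, 1.0], [3.0, 4.0]])
hom = np.hstack([points, np.ones((len(points), 1))])
print((seq @ hom.T).T[:, :-1])  # [[12. 7.], [16. 13.]]
```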
I started a conversation with @will-moore regarding this; I would be happy to remove the code from our repo and put it in a separate package (I was thinking `ome-zarr-py`) so that it can be refined, expanded and reused. See the conversation here.
@bogovicj if I understand correctly, a `sequence` transformation is simply a list of transformations. Given that JSON already provides the array type for expressing lists of things, I don't see the appeal of the `sequence` type.
Here's a re-interpretation of your JSON example:
"coordinateSystems" : [
{
"name" : "in",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "out2",
"axes" : [
{"name": "x", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "z", "type": "space", "unit": "micrometer"}
]
}
],
"coordinateTransformations" : [
{
"input": "in",
"output": "out",
"transformations": [
{ "type": "affine", "affine": [ 1.1, 0.1, 0.1, 1.1, 0.1, 1.1, 0.1, 3.3, 0.1, 0.1, 1.1, 4.4] },
{ "type": "displacement", "path": "../automaticRegistration" },
{ "type": "displacement", "path": "../manualRegistration" }
]
},
{
"input": "in",
"output": "out2",
"transformations": [
{"type": "affine",
"affine": [ 2.1, 0.1, 0.1, 1.1, 0.1, 3.1, 0.1, 3.3, 0.1, 0.1, 4.1, 4.4]}
]
}
]
A few notes on the changes:
- In my version, every transform is part of a sequence, which may have length 1. This is simpler for parsers, because they don't need to check if a transform is a `sequence` type and then iterate over the elements contained inside. Instead, parsers can be sure that they are always working with a JSON array.
- In your example, the `affine` transformation contained in the `sequence` transform didn't have `in` and `out` properties, because these values are represented by the `sequence` that contains them. According to the spec, these properties of a transform are unset only if that transform is inside a `sequence` or `inverseOf` transform. Making the structure of a type change iff it is part of a collection is a huge red flag -- beyond the confusing mix of type and value semantics, this will require ugly routines in parsers and libraries that serialize transforms, it's impossible to express via JSON schema, and it introduces a lot of possible error states to watch out for (e.g., transforms that are part of a `sequence` but have `in` and `out` set). My version avoids this by taking `in` and `out` off of individual transformations and attaching those fields to the collection of transformations. By making `in` and `out` properties of the transformation collection, they can be simply typed as `string` (a rough schema sketch follows below). I'm confident that wrapping single transformations in an array is a bargain in exchange for a more consistent data model and type definitions that fit inside the semantics of JSON schema.
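A rough sketch of that collection-level typing (my reading, not a normative schema), using the `jsonschema` package:

```python
import jsonschema

# input/output live on the collection as plain strings; the contained
# transformations never carry them.
COLLECTION_SCHEMA = {
    "type": "object",
    "required": ["input", "output", "transformations"],
    "properties": {
        "input": {"type": "string"},
        "output": {"type": "string"},
        "transformations": {"type": "array", "items": {"type": "object"}},
    },
}

jsonschema.validate(
    {"input": "in", "output": "out", "transformations": []},
    COLLECTION_SCHEMA,
)
```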
Could you elaborate a bit more on the definition of the elements of `coordinateSystems`? In the above example, "in", "out", and "out2" have identical values. Looking at the type definition of the elements of `axes`, it's not clear to me how a coordinate transformation could change `type` (can a transform turn a time axis into a space axis?), or `unit` (this is basically fixed by the axis type anyway), so really the only thing that could change is the name of the axis? And if `axis.name` is the only thing that varies across instances of `coordinateSystem`, then it's not clear why the full `axes` need to be repeated each time.
> And if axis.name is the only thing that varies across instances of coordinateSystem, then it's not clear why the full axes need to be repeated each time.
see https://github.com/bogovicj/ngff/blob/coord-transforms/latest/index.bs#L345-L397
@joshmoore @bogovicj - this is a use case that has come up in using ome-zarr in a neuroglancer context that requires affine transforms. The current approach is using n5, but it would be nice if we could move to ome-zarr.

Is there a plan for rolling out the next iteration of the specs for coordinate transforms? And is there support for getting affine out before the more complex nonlinear forms?
Thanks, @satra. (Sorry for having missed this.) I'll leave @bogovicj to comment on timeline, and just capture a few additional thoughts/ideas from the Get Your Brain Together Hackathon, though this might be more for #84:
- Additional semantics (from `keywords` to `@type`) would be useful, e.g., to define a coordinate system as "anatomical" and what "orientation" it has/uses (see also #142)
- Provenance of how a transform was chosen / calculated would be useful ("registered with X and these parameters", "came from the acquisition system"), though perhaps this should be a more generic mechanism.
- What would it look like to allow a GUI to load transforms from another location? What happens when names are duplicated? (What are the expectations for tools?) Again: this may be a more generic issue. Part of #13?
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/save-irregular-time-coordinates-in-ome-zarr/82138/2
@bogovicj - just a ping on the affine issue above.
This hasn't moved in a while, so I'm going to try to jump-start the discussion again.
I'm impressed with the work that @bogovicj has put into this, but I also worry that it adds a pretty huge surface area to the spec, and this might be a blocker for adoption.
Some specific concerns, expressed as questions:
- How would these changes affect software for visualizing images? As far as I know, tools like napari, imagej, and neuroglancer all assume that there is one and only one mapping from array indices to world coordinates, but this PR introduces a multiplicity of mappings. How will image viewers be expected to represent this?
- What fraction of the proposed coordinate transformations are currently supported by visualization software?
- Is there a timeline for visualization software to add support for these features? It would be great to hear from the maintainers of these packages about this.
> Some specific concerns, expressed as questions:
> - How would these changes affect software for visualizing images? As far as I know, tools like napari, imagej, and neuroglancer all assume that there is one and only one mapping from array indices to world coordinates, but this PR introduces a multiplicity of mappings. How will image viewers be expected to represent this?
> - What fraction of the proposed coordinate transformations are currently supported by visualization software?
> - Is there a timeline for visualization software to add support for these features? It would be great to hear from the maintainers of these packages about this.
We have recently added affine transformations as well as chained affine transformations (composed of any affine-compatible transforms) to Webknossos. We are also adding support for thin plate splines, which aren't yet part of this proposal. We currently don't have plans to implement displacement fields.