
Sort out confusion with cartesian/ij indexing

Open vigji opened this issue 4 years ago • 55 comments

As per the Slack discussion:

There is some ambiguity when displaying numpy arrays, coming from the fact that image-first applications (e.g. matplotlib's imshow, brainrender) follow the convention that, as the numpy array indices increase within a slice, you move downward and rightward in the image. Cartesian plotting, by contrast, assumes that increasing values move you upward and rightward in the plot. As a result, inconsistencies can arise. Napari takes care of those inconsistencies by adopting the image-first convention for points as well.
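A minimal illustration of the two conventions:

import matplotlib.pyplot as plt
import numpy as np

img = np.zeros((10, 10))
img[2, 7] = 1  # bright pixel at row 2, column 7

fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(img)        # image-first: the pixel shows up near the top right
ax1.scatter([7], [2])  # cartesian: the same numbers land near the bottom right
ax1.set_xlim(0, 9)
ax1.set_ylim(0, 9)
plt.show()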

This has a series of consequences:

1.

If we want to describe the raw ARA origin as we get it in the bg-space convention, it is actually “asr” and not “asl”. To confirm this, we can do the following:

from pathlib import Path

import napari
from allensdk.core.reference_space_cache import ReferenceSpaceCache

download_dir_path = Path.home() / "allen_ccf"  # cache location

spacecache = ReferenceSpaceCache(
    manifest=str(download_dir_path / "manifest.json"),
    # downloaded files are stored relative to here
    resolution=100,
    reference_space_key="annotation/ccf_2017",
    # use the latest version of the CCF
)
ara_ref, h = spacecache.get_template_volume()
# dim half of the last axis (the L-R axis is 114 voxels at 100 um):
ara_ref[:, :, :57] = ara_ref[:, :, :57] / 2
viewer = napari.view_image(ara_ref)

and we can confirm that what gets dimmed is the right hemisphere, not the left one. Conclusion: document how the ARA description is confusing, and move the standard BG orientation to “asr”.
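With the origin declared correctly, any reorientation can then go through bg-space explicitly; a sketch reusing the stack loaded above:

import bgspace as bgs

# Declare the stack's true origin; mapping "asr" to e.g. "asl" then
# flips the left-right axis deliberately rather than by accident:
source_space = bgs.SpaceConvention("asr")
asl_stack = source_space.map_stack_to("asl", ara_ref)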

2.

This introduces confusion for applications using cartesian indexing. As a simple example, this:

import matplotlib.pyplot as plt
import napari
import numpy as np
from brainglobe_atlasapi import BrainGlobeAtlas

at = BrainGlobeAtlas("example_mouse_100um")
mod_ref = at.reference.copy()

# Create some points (2 in the left hemisphere, 1 in the right):
three_pts = np.array([[50, 40, 80], [60, 40, 80], [60, 40, 20]])

# Take points from the root mesh for the scatterplot (mesh coordinates
# are in microns, so divide by the 100 um resolution to get voxels):
bounds = at.root_mesh().points[::10, :] / 100

# Display everything in napari:
viewer = napari.view_image(mod_ref)
viewer.add_points(bounds, size=1, n_dimensional=True)
viewer.add_points(three_pts, size=10, n_dimensional=True)

# ...and in a matplotlib 3D scatterplot:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(bounds[:, 0], bounds[:, 1], bounds[:, 2], s=1)
ax.scatter(three_pts[:, 0], three_pts[:, 1], three_pts[:, 2], s=20)

produces a correct napari view (with 2 dots in the left hemisphere and 1 in the right one), but a flipped scatterplot, as I guess brainrender would produce too. The easiest solution, ugly as it might sound, would be to just invert one axis (it does not really matter which one) in cartesian-indexed applications such as brainrender. This:

# same scatterplot as above, with the first axis inverted:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(-bounds[:, 0], bounds[:, 1], bounds[:, 2], s=1)
ax.scatter(-three_pts[:, 0], three_pts[:, 1], three_pts[:, 2], s=20)

produces a plot with the correct orientation. For 2D applications (we don't have any), one would have to flip the y axis when displaying.
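For instance (a sketch, continuing the snippet above):

# Plot AP on the horizontal axis and DV on the vertical one, then
# invert y so that increasing indices run downward, as in the image:
fig, ax = plt.subplots()
ax.scatter(bounds[:, 0], bounds[:, 1], s=1)
ax.scatter(three_pts[:, 0], three_pts[:, 1], s=20)
ax.invert_yaxis()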

3.

Finally, we always need to consider that when looking at sliced data (both with matplotlib and the napari viewer in 2D mode), we are looking from the perspective of the slicing coordinate at point 0: for the “asr” orientation, this means looking from the front, which always inverts left and right hemispheres (the left side of the image is the right hemisphere).

This is 100% a display problem, as it will arise with matching underlying stack and probe coordinates just by using different kinds of views. So I would not solve it by messing with the data (e.g. exporting “brainrender-compatible” coordinates), but by making one application (I would suggest brainrender, by flipping an axis) compatible with the other.

What needs to be done:

  • [ ] a thorough section in the documentation that discusses this;
  • [ ] either simply move the standard orientation to “asr”, or better, address this concurrent description when instantiating the SpaceConvention from bg-space, in a way that resolves the ambiguity (see https://github.com/brainglobe/bg-space/issues/6). Such a solution would be the neatest, because visualisation tools could then reference SpaceConvention methods to figure out the correct flips;
  • [ ] make sure brainrender displays images coherent with the napari viewer, ideally going through the above point.

vigji avatar Aug 28 '20 09:08 vigji

make sure brainrender displays images coherent with the napari viewer, ideally going through the above point.

would you suggest doing this by flipping an axis?

FedeClaudi avatar Aug 30 '20 19:08 FedeClaudi

Flipping an axis would work, but it introduces a random hardcoded flip that might be a source of problems and confusion for users.

My suggestion to handle this (and future similar problems) would be

  1. Having in the SpaceConvention class an additional specification of the way the origin is defined - either in a cartesian or in an ij system;
  2. Integrating the SpaceConvention object a bit more into the BrainGlobeAtlas object, so that it has a self.map_atlas_to_space() method that you call to lazily transform all the data you will use from the atlas.

In this way, you just call that method at the instantiation of your brainrender atlas so that it matches a cartesian definition of the origin and flips everything accordingly, and no random axis flips should be required brainrender-side.
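A sketch of what that could look like from brainrender's side (map_atlas_to_space and its argument are part of the proposal, not an existing method):

from brainglobe_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("example_mouse_100um")

# Proposed API: declare once that downstream use is cartesian, and let
# the atlas lazily flip stacks, meshes, and points accordingly:
atlas.map_atlas_to_space(origin="cartesian")  # hypothetical method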

vigji avatar Aug 30 '20 19:08 vigji

that sounds like less work in brainrender, so it sounds very good to me :P

FedeClaudi avatar Aug 30 '20 20:08 FedeClaudi

either simply move the standard orientation to “asr”, or better, address this concurrent description when instantiating the SpaceConvention from bg-space, in a way that resolves the ambiguity (see brainglobe/bg-space#6).

I think we should address any ambiguity, but that we should "officially" change the standard orientation to asr, as we're pretty much tied to numpy anyway, and I think this makes sense to most scientific python programmers.

In this way, you just call that method at the instantiation of your brainrender atlas so that it matches a cartesian definition of the origin and flips everything accordingly, and no random axis flips should be required brainrender-side.

Sounds good, as long as it's in the GUI too :P

adamltyson avatar Sep 01 '20 10:09 adamltyson

once it's implemented in brainrender it will show up in the GUI too :P

FedeClaudi avatar Sep 01 '20 10:09 FedeClaudi

I think we should address any ambiguity, but that we should "officially" change the standard orientation to asr, as we're pretty much tied to numpy anyway, and I think this makes sense to most scientific python programmers.

Agreed - so both changing it to ASR and adding the method in bg-space.

vigji avatar Sep 01 '20 10:09 vigji

@vigji are you going to take care of it in bg-space? Do you mind giving me a shout when it's done?

FedeClaudi avatar Sep 01 '20 10:09 FedeClaudi

yes, I am thinking about it right now! Will come back to you

vigji avatar Sep 01 '20 10:09 vigji

Sorry, I am tinkering with things and discussing with a very good colleague, and I was getting worried that adding that to bg-space might introduce too many degrees of freedom and ambiguity. Is it possible to just apply a transformation matrix to the entire scene in vtk? If that is doable, we can just:

  1. specify very clearly that all bg indexing is based on the conventional numpy ij way for images;
  2. change a single point in brainrender to make it compatible with such indexing (the way napari does in its mesh- and point-related code).

vigji avatar Sep 01 '20 11:09 vigji

mmm I don't think so. If there is, I don't know about it... I could ask Marco though... I could apply a transform to each mesh's points just before render though.

FedeClaudi avatar Sep 01 '20 12:09 FedeClaudi

I could apply a transform to each mesh's points just before render though.

No, if that is required I would handle it in bg-atlasapi + bg-space. But if it is possible to set a global transformation, that would be the n.1 solution in terms of cleanness, I believe.

vigji avatar Sep 01 '20 12:09 vigji

Let me know what Marco says. Looking into vtk (https://vtk.org/doc/release/5.0/html/a01919.html), it should be possible - I don't know if vedo exposes something for it, but maybe it would be accessible through vedo's classes?

vigji avatar Sep 01 '20 13:09 vigji

Sounds like there's a way - vedo is so great! https://github.com/marcomusy/vedo/issues/203


ps. it will have to be done on each mesh individually, but it should be easy to do this when the mesh is created.

FedeClaudi avatar Sep 01 '20 16:09 FedeClaudi

Mmm ok, I am not a big fan of the idea of doing it separately everywhere... but I guess that having a method for doing it in the atlas class, on the atlas meshes, would not solve it, right? Because there are still all the neurons etc.

vigji avatar Sep 01 '20 18:09 vigji

yeah, it would have to be in scene.add_actor, making sure that all the various add_region, add_neurons... methods go through it. It's fine really.

The alternative would be to have it happen in render (again, for each neuron individually). Once you have a matrix transform ready we can play around with it and see what's best.

FedeClaudi avatar Sep 01 '20 20:09 FedeClaudi

well that should be easy! Something like

1  0  0  0
0  1  0  0
0  0 -1  0

should already flip the correct axis.

vigji avatar Sep 01 '20 21:09 vigji

Seems to work.

These are MOs neurons from the MouseLight project (note somas in the left hemisphere): [screenshot]

and in brainrender: [screenshot]

prior to the introduction of the transform, the MouseLight neurons would show up on the right.


I'd be tempted to implement this just in Render.render() so that it happens at once and we are sure to catch all actors. The only downside I can think of is if someone does stuff to/with a mesh's points before the flip; in that case there might be some surprises. The alternative is to implement it whenever/wherever a mesh object is loaded or created, but that's a bit cumbersome.
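Schematically, the render-time option would be something like this (a rough sketch: the class and actor handling are simplified stand-ins for brainrender's actual code, and the transform method is vedo's apply_transform in recent releases):

import numpy as np

FLIP_LR = np.array([
    [1, 0,  0, 0],
    [0, 1,  0, 0],
    [0, 0, -1, 0],
    [0, 0,  0, 1],
])

class Render:  # simplified stand-in for brainrender's Render class
    def __init__(self):
        self.actors = []  # vedo meshes collected by add_actor & co.

    def render(self):
        # single entry point: flip every actor once, just before showing,
        # then hand everything to the vedo plotter
        for actor in self.actors:
            actor.apply_transform(FLIP_LR)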

FedeClaudi avatar Sep 01 '20 22:09 FedeClaudi

Might have spoken too soon... it looks indeed like the actors have been reflected, but look at the coordinates on the ML axis: [screenshot]

FedeClaudi avatar Sep 01 '20 22:09 FedeClaudi

I'd be tempted to implement this just in Render.render() so that it happens at once and we are sure to catch all actors. The only downside I can think of is if someone does stuff to/with a mesh's points before the flip; in that case there might be some surprises.

Definitely go for that option! In this way, you basically turn brainrender into an ij-based viewer, as napari is.

The negative axis is normal: with that transformation matrix, you are flipping around the origin. To prevent that, use

1  0  0  0
0  1  0  0
0  0 -1  x

where x is the size of your space on the L-R axis, in microns.

vigji avatar Sep 01 '20 22:09 vigji

Definitely go for that option! In this way, you basically turn brainrender into an ij-based viewer, as napari is.

Which option?

  1. at render (i.e. after all actors have been added to the scene)?
  2. at load (happens once per actor)?

also, just for reference, the matrix needs to be 4x4, so:

1  0  0  0
0  1  0  0
0  0 -1  x
0  0  0  1
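A quick numpy check of this matrix, taking for x the 11400um L-R extent of the 100um Allen stack (114 voxels * 100um):

import numpy as np

x = 11400  # L-R extent in um: 114 voxels * 100 um for the Allen stack
flip = np.array([
    [1, 0,  0, 0],
    [0, 1,  0, 0],
    [0, 0, -1, x],
    [0, 0,  0, 1],
])

pt = np.array([1000, 2000, 3000, 1])  # a point in um, homogeneous form
print(flip @ pt)  # [1000 2000 8400 1]: mirrored, and still positive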

FedeClaudi avatar Sep 01 '20 22:09 FedeClaudi

At render! So you have a single entry point for handling everything

vigji avatar Sep 01 '20 22:09 vigji

gotcha!

Regarding the x business above:

space on the L-R axis

is this number somewhere in bg-space or bg-atlasapi? I've been using the bounds of the root mesh, but that's not very accurate (e.g. in the Allen atlas there are 200um to the side of the brain which are not included in the meshes' bounds).

Other than that, the new matrix works: [screenshot]

FedeClaudi avatar Sep 01 '20 22:09 FedeClaudi

(e.g. in the Allen atlas there are 200um to the side of the brain which are not included in the meshes' bounds).

What do you mean? The mesh is cut?

is this number somewhere in bg-space or bg-atlasapi?

It is the shape of the Allen stack * resolution; that's as good as we can get, I guess...
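E.g., something like this (a sketch, assuming the shape and resolution metadata exposed by the atlas class):

from brainglobe_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("allen_mouse_100um")

# Full volume extent along each axis, in microns: shape * resolution.
extents_um = [s * r for s, r in zip(atlas.shape, atlas.resolution)]
lr_extent_um = extents_um[2]  # the third axis is left-right in "asr"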

vigji avatar Sep 01 '20 23:09 vigji

Still, note that if you add an offset, a coordinate [0, 0, 0] will be placed at a different side of the brain compared to the standard stack and bg-atlasapi [0, 0, 0]

vigji avatar Sep 01 '20 23:09 vigji

no, the left-most point of the root mesh is at 200um from the origin along the LR direction. If you load the atlas in volumetric mode, or if you look at a 2D slice, there is a bit of space around the brain's outline; that's where the 200um come from.

So the mesh doesn't "touch the edges of the atlas volume", if that helps. So when you say:

Where x is the size of your space on the L-R axis, in micron

the size of the L-R axis of the entire volume is larger than that of the root mesh.

Still, note that if you add an offset, a coordinate [0, 0, 0] will be placed at a different side of the brain compared to the standard stack and bg-atlasapi [0, 0, 0]

that's why I'm pointing this out: a cell at a specific x,y,z point won't be at the correct coordinate after mirroring as it's currently set up (though of course visually it will be fine, since all meshes will be equally shifted).

I'll try:

It is the shape of the Allen stack * resolution; that's as good as we can get, I guess...

FedeClaudi avatar Sep 01 '20 23:09 FedeClaudi

that's why I'm pointing this out: a cell at a specific x,y,z point won't be at the correct coordinate after mirroring as it's currently set up (though of course visually it will be fine, since all meshes will be equally shifted).

Yes, so maybe keeping the negative axis would actually be less confusing than offsetting. At least that way, if I have a cell detected at voxel [10, 20, 30] of my stack at 100um resolution, it will be displayed at [1000, 2000, -3000]; otherwise the third value is completely unrelated to the original one.

vigji avatar Sep 01 '20 23:09 vigji

Does vedo support changing the axis tick labels à la matplotlib?

vigji avatar Sep 01 '20 23:09 vigji

I'd have to play around with it.

But I think I'm misunderstanding something: the problem is not the offset x itself, it's that x needs to be computed using the volume's size, not the root mesh's size.

E.g., say a cell is at 400um from the origin along the LR axis, and say the total volume is 1000um wide. Further assume that the root mesh leaves 200um on either side, so the volume goes 0 -> 1000um and the root mesh goes 200 -> 800um.

Currently I'm using the root mesh to compute x as 800-200=600. So when I mirror the cell, which is originally at 400, I get -400+600=200.

If instead I computed x from the volume size (x=1000) I'd get -400+1000=600, which is correct: the cell was at 400 from one edge of the volume and is now at 400 from the other.

Does this make sense to you? If I use the proper volume size from bg-atlasapi then it should be okay to add the offset, I think. I do get your point about leaving the negative value, but I think it's better to add the proper offset, mostly for consistency with other parts of the bg ecosystem.
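As a quick check of that arithmetic:

def mirror(coord_um, extent_um):
    """Mirror a coordinate along an axis spanning [0, extent_um]."""
    return extent_um - coord_um

print(mirror(400, 600))   # 200: wrong, extent taken from the root mesh bounds
print(mirror(400, 1000))  # 600: correct, extent taken from the full volume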

FedeClaudi avatar Sep 01 '20 23:09 FedeClaudi

Does vedo support changing the axis ticks labels à là matplotlib?

Yes - check out the example: vedo -r customaxes

Currently I'm using the root mesh to compute x as 800-200=600. So when I mirror the cell, which is originally at 400, I get -400+600=200.

about offsets, maybe one possibility is to define a different origin point with mesh.origin() or volume.origin()

marcomusy avatar Sep 02 '20 00:09 marcomusy

If instead I computed x from the volume size (x=1000) I'd get -400+1000=600, which is correct: the cell was at 400 from one edge of the volume and is now at 400 from the other.

Yes, exactly, that's precisely how I would use the volume size! Actually, we left a shape key in the metadata exactly for calculating the volume size in these scenarios.

But I insist that placing an offset will confuse users: if I detect my probe at some pixel and calculate its position in microns, I don't want that converted to the position w/r/t the other side of the brain; I would see a different value instead. The correct number would be shown if we take the negative axis and display it with positive values, so maybe what @marcomusy pointed out could solve this in the best way.

vigji avatar Sep 02 '20 06:09 vigji