Scale meshes to image data
The atlases are displayed in real, micron space. I assume this is so that the images overlay with the meshes. However, this causes (what I think is) undesired behaviour: if you register your data to an atlas, the registered image doesn't overlay with the atlas in the plugin.
Should the atlases be scaled to the "correct" voxel space?
See image.sc issue for context.
This issue has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/brainglobe-brainrender-region-names-on-mouse-hover/94962/2
Not sure - I'd argue we should find a way to scale the registered data to microns on import into napari (see the sketch below)?
- There are efforts underway to make napari have units
- Not having the image and the meshes overlay can lead to its own confusing issues?
Either way, this should be made consistent across brainglobe and documented well!
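For concreteness, a minimal sketch of the "scale on import" idea, assuming the `brainglobe_atlasapi` package (where `BrainGlobeAtlas.resolution` gives microns per voxel along each axis); the file path is hypothetical:

```python
import napari
import tifffile
from brainglobe_atlasapi import BrainGlobeAtlas

# The atlas the data was registered to; its resolution is microns/voxel.
atlas = BrainGlobeAtlas("allen_mouse_25um")

# Hypothetical path to an image already registered to this atlas.
registered = tifffile.imread("registered_to_allen_25um.tif")

viewer = napari.Viewer()
# napari's per-layer `scale` maps voxel indices into world (micron) space,
# so the registered stack lands on top of the micron-space atlas meshes.
viewer.add_image(registered, scale=atlas.resolution, name="registered data")
napari.run()
```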
> Not having the image and the meshes overlay can lead to its own confusing issues?
They should overlay; I was suggesting resampling the meshes to the image. It's not ideal but I don't see another way, other than users needing to load their data and set the scale every time.
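Roughly what I mean, as a sketch (the function name is made up, not an existing `brainrender-napari` API): divide the micron-space vertex coordinates by the atlas resolution so the mesh lives in the image's voxel space.

```python
import numpy as np

def mesh_vertices_to_voxel_space(vertices_um, resolution_um):
    """Convert (N, 3) mesh vertices from microns into voxel coordinates.

    vertices_um: array of vertex positions in microns.
    resolution_um: microns per voxel along each axis, e.g. (25, 25, 25).
    """
    return np.asarray(vertices_um) / np.asarray(resolution_um)

# e.g. for a 25 um isotropic atlas, a vertex at (5000, 2500, 1250) um
# becomes voxel coordinate (200, 100, 50).
```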
I see! We should do that at the atlas generation stage, I guess, to avoid unnecessary compute time?
I think we should just downscale them in napari. brainrender-napari is the only time that the meshes and images are overlaid.
In an ideal world, all image data would have correct metadata, and every image visualisation tool would scale appropriately, but until then, we should fudge it.
I've renamed this issue because I think this is what should be done. It's confusing that brainrender-napari works in a different space to all the other brainglobe tools within napari.
I could be convinced otherwise, but I think we would need to add scales to all our napari plugins at a minimum.
> but I think we would need to add scales to all our napari plugins at a minimum.
I think the root contradiction we need to decide how to resolve is:
- we want `brainrender-napari` to replicate `brainrender` as closely as possible (and therefore our meshes - and images, because we want them to match - should have a scale)
- we want our napari plugins to be consistent (and apart from `brainrender-napari`, they don't have a scale (?))
So either,
- we add scales to all BrainGlobe napari plugins
- we document the presence/lack of a scale as a key difference between `brainrender` and `brainrender-napari`?
I think the key question is, which do we prefer out of:
- Real units for everything in napari (and the associated work involved in scaling everything we work with, and making sure everyone else's layers are scaled)
- Voxels for everything in napari (and accepting that data analysed in the context of one resolution of an atlas will not overlay onto another resolution; see the example below).
I'm leaning towards the second, as my gut feeling is that it will require the least explanation to users.
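To illustrate the caveat with the second option (assuming 10 µm and 25 µm versions of the same atlas):

```python
# The same physical point has different voxel coordinates at each
# atlas resolution, so voxel-space results don't transfer between them.
point_um = (5000.0, 2500.0, 1250.0)  # a point in real (micron) space

voxel_25um = tuple(c / 25 for c in point_um)  # (200.0, 100.0, 50.0)
voxel_10um = tuple(c / 10 for c in point_um)  # (500.0, 250.0, 125.0)
```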
Could we maybe go for the 2nd, but have `brainrender-napari` provide some functionality to:
- Scale atlases to "real" units
- Scale other layers by arbitrary amounts
This would sort-of solve all the problems?
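Something like this, as a rough sketch using napari's per-layer `scale` attribute (`to_real_units` and `scale_layer` are hypothetical helpers, not existing `brainrender-napari` functions):

```python
import numpy as np

def to_real_units(layer, resolution_um):
    """Scale an atlas layer from voxel space into real (micron) space."""
    layer.scale = np.asarray(resolution_um)

def scale_layer(layer, factors):
    """Scale any napari layer by arbitrary per-axis factors."""
    layer.scale = np.asarray(layer.scale) * np.asarray(factors)
```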