View cortical thickness in 3D on flatmap in webgl

Open alexhuth opened this issue 10 years ago • 4 comments

It would be great to be able to see the cortical thickness in 3D on the flatmap. For example, the flat surface could serve as the white matter surface, and then the pial surface could be rendered on top of it, each vertex offset by the cortical thickness at that vertex.

This could also enable nice through-cortex views.

alexhuth · Oct 08 '13 00:10

Ugh, I've now tried FOUR different varieties of this, all of which have been plagued by problems. The four attempts:

  1. Duplicate the surface n times, then draw each layer sequentially using alpha blending. This works, but is incorrect since the back face cannot be drawn without depth errors. Drawing the back face requires depth peeling, which is technically not possible in WebGL without dozens of render passes.
  2. Use a "weighted average" of each surface for order-independent transparency. This works, but removes depth information since stacking order no longer matters. It is also very slow.
  3. "Point halo" -- draw a GL point for each vertex, setting gl_PointSize equal to the thickness of the cortex, then sample the underlying volume through the depth. This does NOT work, since gl_PointSize on most devices is limited to between 1 and 63, which is nowhere near large enough.
  4. Draw a quad for each vertex, doing the same as above. This does NOT work, because it does not correctly distort the sampling as you inflate the surface.

Option 1 is effectively trivial to implement at this point. It would also be the slowest, and the most "wrong" in terms of volume integration. Option 4 would be 100% correct at the fiducial surface, but it's unclear what should be done at the inflated and flattened surfaces. Any suggestions?
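
For concreteness, here is a minimal sketch of what the per-sheet vertex shader for option 1 might look like (GLSL ES; the attribute and uniform names are placeholders, not pycortex's actual shader variables). The same mesh would be drawn n times, stepping the depth uniform from 0 (white matter, or the flat surface in the flatmap case) to 1 (pial) between draw calls:

```glsl
// Hypothetical per-sheet vertex shader for option 1 (names are illustrative).
attribute vec3 whitePos;          // vertex position on the white-matter (or flat) surface
attribute vec3 pialPos;           // vertex position on the pial surface
uniform float depth;              // fraction through cortex for this sheet, in [0, 1]
uniform mat4 modelViewMatrix;     // standard transforms supplied by the viewer
uniform mat4 projectionMatrix;
varying float vDepth;             // cortical depth, used by the fragment shader to sample the volume

void main() {
    // interpolate this sheet's vertex between the two bounding surfaces
    vec3 pos = mix(whitePos, pialPos, depth);
    vDepth = depth;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
```

In the flatmap case, the pial position could instead be computed as the flat position offset along the surface normal by the per-vertex thickness, as described in the original request.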

jamesgao · Dec 24 '13 04:12

I think we should make 1 and 4 available as options. Both are quick and at least right somewhere: 1 at the flat surface and 4 at the fiducial surface.

alexhuth · Dec 31 '13 00:12

Ok, after some more introspection, I figured out that option 4 is out. While the quads will work when looking through the edge of the brain, a straight-on view will be wrong: either there will be no coverage (if the quads are aligned edge-on to the camera) or there will be only a single layer (if the quads are aligned face-on to the camera). You'd have to build two separate rendering mechanisms, one with sheets for dot(camera, normal) >= 0.5 and one with quads for dot(camera, normal) < 0.5. This is far too complicated.

However, I did figure out a way to make option 2 a little more correct. I don't think this is in any literature on order-independent transparency, so that's kinda exciting. Instead of doing a straight average of the fragment values, we can do a depth-weighted average of the fragments using additive blending. We precompute the approximate z-near and z-far values. Again, n "sheets" of cortex are rendered. For every fragment, we compute the approximate normalized depth (z - znear) / (zfar - znear), weight the sampled value by that depth, and accumulate it. Then on the full-screen quad pass, we divide by the accumulated depth and colormap the resulting value.
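
Roughly, the two passes could look something like the sketch below (names like volumeData, accumTex, znear, and zfar are placeholders rather than the actual pycortex shader variables, and a floating-point render target is assumed for the accumulation):

```glsl
// Accumulation pass over the n cortical sheets, with additive blending
// (blendFunc ONE, ONE) and the depth test disabled.
precision highp float;
uniform sampler2D volumeData;   // volume data packed into a 2D texture (assumed)
uniform float znear;            // precomputed approximate near depth
uniform float zfar;             // precomputed approximate far depth
varying vec2 vVolCoord;         // texture coordinate into the packed volume
varying float vViewZ;           // fragment depth in eye space

void main() {
    // normalized depth in [0, 1], used as the weight for this fragment
    float w = (vViewZ - znear) / (zfar - znear);
    float value = texture2D(volumeData, vVolCoord).r;
    // accumulate the weighted value in .r and the weight itself in .a
    gl_FragColor = vec4(value * w, 0.0, 0.0, w);
}
```

```glsl
// Resolve pass over a full-screen quad: divide the accumulated weighted
// values by the accumulated weights, then apply the colormap.
precision highp float;
uniform sampler2D accumTex;     // output of the accumulation pass
uniform sampler2D colormap;     // 1D colormap stored as a 2D texture
varying vec2 vUv;

void main() {
    vec4 acc = texture2D(accumTex, vUv);
    float value = acc.r / max(acc.a, 1e-6);   // depth-weighted average
    gl_FragColor = texture2D(colormap, vec2(value, 0.5));
}
```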

This option is "more correct", but not exact. Things with a large depth gap between two surfaces will be more wrong. However, it does reintroduce depth ordering into the equation, which hopefully will make it look more logical!

jamesgao · Jan 07 '14 18:01

What's the status on this?

alexhuth · Feb 04 '14 19:02