Scale and order of verts output (how to transform to input frame)?
My data volume is defined on the following xyz grid:

import torch

n1, n2, n3 = 80, 80, 50
xv, yv, zv = torch.meshgrid(
    [torch.linspace(-0.2, 0.2, n1), torch.linspace(-0.2, 0.2, n2), torch.linspace(0, 0.5, n3)])
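For reference, here is a minimal self-contained sketch of how the volume ends up at marching cubes; the sphere-like scalar field and the torchmcubes import are assumptions standing in for my actual data and binding:

import torch
from torchmcubes import marching_cubes  # assumed binding; substitute your marching-cubes call

n1, n2, n3 = 80, 80, 50
xv, yv, zv = torch.meshgrid(
    [torch.linspace(-0.2, 0.2, n1), torch.linspace(-0.2, 0.2, n2), torch.linspace(0, 0.5, n3)])

# Placeholder scalar field (distance from a point), just to have an isosurface to extract.
vol = torch.sqrt(xv ** 2 + yv ** 2 + (zv - 0.25) ** 2)

# The library only sees the n1 x n2 x n3 values and a threshold; the xyz
# coordinates above are never passed in, so verts come back in grid units.
verts, faces = marching_cubes(vol, 0.15)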
The output from marching cubes is verts, faces, where

verts.min(dim=0).values
tensor([ 0.7829, 28.6041, 28.3853])
verts.max(dim=0).values
tensor([40.8141, 56.6223, 65.8527])
How can I transform verts back to the original input space? Judging by the data ranges, verts appears to be in zyx order and expressed in grid-index coordinates rather than the xyz coordinate values above. Will something like this work (pardon the hard-coding)?
verts_xyz = verts.clone()
# Swap the first and third columns: assume the output is ordered z, y, x.
verts_xyz[:, 0] = verts[:, 2]
verts_xyz[:, 2] = verts[:, 0]
# Normalize the grid indices to [0, 1] using the grid resolution...
verts_xyz[:, 0] /= 80
verts_xyz[:, 1] /= 80
verts_xyz[:, 2] /= 50
# ...then rescale to the original linspace extents.
verts_xyz[:, 0] = verts_xyz[:, 0] * 0.4 - 0.2   # x in [-0.2, 0.2]
verts_xyz[:, 1] = verts_xyz[:, 1] * 0.4 - 0.2   # y in [-0.2, 0.2]
verts_xyz[:, 2] = verts_xyz[:, 2] * 0.5 + 0     # z in [0, 0.5]
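For anyone hitting the same question, the same mapping without hard-coded constants could look like the sketch below. It assumes ZYX output in grid-index units, and it divides by n - 1 instead of n on the assumption that index n - 1 corresponds to the last linspace sample; my hard-coded version above divides by n, which is close but off by one cell width.

def grid_to_world(verts, extents, dims):
    # Map marching-cubes vertices (assumed z, y, x order in grid-index units)
    # back to xyz world coordinates.
    # extents: ((xmin, xmax), (ymin, ymax), (zmin, zmax)); dims: (n1, n2, n3)
    v = verts[:, [2, 1, 0]].clone()              # reorder zyx -> xyz
    for axis, ((lo, hi), n) in enumerate(zip(extents, dims)):
        # A grid index in [0, n - 1] maps linearly onto [lo, hi]; the n - 1
        # denominator is an assumption about the library's indexing.
        v[:, axis] = v[:, axis] / (n - 1) * (hi - lo) + lo
    return v

verts_xyz = grid_to_world(verts, ((-0.2, 0.2), (-0.2, 0.2), (0.0, 0.5)), (80, 80, 50))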
I remember that you should fit the input data into a unit cube before giving it to mcubes.
If that fails, could you share a smaller snippet (but with the whole code included) that tests both a CPU and a GPU tensor, please?
I'm not sure what you mean by fitting the input data into a unit cube, because the call to marching cubes just passes in the volume values (an n1 x n2 x n3 tensor whose elements are the values of the volume) and the threshold. The method I posted above seems to work for normalizing the output vertices of marching cubes back to the original frame and scale.
Sorry to keep you waiting so long. You may have already resolved the problem, but the output of this library is in ZYX order, and it uses a grid-based (index) coordinate system.
Therefore, your provided code should be fine.
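To close the loop, a quick sanity check (my own addition, not something from the library) is to confirm that the transformed vertices fall inside the original linspace bounds:

# Bounds of the original grid; the tolerance absorbs the n vs. n - 1
# normalization difference discussed above.
lo = verts_xyz.new_tensor([-0.2, -0.2, 0.0])
hi = verts_xyz.new_tensor([0.2, 0.2, 0.5])
tol = 1e-2
assert bool(((verts_xyz >= lo - tol) & (verts_xyz <= hi + tol)).all())
print(verts_xyz.min(dim=0).values, verts_xyz.max(dim=0).values)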