
Setting scale parameters

Open JRS92 opened this issue 11 months ago • 2 comments

In the "Aligning partial coronal brain sections with the Allen Brain Atlas" tutorial:

The scale parameters refer to a scaling of the atlas in (x, y, z) to match the size of the target image.

scale_x = 4 #default = 0.9
scale_y = 4 #default = 0.9
scale_z = 0.9 #default = 0.9

How does one determine the correct scaling factors? I am having trouble aligning hemisections to the atlas. I am not sure, but I think my troubles may be due to incorrect scaling.

JRS92 · Jan 16 '25 18:01

The scaling factors are related to how the size of the atlas compares to the size of your image. If you plot both of these using plt.imshow() (setting the x and y axes to the same length scales), you can determine the relative scales of the two images. If the x and y scales of both images are the same, you should use the default scale. Otherwise, you should try to match the scale of the atlas to the original image. If scale_x is 4, the atlas is 4× smaller in the x direction than the image you are aligning.
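For example, here is a minimal comparison sketch (assuming xI/I are the atlas coordinate arrays and volume and xJ/J are the target coordinate arrays and image stored as numpy arrays, as in the STalign notebooks; adjust the names to your own variables):

import matplotlib.pyplot as plt

# Assumed shapes: I is a 3D atlas volume ordered (z, y, x) with coordinate
# arrays xI = [z, y, x]; J is a 2D target image with coordinates xJ = [y, x].
fig, ax = plt.subplots(1, 2)

# show one atlas slice on its physical coordinate grid
ax[0].imshow(I[I.shape[0]//2],
             extent=(xI[2][0], xI[2][-1], xI[1][-1], xI[1][0]))
ax[0].set_title('atlas slice')

# show the target image on its physical coordinate grid
ax[1].imshow(J,
             extent=(xJ[1][0], xJ[1][-1], xJ[0][-1], xJ[0][0]))
ax[1].set_title('target image')
plt.show()

With both plots in the same physical units, compare the extents of the tissue: if it spans roughly 4 times more in x in your image than in the atlas, set scale_x to about 4 (and likewise for y).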

mmganant · Jan 16 '25 19:01

The atlas raw image is nearly 10x smaller than the rasterized image of my cell coordinates, so I set scale_x and scale_y to roughly 10. I am still not getting anything close to a good alignment, though. Perhaps I'm not understanding which images to compare?

[attached screenshot]
sigmaA = 0.1 #standard deviation of artifact intensities
sigmaB = 0.1 #standard deviation of background intensities
sigmaM = 0.1 #standard deviation of matching tissue intensities
muA = torch.tensor([0.5,0.5,0.5],device='cpu') #average of artifact intensities
muB = torch.tensor([0,0,0],device='cpu') #average of background intensities

scale_x = 9.5 #default = 0.9
scale_y = 8.5 #default = 0.9
scale_z = 0.9 #default = 0.9
theta0 = (np.pi/180)*theta_deg #convert theta_deg from degrees to radians

# get an initial guess
if 'Ti' in locals():
    T = np.array([-xI[0][slice],np.mean(xJ[0])-(Ti[0]*scale_y),np.mean(xJ[1])-(Ti[1]*scale_x)])
else:
    T = np.array([-xI[0][slice],np.mean(xJ[0]),np.mean(xJ[1])])
#T = np.array([-xI[0][slice],0,0])


# diagonal scaling of the atlas axes
scale_atlas = np.array([[scale_z,0,0],
                        [0,scale_x,0],
                        [0,0,scale_y]])
# in-plane rotation by theta0
L = np.array([[1.0,0.0,0.0],
              [0.0,np.cos(theta0),-np.sin(theta0)],
              [0.0,np.sin(theta0),np.cos(theta0)]])
# initial affine: rotation composed with the atlas scaling
L = np.matmul(L,scale_atlas)


%%time
#returns mat = affine transform, v = velocity, xv = pixel locations of velocity points
transform = STalign.LDDMM_3D_to_slice(
    xI,Inorm,xJ,Jnorm,
    T=T,L=L,
    nt=4,niter=800,
    a=250,
    device='cpu',
    sigmaA = sigmaA, #standard deviation of artifact intensities
    sigmaB = sigmaB, #standard deviation of background intensities
    sigmaM = sigmaM, #standard deviation of matching tissue intensities
    muA = muA, #average of artifact intensities
    muB = muB #average of background intensities
)

Output:

[two output images attached]

Thanks

JRS92 · Jan 17 '25 00:01