antsMultivariateTemplateConstruction2.sh - blurry template
Hi all
I've been trying to construct a template with antsMultivariateTemplateConstruction2.sh, and my output isn't looking as crisp as I was expecting, which I think suggests that the input images aren't registering very well.
My input images are MP2RAGE (UNI) images, collected at 7T (0.75mm isotropic). All have been skull stripped and bias corrected with N4BiasFieldCorrection.
I'm using a target volume (skull stripped) so that I can specify the space and orientation, but do not want this target to contribute to my final template. Here's the command I used:
antsMultivariateTemplateConstruction2.sh \
  -d 3 -o "${outputPath}T1TMP_" \
  -a 1 -b 1 -c 5 -g 0.1 -i 5 \
  -q 1000x100x100x50x20 -f 10x6x4x2x1 -s 5x3x2x1x0vox \
  -n 0 -r 1 -l 1 -m CC -t SyN -k 1 \
  -u 10:00:00 -y 0 \
  -z ../target.nii.gz \
  *.nii.gz
I'm not sure whether my use of -l, -r, -y, and -z is correct. The docs say:
"Rigid initialization is useful when you do not have an initial template, or you want to use a single image as a reference for rigid alignment only. For example, "-z tpl-MNI152NLin2009cAsym_res-01_T1w.nii.gz -y 0 -r 1" will rigidly align the inputs to the MNI template, and then use their average to begin the template building process."
This sounds like what I want, so I set -r 1 and -y 0, but I'm not sure why I wouldn't want to use the rigid transformation.
Could it be something to do with the -q, -f, and -s parameters? Or would a different transformation work better?
Here's what my template looks like. At the moment I'm using 10 brains.
Thanks in advance
Using ANTs 2.6.0
The template looks fine to me. What specifically were you expecting that you're not seeing in your results?
Hi, I think I was just expecting better white/grey matter contrast. I'll admit I haven't actually seen many outputs from this script, so maybe this is typical.
You may get a better output by dropping -r 1, so that the first round of template construction does a full alignment to the target and the result afterwards evolves away from it towards the unbiased template.
Assuming there are no image quality issues in the source data, I'd first check the registrations. Look at the warped images from the last iteration and see whether there are outliers, or whether registration could be better in general.
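For example, a quick check like this can flag outliers (a sketch: it assumes the final warped images follow the script's *WarpedToTemplate.nii.gz naming and that the template is T1TMP_template0.nii.gz; adjust to your -o prefix):

# Compare each final warped image to the template with a global CC metric;
# anything with a noticeably worse value is worth opening in a viewer.
template=T1TMP_template0.nii.gz
for w in T1TMP_*WarpedToTemplate.nii.gz; do
  cc=$(MeasureImageSimilarity -d 3 -m "CC[${template},${w},1,4]")
  echo "${w} ${cc}"
done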
The default CC radius is 4; you might try CC[2] for better local matching.
Then I would consider the averaging. The normalized mean, -a 1, has been the default forever, but it computes the mean over the whole volume. I've found I get better results with some preprocessing: after bias correction, I do a segmentation and scale each image so that its mean white matter intensity is 1000, then build the template with -a 0.
Even without a segmentation, you can still normalize offline with the brain mask, using
ImageMath 3 normalized.nii.gz Normalize T1w.nii.gz Mask.nii.gz
and then use -a 0.
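A rough sketch of that offline normalization, assuming per-subject brain masks with matching names (sub-*_T1w.nii.gz / sub-*_mask.nii.gz are placeholders):

# Normalize each skull-stripped, bias-corrected image within its brain mask,
# then build the template from the normalized copies with a plain average.
for img in sub-*_T1w.nii.gz; do
  mask=${img%_T1w.nii.gz}_mask.nii.gz
  ImageMath 3 norm_${img} Normalize ${img} ${mask}
done
# Then run antsMultivariateTemplateConstruction2.sh as before, but with -a 0
# and norm_sub-*_T1w.nii.gz as the inputs.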
Lastly, there is a sharpening step applied, which can sometimes exacerbate outlier registrations or over-sharpen when N is small. To see its effect, you can recreate the penultimate template by running AverageImages on the final warped images. It will be blurrier, but you can then apply ImageMath Laplacian (to replicate the script), an unsharp mask, or some other technique from elsewhere.
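For example (a sketch, again assuming the *WarpedToTemplate.nii.gz naming; the 0 turns off AverageImages' own intensity normalization):

# Recreate the unsharpened average of the final warped images.
AverageImages 3 T1TMP_unsharpened_template.nii.gz 0 T1TMP_*WarpedToTemplate.nii.gz

You can then experiment with sharpening this average separately instead of relying on the step built into the script.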
"You may get a better output by dropping -r 1, so that the first round of template construction does a full alignment to the target and the result afterwards evolves away from it towards the unbiased template."
This can also help, but it might take more iterations to converge away from the initial template if you do full deformation right away. I usually build in stages, starting with -r 1 and doing 3 iterations of affine-only registration with -A 0 (no sharpening). Then I use the output as input to deformable registration with -z.
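Roughly like this (a sketch of that staged workflow; prefixes, iteration counts, and the target path are illustrative):

# Stage 1: rigid initialization to the target, then a few affine-only
# template iterations with no sharpening (-A 0).
antsMultivariateTemplateConstruction2.sh -d 3 -i 3 -r 1 -t Affine -A 0 \
  -z ../target.nii.gz -o "${outputPath}affine_" *.nii.gz

# Stage 2: use the affine template as the starting point and run the
# deformable (SyN) iterations; no -r this time.
antsMultivariateTemplateConstruction2.sh -d 3 -i 5 -t SyN -m CC \
  -z "${outputPath}affine_template0.nii.gz" -o "${outputPath}T1TMP_" *.nii.gz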
The -y 0 option was put in to deal with people giving the script input images that weren't well aligned in physical space. This is particularly a problem for longitudinal templates, where there are often only N=2 or N=3 inputs. Sometimes the images would register fine, but because the origin was set inconsistently, the registration would include a large translation, and the template would end up getting shifted in undesirable ways (the template wants to be in the "middle", such that the mean displacement to all inputs is minimal).
The other reason people wanted -y 0 is that they wanted their final templates to match some initial alignment, e.g. so that the mid-sagittal slice lies exactly between the two hemispheres. But even without the rigid update there can still be residual drift, so I don't use this option much. If the template really needs to be AC-PC aligned or in some other particular position, I just do that after the fact.
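For example, a post-hoc rigid alignment could look like this (a sketch; the MNI filename from the docs snippet above is just one possible reference, and T1TMP_template0.nii.gz is the assumed template output name):

# Rigidly register the finished template to a reference in the desired
# orientation; -t r restricts antsRegistrationSyN.sh to a rigid transform.
antsRegistrationSyN.sh -d 3 -t r \
  -f tpl-MNI152NLin2009cAsym_res-01_T1w.nii.gz \
  -m T1TMP_template0.nii.gz \
  -o T1TMP_toMNIrigid_

The T1TMP_toMNIrigid_Warped.nii.gz output is the template resampled into the reference space; note it will be on the reference image's grid and resolution.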
"This can also help, but it might take more iterations to converge away from the initial template if you do full deformation right away. I usually build in stages, starting with -r 1 and doing 3 iterations of affine-only registration with -A 0 (no sharpening). Then I use the output as input to deformable registration with -z."
Oddly enough, that's how my https://github.com/CoBrALab/optimized_antsMultivariateTemplateConstruction pipeline does it :)
Despite this, my experience has been that very often the cortex ends up in the kind of "blurry" overlap where a dominant unbiased shape never seems to emerge from the average. Starting with a target (say, a subject rigidly pre-aligned to a standard orientation) fixes this.