
Bug transforming small-FOV EPI with multiple volumes to MNI space with antsApplyTransforms

Open zhanminye opened this issue 8 months ago • 4 comments

Operating system and version

Ubuntu 24.04 in Windows 11 WSL2

CPU architecture

x86_64 (PC, Intel Mac, other Intel/AMD)

ANTs code version

2.5.4.post7-g9c98082

ANTs installation type

Compiled from source

Summary of the problem

I have 7T EPI data (1.3 mm isotropic) with partial brain coverage. The resolution of the anatomical data is 0.65 mm isotropic. I'd like to transform the functional data into MNI space while keeping the functional resolution.

When using the functional data as the reference image in antsApplyTransforms (affine transformation only), it produces one bug and one related issue.

The issue: when transforming only one volume of the run, the transformed volume is rotated correctly, but the brain is cropped due to the small FOV of the reference image (dimensions: 176, 176, 70).

The bug: when transforming multiple volumes (I tried 5 volumes and the whole run), the transformation is applied to each volume, but the data slab ends up far off target, and a large part of the transformed slab is filled with zeros. In the screenshot, the contrast was adjusted to make the data slab visible. There seems to be some incorrect interaction with the reference image's small FOV: the wrong part of the data is being transformed. This is unexpected behavior, since the command was exactly the same; only the input was multi-volume instead of single-volume.

Image

When I padded the reference image by 50 voxels in all 3 dimensions (dimensions: 226, 226, 120), the 5-volume transformation succeeded, although the generated data was much bigger (containing many zero-valued voxels).

My question: for small-FOV data, is there a way to generate a reference EPI image in the target space (MNI) that has the minimal matrix dimensions without inducing additional clipping? Or are there other possible workarounds?

In the .gif animation below, you can see:

  • The anatomy
  • The reference image's FOV (in blue)
  • The one-volume transformation that's successful but cropped
  • The 5-volume transformation that was wrong
  • The 5-volume run first zero-padded then transformed, which was successful, but the data was unnecessarily big

Image

Commands to reproduce the problem.

```
antsApplyTransforms \
  --verbose \
  --interpolation LanczosWindowedSinc \
  -d 3 \
  -e 3 \
  -i S01_run1_5vols.nii \
  -o S01_run1_5vols_MNI.nii \
  -r S01_run1_5vols.nii \
  -t S01_ses1_anat_MNI_0GenericAffine.mat \
  --float
```

Output of the command with verbose output

```
Using single precision for computations.
Input time-series image: S01_run1_5vols.nii
Reference image: S01_run1_5vols.nii
=============================================================================
The composite transform comprises the following transforms (in order):
  1. S01_ses1_anat_MNI_0GenericAffine.mat (type = AffineTransform)
=============================================================================
Default pixel value: 0
Interpolation type: WindowedSincInterpolateImageFunction
  Applying transform(s) to timePoint/channel/dimension5 0 (out of 5).
  Applying transform(s) to timePoint/channel/dimension5 1 (out of 5).
  Applying transform(s) to timePoint/channel/dimension5 2 (out of 5).
  Applying transform(s) to timePoint/channel/dimension5 3 (out of 5).
  Applying transform(s) to timePoint/channel/dimension5 4 (out of 5).
Output warped image: S01_run1_5vols_MNI.nii
```

Data to reproduce the problem

[S01_run1_5vols_example.zip](https://github.com/user-attachments/files/19815037/S01_run1_5vols_example.zip)

zhanminye · Apr 18 '25 18:04

Since you're just using a linear transform, why not use the anatomical image as the reference image to generate the affine transform, then resample the reference resolution to the desired functional resolution?

ntustison · Apr 18 '25 18:04

The MNI affine matrix was generated between anatomical images.

I would like to keep a minimal EPI data dimension (bounding box) that covers the transformed data slab. The anatomical data's dimensions are too big (256, 342, 342), even if I downsample the anatomical image. It would be great to automatically generate the proper bounding box, but I don't know how to do that in code...
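One way to compute such a bounding box is to map the eight corners of the EPI voxel grid through the voxel-to-MNI affine and take the axis-aligned extremes. The sketch below is pure Python with an illustrative matrix and grid size (not the actual headers from this data); in practice you would read the voxel-to-world matrix and the ITK affine from the files (e.g. with nibabel), then build a reference image from the resulting origin and matrix size.

```python
import math

def apply_affine(m, p):
    """Apply a 4x4 affine (nested lists) to a 3D point."""
    x, y, z = p
    return [m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(3)]

def minimal_bounding_box(vox2target, dims, out_spacing):
    """Map the 8 corners of a dims-sized voxel grid into the target space
    and return (origin, matrix size) of the tightest axis-aligned box
    sampled at out_spacing."""
    corners = [(i * (dims[0] - 1), j * (dims[1] - 1), k * (dims[2] - 1))
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    pts = [apply_affine(vox2target, c) for c in corners]
    lo = [min(p[a] for p in pts) for a in range(3)]
    hi = [max(p[a] for p in pts) for a in range(3)]
    size = [int(math.ceil((hi[a] - lo[a]) / out_spacing[a])) + 1
            for a in range(3)]
    return lo, size

# Toy example: a 2 mm isotropic grid mapped by a pure scaling affine.
vox2mni = [[2.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 2.0, 0], [0, 0, 0, 1.0]]
origin, size = minimal_bounding_box(vox2mni, (10, 10, 5), (2.0, 2.0, 2.0))
print(origin, size)  # [0.0, 0.0, 0.0] [10, 10, 5]
```

With a rotation in the affine, the box grows just enough to enclose the tilted slab, which is the minimal-clipping reference grid you are after.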

Here is a mockup of the situation.

Image

I usually process data in BrainVoyager, which automatically generates an appropriate bounding box without excessive zero-padding. Even so, the data size is already big: for my 1.2 to 1.3 mm datasets, each single run is about 2 to 3 GB; for 0.8 mm datasets, each single run is 6 to 7 GB.

zhanminye · Apr 18 '25 18:04

Automatic determination of an "optimal" bounding box is a different problem from what you wrote originally. It's certainly not a bug. But that's something we don't do in ANTs, so you'd have to develop it yourself.

ntustison · Apr 18 '25 18:04

Here's how I would try to do this:

  1. Extract the first volume of the slab EPI data

  2. Call ResampleImageBySpacing to downsample the anatomical image to 1.3 mm isotropic.

  3. Call antsApplyTransforms to transform the first volume to the anatomical space. This will be the "slab reference" image.

  4. Threshold the slab reference image so that it produces a binary mask, 1 in the slab coverage, 0 for background.

  5. Call ExtractRegionFromImageByMask to produce a cropped slab reference image with appropriate padding.

  6. Call antsApplyTransforms to transform the whole EPI series with the padded slab image as the reference.
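For concreteness, the recipe above could be assembled roughly as follows. This is a dry-run sketch that only prints the command lines; all filenames, the threshold bounds, and the pad radius are placeholders, and step 1 (extracting the first volume) is left to your tool of choice. Pass each list to subprocess.run() to actually execute it.

```python
# Dry-run sketch of the steps above. Nothing here is executed; filenames and
# numeric choices are placeholders.
epi_4d = "S01_run1_5vols.nii"   # full EPI series (from the issue)
epi_vol0 = "epi_vol0.nii"       # step 1: first volume, extracted beforehand
affine = "S01_ses1_anat_MNI_0GenericAffine.mat"

cmds = [
    # 2. Downsample the anatomical image to the functional resolution (1.3 mm).
    ["ResampleImageBySpacing", "3", "anat.nii", "anat_1p3.nii",
     "1.3", "1.3", "1.3"],
    # 3. Transform the first volume into the downsampled anatomical space;
    #    this becomes the "slab reference" image.
    ["antsApplyTransforms", "-d", "3", "-i", epi_vol0, "-r", "anat_1p3.nii",
     "-t", affine, "-o", "slab_ref.nii"],
    # 4. Binarize: 1 inside the slab coverage, 0 in the background
    #    (threshold bounds are placeholders).
    ["ThresholdImage", "3", "slab_ref.nii", "slab_mask.nii",
     "1e-6", "1e9", "1", "0"],
    # 5. Crop the slab reference to the mask (label 1) with a 5-voxel pad.
    ["ExtractRegionFromImageByMask", "3", "slab_ref.nii", "slab_ref_crop.nii",
     "slab_mask.nii", "1", "5"],
    # 6. Transform the whole series with the cropped slab as the reference.
    ["antsApplyTransforms", "-d", "3", "-e", "3", "-i", epi_4d,
     "-r", "slab_ref_crop.nii", "-t", affine, "-o", "S01_run1_5vols_MNI.nii"],
]

for cmd in cmds:
    print(" ".join(cmd))  # replace with subprocess.run(cmd) to execute
```

Check each tool's usage string before running; the argument orders shown here follow my reading of the ANTs utilities and may need adjusting for your version.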

BTW, as well as using --float to save memory as the program runs, you can specify --output-data-type 2 to write the output as signed short, which should save space on disk. You just have to ensure that the data fits in that range (i.e., int16).
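As a quick sanity check before writing signed short, you can verify that the intensity range fits. A pure-Python illustration (with real data you would check the min/max of the loaded array, e.g. via nibabel):

```python
# Signed short (int16) range: -32768 .. 32767.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

def fits_int16(values):
    """True if every value, once rounded, fits in the signed short range."""
    return all(INT16_MIN <= round(v) <= INT16_MAX for v in values)

print(fits_int16([0.0, 1500.7, 32767.0]))  # True
print(fits_int16([40000.0]))               # False: would overflow int16
```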

cookpa · Apr 18 '25 19:04