
How can I train an nnU-Net to predict on 2D slices?

Open · Chuyun-Shen opened this issue 6 months ago · 0 comments

I have a 3D MRI dataset, and I want to train a 2D network that predicts the most central coronal slice (a single 2D slice). For training, I would like to use the 20 most central 2D slices to improve robustness.
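For reference, here is a rough sketch of the slice selection I have in mind, using nibabel/numpy. The file name and the assumption that the coronal direction is array axis 1 are placeholders and depend on the actual image orientation:

```python
import nibabel as nib
import numpy as np

img = nib.load("case_0000.nii.gz")      # hypothetical file name
data = np.asanyarray(img.dataobj)       # e.g. shape (X, Y, Z)

coronal_axis = 1                        # assumption: axis 1 is the coronal direction
center = data.shape[coronal_axis] // 2

# the 20 slices centred on the middle coronal slice
central = np.take(data, range(center - 10, center + 10), axis=coronal_axis)
```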

I have some approaches in mind, but since I'm not very familiar with nnU-Net v2's code, I hope you can help me determine which one might be better:

  1. Convert the 3D images into multiple PNGs, scale the images so that the spacing is 1x1x1, and then train the model.
  2. Train a 2D network using the 3D NIfTI (.nii.gz) images directly, and then use the 2D network for inference on individual slices. I’m unsure how to perform inference on 2D slices from a 3D volume.
  3. Convert the 3D images into 2D images of shape[0] x shape[1] x 1 in .nii.gz format to preserve some header information. However, this results in a 2D 'patch_size' of (np.int64(576), np.int64(1)) in the 2D plans generated by nnUNetv2_plan_and_preprocess (a rough sketch of such a conversion follows below this list).
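For option 3, this is a minimal sketch of what I am attempting: writing each coronal slice as a pseudo-3D .nii.gz with a singleton axis and a very large spacing along that axis, which is my reading of how nnU-Net's 2D conversion examples seem to handle 2D data so that the planner treats the remaining two axes as in-plane instead of producing a patch_size like (576, 1). The file names, the coronal axis index, and the 999.0 spacing value are assumptions on my part, not something taken from the docs verbatim:

```python
import numpy as np
import SimpleITK as sitk

img = sitk.ReadImage("case_0000.nii.gz")          # hypothetical input volume
arr = sitk.GetArrayFromImage(img)                 # numpy order: (z, y, x)

coronal_axis = 1                                  # assumption: coronal = numpy axis 1
center = arr.shape[coronal_axis] // 2

for i, offset in enumerate(range(-10, 10)):       # the 20 most central slices
    sl = np.take(arr, center + offset, axis=coronal_axis)   # 2D array
    pseudo3d = sitk.GetImageFromArray(sl[None])   # shape (1, H, W) -> sitk size (W, H, 1)
    # very large spacing on the singleton axis so the planner hopefully treats it
    # as out-of-plane (assumption based on nnU-Net's 2D conversion examples)
    pseudo3d.SetSpacing((1.0, 1.0, 999.0))
    sitk.WriteImage(pseudo3d, f"slice{i:03d}_0000.nii.gz")
```

The corresponding label maps would presumably need the same per-slice conversion (without the _0000 channel suffix), but I am not sure this is the intended way to prepare 2D data for v2.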

Looking forward to your reply. Thank you in advance.

Chuyun-Shen · Aug 23 '24 01:08