
Model Training

Open · JuliGH6 opened this issue 1 year ago

Hi, I was wondering if I could train the model further with another dataset. uniGradICON does not work too well with my data, so I would like to try to train it. Who should I contact?

JuliGH6 avatar Jun 24 '24 14:06 JuliGH6

Hi @JuliGH6 ,

Thanks for your interest in our work!

We are preparing the training and evaluation code for uniGradICON. It will be available after we finish cleaning the code.

In the meantime, you can use the training code of GradICON. It is similar to the training code for uniGradICON except for the dataset and dataloader. You can also find more information in #14 regarding how to fine-tune uniGradICON with that training code.
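As a rough, hedged sketch of what fine-tuning from the pretrained uniGradICON weights could look like with a GradICON-style training loop: the `get_unigradicon` loader and the `.all_loss` field below are assumptions based on the unigradicon / icon_registration packages, and the random-tensor dataset is a placeholder for your own dataloader. Please treat the linked training code and #14 as the authoritative version.

```python
# Hedged sketch only: fine-tuning from pretrained uniGradICON weights with a
# GradICON-style loop. `get_unigradicon` and the ICONLoss-style return value
# are assumptions; replace RandomPairDataset with your own dataset/dataloader.
import torch
from torch.utils.data import Dataset, DataLoader
from unigradicon import get_unigradicon  # assumed loader for the pretrained network

class RandomPairDataset(Dataset):
    """Placeholder: yields random moving/fixed pairs at the network's input shape."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        shape = (1, 175, 175, 175)            # fixed input shape mentioned in this thread
        return torch.rand(shape), torch.rand(shape)

net = get_unigradicon().cuda().train()        # start from the pretrained weights
loader = DataLoader(RandomPairDataset(), batch_size=1, shuffle=True)
optimizer = torch.optim.Adam(net.parameters(), lr=5e-5)

for moving, fixed in loader:
    optimizer.zero_grad()
    loss = net(moving.cuda(), fixed.cuda())   # assumed ICONLoss-style namedtuple
    loss.all_loss.backward()                  # similarity + GradICON regularity terms
    optimizer.step()
```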

Please don't hesitate to let me know if you encounter any further issues while using GradICON's training code.

lintian-a avatar Jul 03 '24 01:07 lintian-a

Hello, there is still no training code so far

Tara-Liu avatar Oct 23 '24 01:10 Tara-Liu

Hi @Tara-Liu ,

You can find the training code at this branch.

lintian-a avatar Oct 23 '24 01:10 lintian-a

I see the training scripts resample the images to a fixed shape (e.g. 175 ** 3).

Are the datasets used for the uni/multiGradICON models preprocessed further (e.g. cropping/resampling/masking) to prepare for training, or are they the original datasets as available, e.g., in the L2R challenge(s)?

Generally, if I would want to train on a new dataset, what should I consider?

  • image resolution: do fixed/moving images need to be resampled to the same resolution?
  • should I crop / pad the images to have the same image shape?
  • should I crop the images to have the same ROI, e.g. the same vertebrae, or the entire liver?

dyollb avatar Feb 24 '25 09:02 dyollb

We perform minimal preprocessing on the training datasets: resampling and intensity normalization. Additionally, we apply ROI masking for the lung (COPDGene) and brain (HCP) datasets but not for the others (e.g., abdomen and knee). For the L2R datasets, we only apply resampling and intensity normalization.
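For illustration, a minimal sketch of that kind of preprocessing (resampling to a fixed grid plus min-max intensity normalization) is shown below. The exact resampling and per-modality normalization used for uniGradICON may differ; see the training code for the actual pipeline.

```python
# Minimal sketch, not the exact uniGradICON preprocessing: resample a volume to
# a fixed shape and min-max normalize intensities to [0, 1].
import SimpleITK as sitk
import torch
import torch.nn.functional as F

def load_and_preprocess(path, target_shape=(175, 175, 175)):
    img = sitk.ReadImage(path)
    arr = sitk.GetArrayFromImage(img).astype("float32")      # z, y, x
    t = torch.from_numpy(arr)[None, None]                    # 1 x 1 x D x H x W
    # Resample to a fixed voxel grid; the resulting spacing may be anisotropic.
    t = F.interpolate(t, size=target_shape, mode="trilinear", align_corners=False)
    # Min-max intensity normalization.
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)
    return t
```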

The answers to these questions depend on the scenario.

  1. The network expects the moving and fixed images to have the same shape, but the spacing of each image pair can vary. During our training, the spacing differs across datasets of different anatomical structures. I think this can be seen as an augmentation, and it does not limit the spacing of the input images to a pre-defined value during inference.
  2. We resample images to the same shape, but we do not crop or pad. The training images can have anisotropic spacing.
  3. If structures outside the ROI negatively affect the registration, I recommend masking to the ROI. For example, if structures outside the ROI differ between the fixed and moving images and lie very close to the ROI boundary, they can cause a large deformation that pulls the ROI away from a good alignment. In this case, masking to the ROI may be a good fix (see the sketch after this list). During our training, we apply masks to the lung and brain images.
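As a hedged illustration of point 3, ROI masking can be as simple as zeroing out intensities outside a binary mask before registration. The file handling below is illustrative only; in training, the lung/brain masks come from the respective datasets' segmentations.

```python
# Illustrative sketch of ROI masking: zero out intensities outside a binary
# ROI mask so out-of-ROI structures cannot drive the deformation.
import SimpleITK as sitk
import torch

def apply_roi_mask(image_path, mask_path):
    img = torch.from_numpy(sitk.GetArrayFromImage(sitk.ReadImage(image_path)).astype("float32"))
    mask = torch.from_numpy(sitk.GetArrayFromImage(sitk.ReadImage(mask_path)).astype("float32"))
    return img * (mask > 0).float()   # keep intensities inside the ROI, zero elsewhere
```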

lintian-a avatar Feb 24 '25 21:02 lintian-a

Thanks for your kind help.

dyollb avatar Feb 24 '25 21:02 dyollb

No problem! Please don't hesitate to let me know if you encounter any issues when training GradICON or fine-tuning from uniGradICON.

lintian-a avatar Feb 25 '25 07:02 lintian-a