
Can you share the loss curve of the Swin-UNETR pre-training process

upupming opened this issue 3 years ago • 5 comments

Hi, thanks for your great work on Swin-UNETR. I am trying to run pre-training on another dataset (~2000 CTs), but the loss curve does not seem to decrease:

[image: pre-training loss curve]

Could you share your loss curve on the 5,050-CT dataset? Thank you very much!

I am pre-training the model on a single GPU with batch size 2.

upupming · Jul 01 '22

Hi @upupming

I see. We trained the model on 4 nodes with 8 GPUs each (32 GPUs in total). We noticed that scaling the pre-training up to multi-node is important for convergence.
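If it helps, below is a minimal multi-process sketch that gives a larger effective batch. It is not the authors' exact launcher, and `SwinUNETR` is used only as a stand-in for the actual pre-training network (the research-contributions repo wraps the Swin encoder with self-supervised heads); argument names may vary slightly across MONAI versions.

```python
# Minimal DDP sketch (an assumption, not the exact research-contributions script):
# one process per GPU, launched e.g. with torchrun, so the effective batch size
# becomes per-GPU batch size x world size (2 x 32 = 64 in the original setup).
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from monai.networks.nets import SwinUNETR

dist.init_process_group(backend="nccl")  # RANK/WORLD_SIZE env vars are set by torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# SwinUNETR is only a placeholder here for the pre-training model.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])
```

With only one GPU, gradient accumulation can approximate a larger effective batch, but it does not reproduce the multi-node setting described above.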

ahatamiz · Jul 02 '22

Hi @ahatamiz, I meet the same problem, as I am pre-training the model on a single GPU with batch size 2. My input and ground truth are shown in the images below, but the output does not bear any resemblance to the original image. Do you mean that training on multiple GPUs is needed to get good results?

[images: x1_aug, x1_gt, x1_recon]

GaoHuaZhang · Nov 15 '22

Hi @GaoHuaZhang,

Thanks. Batch size should be a key point. We no longer have the record for this training run, but https://github.com/Project-MONAI/tutorials/tree/main/self_supervised_pretraining uses a similar pre-training strategy; you can see loss curves by following that tutorial.
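For reference, here is a minimal sketch of the loss used in that tutorial: L1 reconstruction of an augmented view plus a contrastive term between two augmented views. The augmentation pipeline is omitted and the relative loss weighting is a simplified assumption, so check the tutorial for the exact recipe.

```python
# Sketch of the self-supervised pre-training objective from the MONAI tutorial:
# reconstruct a corrupted view with L1 loss and pull two views of the same crop
# together with a contrastive loss.
import torch
from monai.networks.nets import ViTAutoEnc
from monai.losses import ContrastiveLoss

model = ViTAutoEnc(
    in_channels=1,
    img_size=(96, 96, 96),
    patch_size=(16, 16, 16),
)

recon_loss = torch.nn.L1Loss()
contrast_loss = ContrastiveLoss(temperature=0.05)  # some MONAI versions also take a batch_size arg

# Two differently augmented views of the same CT crop, plus the clean target
# (random tensors here just to make the sketch self-contained).
view_1 = torch.rand(2, 1, 96, 96, 96)
view_2 = torch.rand(2, 1, 96, 96, 96)
target = torch.rand(2, 1, 96, 96, 96)

recon_1, _ = model(view_1)
recon_2, _ = model(view_2)

r_loss = recon_loss(recon_1, target)
c_loss = contrast_loss(recon_1.flatten(start_dim=1), recon_2.flatten(start_dim=1))
total = r_loss + 0.1 * c_loss  # weighting here is an assumption; the tutorial combines them differently
total.backward()
```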

Thanks

tangy5 · Nov 15 '22

> Hi @ahatamiz, I meet the same problem, as I am pre-training the model on a single GPU with batch size 2. My input and ground truth are shown in the images, but the output does not bear any resemblance to the original image. Do you mean that training on multiple GPUs is needed to get good results?

Have you solved the problem? I have the same problem as you.

lyangfan · Apr 06 '23