
Training loss explodes when training on my own dataset

Open msxiaojin opened this issue 4 years ago • 1 comment

Hi, thanks for your work! I tried to train the model on my own dataset. I used the default settings except for the batch size, which I reduced to 4 to avoid a GPU out-of-memory error.

However, during training the loss suddenly went to Inf after 2 epochs. Have you encountered a similar problem? If so, how did you solve it?

Best regards.

msxiaojin · Sep 09 '20
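As a general PyTorch safeguard (not code from this repository), a training step can skip non-finite batches and clip gradients so a single bad update cannot blow the loss up to Inf. A minimal sketch; `train_step` and all of its arguments are hypothetical names:

```python
import torch

# Sketch of a guarded training step for a super-resolution model,
# assuming a generic PyTorch loop (model, loss_fn, and optimizer
# are illustrative, not taken from this repository).
def train_step(model, loss_fn, optimizer, lr_batch, hr_batch, max_grad_norm=1.0):
    optimizer.zero_grad()
    sr = model(lr_batch)
    loss = loss_fn(sr, hr_batch)

    # Skip the update entirely if the loss is already non-finite,
    # so one bad batch cannot poison the weights.
    if not torch.isfinite(loss):
        print(f'Skipping batch: non-finite loss {loss.item()}')
        return None

    loss.backward()
    # Clip the gradient norm to bound the update size; this often
    # prevents the kind of sudden divergence described above.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```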

Hi, please try reducing the patch size and keeping the batch size at 8 or more.

HarukiYqM · Sep 09 '20
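To see why this advice need not hit the memory limit again: for a fully convolutional model, activation memory grows roughly with batch_size × patch_size², so shrinking the patch frees room for a larger batch. A rough back-of-the-envelope sketch; the patch sizes below are illustrative, not this repository's defaults:

```python
# Rough heuristic: activation memory for a fully convolutional SR
# model scales approximately with batch_size * patch_size**2.
# Numbers are illustrative, not measured on this model.
def relative_activation_cost(batch_size, patch_size):
    return batch_size * patch_size ** 2

baseline = relative_activation_cost(4, 48)   # the poster's setting
adjusted = relative_activation_cost(8, 32)   # batch >= 8, smaller patches

# 8 * 32**2 = 8192 vs 4 * 48**2 = 9216: doubling the batch size while
# shrinking the patch uses slightly *less* activation memory.
print(adjusted / baseline)  # ~0.89
```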