med-seg-diff-pytorch

Train loss: Very high volatility in loss

Open pnaclcu opened this issue 1 year ago • 8 comments

Hello, thanks for your code. It is elegant and clear, and it has helped me a lot. I ran into a problem: the training loss looks very good (about 0.001) at the beginning of training.
The default end epoch is set to 10000, but the training loss explodes to a surprising value like "Training Loss: 325440.0592" after 2000+ epochs. I am curious: have you ever encountered this before? The training batch size is 96 with 4 GPUs under PyTorch DDP. Since the full training set contains only about 4000 images, the 4 GPUs need only about 10 iterations to finish an epoch (see the quick check below). Do you think this is the reason? Thanks again for your code.

pnaclcu avatar Mar 30 '23 02:03 pnaclcu
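For reference, a quick check of the iteration arithmetic described above. This assumes 96 is the per-GPU batch size, which is how the reported numbers work out:

```python
# Rough iterations-per-epoch estimate for the setup in the issue
# (assumption: 96 is the per-GPU batch size, 4 DDP processes, ~4000 images).
num_images = 4000
per_gpu_batch = 96
num_gpus = 4

iters_per_epoch = num_images / (per_gpu_batch * num_gpus)
print(iters_per_epoch)  # ~10.4, matching the "about 10 iterations" mentioned above
```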

I have the same problem. Have you solved the problem?

yuan5828225 avatar Apr 03 '23 08:04 yuan5828225

I have the same problem. Have you solved the problem?

Hi bro. Have you solved the problem?

yibochen38 avatar Apr 10 '23 10:04 yibochen38

I have the same problem. Have you solved the problem?

Hi bro. Have you solved the problem?

Not yet. Testing with the checkpoint from the 10,000th epoch gives very poor results. Using the checkpoint with the lowest loss before the fluctuation is OK, although the result is still not good, probably because my dataset is small. I am trying to tune the parameters.

yuan5828225 avatar Apr 10 '23 11:04 yuan5828225

Hey guys. Have you solved the problem?

Alan-Py avatar Apr 15 '23 04:04 Alan-Py

I have the same problem. Have you solved the problem?

Hi bro. Have you solved the problem?

Not yet. Testing with the checkpoint from the 10,000th epoch gives very poor results. Using the checkpoint with the lowest loss before the fluctuation is OK, although the result is still not good, probably because my dataset is small. I am trying to tune the parameters.

Hey guys, I got a solution. Add a scheduler to control the learning rate, e.g. scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, patience=50, verbose=True, min_lr=1e-6) and then call scheduler.step(THE LOSS YOU DEFINED). Note that what driver.py calls epoch_loss actually seems to be the loss of the last batch, so I also rewrote the loss before stepping the scheduler (a sketch follows below). gl ^^

pnaclcu avatar Apr 15 '23 06:04 pnaclcu
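A minimal sketch of the fix described above: wrap the optimizer in ReduceLROnPlateau and step it on a true per-epoch average loss instead of the last batch loss. The scheduler call and its arguments come from the comment; the training loop and the names model, diffusion, dataloader and num_epochs are placeholders, since the exact rewrite of driver.py is not shown in this thread.

```python
import torch

# Placeholder optimizer; driver.py defines its own.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.2, patience=50, verbose=True, min_lr=1e-6
)

for epoch in range(num_epochs):
    running_loss, num_batches = 0.0, 0
    for imgs, masks in dataloader:
        loss = diffusion(masks, imgs)  # placeholder forward pass returning the diffusion loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        num_batches += 1

    epoch_loss = running_loss / num_batches  # average over the whole epoch, not just the last batch
    scheduler.step(epoch_loss)               # lowers the LR when epoch_loss stops improving
```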

@pnaclcu Good job! Can you share your loss code?

Alan-Py avatar Apr 15 '23 13:04 Alan-Py

Hi bros, my loss becomes NaN after each epoch (the loss value turns NaN after some batches). I checked the input data (images and masks) and there is no problem with it. Does anyone have the same problem?

nhthanh0809 avatar Jul 07 '23 08:07 nhthanh0809
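The thread does not show a fix for this, but a common way to localize the first non-finite loss is PyTorch's anomaly detection plus an explicit per-step check. A sketch, with diffusion, dataloader, model and optimizer as placeholders for whatever the training script defines:

```python
import torch

torch.autograd.set_detect_anomaly(True)  # reports which backward op first produced NaN/Inf

for step, (imgs, masks) in enumerate(dataloader):
    loss = diffusion(masks, imgs)  # placeholder forward pass
    if not torch.isfinite(loss):
        print(f"non-finite loss at step {step}: {loss.item()}")
        break
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping is a common guard against the loss blowing up to NaN.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```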

We can set the parameter args.scale_lr to False to solve this problem.

ChenqinWu avatar Dec 03 '23 11:12 ChenqinWu
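For context on why turning this flag off helps: if driver.py follows the common diffusers-style convention (an assumption; check the actual script and argument names), scale_lr multiplies the base learning rate by the effective batch size, which becomes very large with batch size 96 on 4 GPUs:

```python
# Sketch of the usual scale_lr behaviour (argument names and values are assumptions).
base_lr = 1e-4
train_batch_size = 96
num_processes = 4      # e.g. one process per GPU under DDP/accelerate
grad_accum_steps = 1

scaled_lr = base_lr * grad_accum_steps * train_batch_size * num_processes
print(scaled_lr)  # 0.0384, large enough to destabilize training; scale_lr=False keeps base_lr
```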