med-seg-diff-pytorch
Training loss: very high volatility in loss
Hello, thanks for your code. It is elegant and clear, and it has helped me a lot.
I ran into a problem: at the beginning of training the loss behaves very well, around 0.001.
The default end epoch is set to 10000, but after 2000+ epochs the training loss jumps to a surprising value, around "Training Loss : 325440.0592". I am curious: have you ever encountered this issue before?
The training batch size is 96 on 4 GPUs with PyTorch DDP. Since the full training set only contains about 4000 images, the 4 GPUs need only about 10 iterations to finish an epoch. Do you think this could be the reason?
Thanks for your code.
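For reference, the iteration count mentioned above works out roughly as in the sketch below (treating 96 as the per-GPU batch size is an assumption, since the post does not say whether it is per GPU or global):

```python
# Rough steps-per-epoch estimate for the DDP setup described above,
# assuming a per-GPU batch size of 96 across 4 GPUs.
dataset_size = 4000
per_gpu_batch_size = 96
num_gpus = 4

global_batch_size = per_gpu_batch_size * num_gpus      # 384 samples per optimizer step
steps_per_epoch = dataset_size // global_batch_size    # ~10 steps per epoch
print(steps_per_epoch)
```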
I have the same problem. Have you solved the problem?
Hi bro. Have you solved the problem?
Not yet. Testing with the checkpoint from the 10,000th epoch gives very poor results. Using the checkpoint with the lowest loss before the fluctuation is OK, although the result is still not good, probably because of the small size of my dataset. I am trying to adjust the parameters.
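Picking the checkpoint with the lowest loss can be automated by saving only when the loss improves. A minimal sketch, using a toy model and loop rather than this repo's driver.py:

```python
import torch
import torch.nn as nn

# Toy model and data standing in for the real segmentation setup.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
data, target = torch.randn(32, 10), torch.randn(32, 1)

best_loss = float("inf")
for epoch in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), target)
    loss.backward()
    optimizer.step()

    # Save only when the loss improves, so a later loss explosion
    # cannot overwrite the best weights seen so far.
    if loss.item() < best_loss:
        best_loss = loss.item()
        torch.save(model.state_dict(), "best_model.pt")
```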
Hey guys, have you solved the problem?
Hey guys, I got a solution: add a scheduler to control the learning rate, e.g. scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, patience=50, verbose=True, min_lr=1e-6), and then call scheduler.step(THE LOSS YOU DEFINED). Note that the epoch_loss in driver.py seems to actually be the batch loss, so I rewrote the loss as well. Good luck ^^
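A minimal sketch of the suggested fix, using a toy model in place of the real training loop (the variable names and loop structure here are assumptions, not the repo's driver.py):

```python
import torch
import torch.nn as nn

# Toy model and data; the real loop lives in driver.py.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.2, patience=50, min_lr=1e-6)

batches = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(10)]

for epoch in range(200):
    running_loss = 0.0
    for x, y in batches:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    # Step the scheduler on the average loss over the whole epoch,
    # not on the last batch's loss.
    epoch_loss = running_loss / len(batches)
    scheduler.step(epoch_loss)
```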
@pnaclcu Good job! Can you share your loss code?
Hi everyone, my loss after each epoch is NaN (the loss value becomes NaN after some batches). I checked the input data (images and masks) and there is no problem with the data. Has anyone run into the same problem?
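One way to narrow down where the NaN first appears is to check tensors for non-finite values and enable anomaly detection. A generic sketch, not specific to this repo (the batch tensor below is a placeholder):

```python
import torch

# Turn on anomaly detection so backward() raises where a NaN/Inf is produced.
torch.autograd.set_detect_anomaly(True)

def check_finite(name, tensor):
    # Raise as soon as a tensor (batch, prediction, or loss) goes non-finite,
    # so the offending step can be identified.
    if not torch.isfinite(tensor).all():
        raise RuntimeError(f"{name} contains NaN or Inf")

# Example usage inside a training loop:
batch = torch.randn(4, 3, 64, 64)
check_finite("batch", batch)
```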
We can set the parameter args.scale_lr to False to solve this problem.
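For context, scale_lr in many accelerate-style training scripts multiplies the base learning rate by the effective batch size, which can make it far larger than intended. Below is a generic sketch of that pattern with an illustrative base rate of 1e-4; it is an assumption, not a quote from this repo's driver.py:

```python
# Generic sketch of the scale_lr pattern; exact names in driver.py may differ.
learning_rate = 1e-4
gradient_accumulation_steps = 1
train_batch_size = 96   # per-process batch size from the thread above
num_gpus = 4

scale_lr = True
if scale_lr:
    # Scaling turns 1e-4 into ~3.8e-2 here, which can destabilize training;
    # with scale_lr = False the base learning rate of 1e-4 is kept.
    learning_rate = (learning_rate * gradient_accumulation_steps
                     * train_batch_size * num_gpus)

print(learning_rate)
```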