Multi-Level-Global-Context-Cross-Consistency

Unable to Reproduce the Final Result

Open thesupermanreturns opened this issue 2 years ago • 4 comments

Hi, we ran the code for 295 epochs. Below is the log from the end of the run. Please help us if we are missing something.

```
epoch [294/295] train_loss 0.2000 supervised_loss 0.1954 consistency_loss 0.0012 train_iou: 0.9596 - val_loss 0.5416 - val_iou 0.6689 - val_SE 0.5690 - val_PC 0.6468 - val_F1 0.5644 - val_ACC 0.7565
```

We made this modification to the learning rate because we were encountering "RuntimeError: For non-complex input tensors, argument alpha must not be a complex number.", based on the link you provided in another issue:

```python
def adjust_learning_rate(optimizer, i_iter, len_loader, max_epoch, power, args):
    lr = lr_poly(args.base_lr, i_iter, max_epoch * len_loader, power)
    optimizer.param_groups[0]['lr'] = lr
    if len(optimizer.param_groups) > 1:
        optimizer.param_groups[1]['lr'] = lr * 10
    return lr

lr_ = adjust_learning_rate(optimizer, iter_num, len(trainloader), max_epoch, 0.9, args)
```
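For reference, `lr_poly` is not defined in the snippet above; a minimal sketch of the standard polynomial-decay form it is assumed to follow:

```python
def lr_poly(base_lr, i_iter, max_iter, power):
    # Standard polynomial decay: lr shrinks from base_lr toward 0 over max_iter steps.
    # Note: if i_iter ever exceeds max_iter, the base (1 - i_iter/max_iter) goes
    # negative and the fractional power yields a complex number (see the
    # RuntimeError quoted above).
    return base_lr * (1 - i_iter / max_iter) ** power
```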

thesupermanreturns · Aug 22 '23, 10:08

This may be a problem caused by the learning rate dropping to 0. You can use CosineAnnealingLR.
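A minimal illustration of why this can surface as the quoted RuntimeError: once the iteration counter passes the decay horizon, the poly base goes negative, and a negative float raised to a fractional power is a complex number in Python 3 (assuming the `lr_poly` form sketched above):

```python
# Illustration only (not the repo's code): how a poly schedule can produce a complex lr.
base_lr, power, max_iter = 0.01, 0.9, 1000

lr_ok  = base_lr * (1 - 999  / max_iter) ** power   # small positive float
lr_bad = base_lr * (1 - 1001 / max_iter) ** power   # negative base ** 0.9 -> complex

print(type(lr_ok), type(lr_bad))  # <class 'float'> <class 'complex'>
```

A complex value assigned to `param_groups[...]['lr']` is then passed as `alpha` inside the optimizer's update, which triggers the error.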

FengheTan9 avatar Aug 22 '23 11:08 FengheTan9

Could you please provide the code, or refer us to a link? Thanks for replying.

thesupermanreturns · Aug 22 '23, 12:08

```python
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer=optimizer, T_max=max_epoch)
```

and call `scheduler.step()` at the end of each epoch.
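A minimal, self-contained sketch of that suggestion; `model`, `trainloader`, `criterion`, and `max_epoch` below are placeholders, not the repo's objects:

```python
import torch
import torch.nn as nn

# Placeholder model/data so the sketch runs standalone; substitute the repo's
# network and dataloader in practice.
model = nn.Linear(10, 2)
trainloader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(5)]
criterion = nn.CrossEntropyLoss()
max_epoch = 3

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epoch)

for epoch in range(max_epoch):
    for images, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # one scheduler step per epoch, after the inner batch loop
```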

or

or change `adjust_learning_rate` to decay over `max_iterations`.
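One reading of this suggestion (an assumption, not the author's code): decay over `max_iterations` and clamp the iteration counter so the learning rate can never go negative or complex:

```python
def adjust_learning_rate(optimizer, i_iter, max_iterations, base_lr, power=0.9):
    # Hypothetical variant: clamp i_iter so the poly base (1 - i_iter/max_iterations)
    # never goes negative, which avoids a complex learning rate.
    i_iter = min(i_iter, max_iterations)
    lr = base_lr * (1 - i_iter / max_iterations) ** power
    for group in optimizer.param_groups:
        group['lr'] = lr
    return lr

# max_iterations is typically the total number of optimizer steps, e.g.:
# max_iterations = max_epoch * len(trainloader)
```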

FengheTan9 · Aug 22 '23, 14:08

Hello, is there a formula for calculating this max_iterations?

HNU-CPF · Aug 30 '23, 02:08