TricubeNet
Saving weight
```python
best_loss = 999999999
best_dist = 999999999
start = time.time()

for epoch in range(0, args.epochs):
    if distributed:
        train_sampler.set_epoch(epoch)

    # train for one epoch
    train(train_loader, model, criterion, optimizer, scheduler, device, start, epoch, args)

    # evaluate on validation set
    val_loss, val_dist = validate(valid_loader, model, criterion, device, epoch, args)

    # save checkpoint
    if args.local_rank == 0:
        if best_loss <= val_loss:
            best_loss = val_loss
            save_checkpoint(model, optimizer, epoch, "best_loss", args.save_path)
        if best_dist <= val_dist:
            best_dist = val_dist
            save_checkpoint(model, optimizer, epoch, "best_dist", args.save_path)
```
Excuse me, may I ask: the loss keeps decreasing but the weights are never saved. Is this a problem with the initialization of `best_loss`?
Thank you for pointing it out!
It is a bug :(
We updated the code to use `if best_loss >= val_loss` and `if best_dist >= val_dist`.
Thank you very much!
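To illustrate the fix, here is a minimal, self-contained sketch of the corrected comparison logic. The `maybe_save` helper is hypothetical (introduced here for illustration only; the repository calls `save_checkpoint` directly inside the training loop):

```python
def maybe_save(best_loss, val_loss, best_dist, val_dist):
    """Return the updated bests and which checkpoints should be saved.

    maybe_save is a hypothetical helper used only to demonstrate the fix.
    """
    saved = []
    if best_loss >= val_loss:  # fixed: was `<=`, which never triggered
        best_loss = val_loss
        saved.append("best_loss")
    if best_dist >= val_dist:  # fixed: was `<=`
        best_dist = val_dist
        saved.append("best_dist")
    return best_loss, best_dist, saved

# With the original `<=` test, best_loss starts at 999999999 and any real
# val_loss is smaller, so the branch never ran and no weights were saved.
best_loss, best_dist, saved = maybe_save(999999999, 0.5, 999999999, 1.2)
# saved == ["best_loss", "best_dist"]; subsequent epochs only save on improvement.
```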
------------------ Original message ------------------ From: "qjadud1994/TricubeNet"; Sent: Tuesday, 4 April 2023, 4:17 PM; Subject: Re: [qjadud1994/TricubeNet] Saving weight (Issue #4)