
Thank you for your time, author. After running the code I got the following error

Open yangtutuaka opened this issue 1 year ago • 3 comments

```
(base) C:\YCRS_DATA\YCR_Code\pytorch-saltnet-master>python train.py --vtf --pretrained imagenet --loss-on-center --batch-size 32 --optim adamw --learning-rate 5e-4 --lr-scheduler noam --basenet senet154 --max-epochs 250 --data-fold fold0 --log-dir runs/fold0 --resume runs/fold0/checkpoints/last-checkpoint-fold0.pth
Load dataset list_train0_3600: 100%|█████████████████████████████████████████| 3599/3599 [00:03<00:00, 1018.55images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 974.61images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 994.73images/s]
use cuda
N of parameters 827
resuming a checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth'
```

```
Warning the checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth' doesn't exist! training from scratch!
```

```
logging into runs/fold0
training unet...
  0%|          | 0/250 [00:00<?, ?it/s]
C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
```

yangtutuaka avatar Jun 07 '23 12:06 yangtutuaka

I've tried many fixes, but none of them work QAQ

yangtutuaka avatar Jun 07 '23 12:06 yangtutuaka

This code was written for PyTorch 0.4. In newer versions, `optimizer.step()` must be called before `lr_scheduler.step()`. However, this seems to be only a warning; the code should still run the same.
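The required ordering can be sketched with a minimal training loop (hypothetical model and data, not the repository's actual `train.py`):

```python
import torch

# Minimal sketch of the step ordering required by PyTorch >= 1.1:
# optimizer.step() first, then lr_scheduler.step().
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(2):
    x = torch.randn(8, 4)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # update the weights first ...
    scheduler.step()   # ... then advance the learning-rate schedule

# lr was halved once per epoch: 5e-4 -> 2.5e-4 -> 1.25e-4
print(optimizer.param_groups[0]["lr"])
```

Calling the two steps in the reverse order only skips the first value of the schedule, which is why PyTorch emits a warning instead of an error.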

xuyuan avatar Jun 08 '23 07:06 xuyuan

Thank you for your time. I had also found earlier that this was the problem, and I tried to fix it (I'm not very good at modifying code): I added `optimizer.step()  # call optimizer.step() first` after line 357 of train.py, but it didn't help. The run keeps getting stuck at this step and never starts computing.

```
logging into runs/fold0
training unet...
  0%|          | 0/250 [00:00<?, ?it/s]
C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
```

The output is still the same, and I don't know where the problem is.

yangtutuaka avatar Jun 08 '23 07:06 yangtutuaka