HRAN
How to deal with PSNR: nan?
--template HRAN --model HRAN --scale 2 --patch_size 96 --save HRAN_x2 --ext sep_reset
The training output is:
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
D:\ProgramFiles\Anaconda3\envs\torch3.8\lib\site-packages\torch\optim\lr_scheduler.py:418: UserWarning: To get the last learning rate computed by the scheduler, please use get_last_lr().
warnings.warn("To get the last learning rate computed by the scheduler, "
[1600/16000] [L1: 16.4000] 30.3+17.2s
[3200/16000] [L1: 12.6417] 26.4+6.3s
[4800/16000] [L1: 11.0019] 26.4+6.3s
[6400/16000] [L1: 9.8838] 26.5+6.3s
[8000/16000] [L1: 9.0486] 26.5+6.2s
[9600/16000] [L1: 8.5167] 26.4+6.3s
[11200/16000] [L1: 8.0545] 26.6+6.3s
[12800/16000] [L1: 7.7054] 26.4+6.3s
[14400/16000] [L1: 7.4325] 26.4+6.3s
[16000/16000] [L1: 7.1802] 26.5+6.3s
Evaluation: 0it [00:02, ?it/s] [Set5 x2] PSNR: nan (Best: nan @epoch 1) Forward: 2.77s
Saving...
D:\ProgramFiles\Anaconda3\envs\torch3.8\lib\site-packages\torch\optim\lr_scheduler.py:418: UserWarning: To get the last learning rate computed by the scheduler, please use get_last_lr().
warnings.warn("To get the last learning rate computed by the scheduler, "
Total: 3.91s
[Epoch 2] Learning rate: 1.00e-4
[1600/16000] [L1: 4.8227] 26.8+17.2s
[3200/16000] [L1: 4.7801] 26.7+6.4s
[4800/16000] [L1: 4.7931] 26.7+6.4s
[6400/16000] [L1: 4.8106] 26.6+6.3s
[8000/16000] [L1: 4.7790] 27.9+6.8s
[9600/16000] [L1: 4.7551] 27.1+6.7s
[11200/16000] [L1: 4.7293] 27.7+6.7s
[12800/16000] [L1: 4.7155] 28.4+7.0s
[14400/16000] [L1: 4.7092] 28.0+6.8s
[16000/16000] [L1: 4.6938] 28.2+6.7s
0it [00:00, ?it/s]
Evaluation: 0it [00:04, ?it/s] [Set5 x2] PSNR: nan (Best: nan @epoch 1) Forward: 4.93s
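Note the `0it [00:04, ?it/s]` in the evaluation line: the test loader iterated over zero images, so the average PSNR is computed over an empty set, which yields NaN. A minimal sketch of that failure mode (not the actual HRAN evaluation loop, just the arithmetic):

```python
import math

def average_psnr(psnr_values):
    """Average a list of per-image PSNR values.

    With an empty test set (the "0it" in the log) there is nothing to
    average, and the result is NaN rather than a real score.
    """
    total = 0.0
    count = 0
    for p in psnr_values:
        total += p
        count += 1
    return total / count if count else float("nan")

print(average_psnr([]))            # nan  -> "PSNR: nan" in the log
print(average_psnr([30.0, 32.0]))  # 31.0 -> a normal evaluation
```

So the training loss decreasing normally while PSNR stays NaN is consistent with the test set simply not being found.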
Hi, thank you for your interest.
Please check the test dataset path.
Thanks.
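One quick way to confirm the path problem is to check that the benchmark directory the code is pointed at actually contains images. The layout below (`dataset/benchmark/Set5/HR`) is an assumption based on the common EDSR-style arrangement, not something stated in this thread; adjust it to your own data directory:

```python
import os

def count_benchmark_images(benchmark_dir):
    """Return the number of image files in benchmark_dir, or -1 if the
    directory does not exist at all."""
    if not os.path.isdir(benchmark_dir):
        return -1
    return sum(1 for f in os.listdir(benchmark_dir)
               if f.lower().endswith((".png", ".jpg", ".bmp")))

# Hypothetical EDSR-style path; point this at wherever your Set5 HR
# images actually live.
n = count_benchmark_images(os.path.join("dataset", "benchmark", "Set5", "HR"))
print("directory missing" if n < 0 else f"{n} test images found")
```

If this reports a missing directory or zero images, the evaluation loop will run for 0 iterations and PSNR will be NaN, exactly as in the log above.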
On Wed, 11 Oct 2023, 4:29 pm, renxiaosa00 wrote:
How should the training and test data be organized (directory structure and file naming)?