DAN
group["initial_lr"]: unsupported operand type(s) for *: 'NoneType' and 'int'
```
Traceback (most recent call last):
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 349, in <module>
    main()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 207, in main
    model = create_model(opt)  # load pretrained model of SFTMD
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\__init__.py", line 17, in create_model
    m = M(opt)
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\blind_model.py", line 90, in __init__
    lr_scheduler.MultiStepLR_Restart(
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 27, in __init__
    super(MultiStepLR_Restart, self).__init__(optimizer, last_epoch)
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 77, in __init__
    self.step()
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 154, in step
    values = self.get_lr()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 34, in get_lr
    return [
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 35, in <listcomp>
    group["initial_lr"] * weight for group in self.optimizer.param_groups
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
I tried both PyTorch 1.11 and 1.5.
The training .yml file doesn't set the learning rate of the estimator; you need to add it to the .yml file, which is the reason for the 'NoneType'.
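The failure mode can be sketched without PyTorch: when a key is missing from the parsed config, `dict.get` returns `None`, which propagates into the optimizer's param groups and only blows up later in the scheduler's `get_lr` list comprehension. A minimal reproduction (the dict keys here are illustrative, not the exact DANv2 config structure):

```python
# Hypothetical parsed "train" section of the .yml file; lr_E is missing.
train_opt = {"lr_G": 2e-4}

# The missing key silently yields None instead of raising immediately.
lr_E = train_opt.get("lr_E")  # -> None

# None ends up stored as "initial_lr" in an optimizer param group...
param_groups = [{"initial_lr": lr_E}]
weight = 1

# ...and the error only surfaces when the scheduler multiplies it.
try:
    lrs = [group["initial_lr"] * weight for group in param_groups]
except TypeError as e:
    print("TypeError:", e)
```

This is why the traceback points at `lr_scheduler.py` even though the actual mistake is in the config file.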
Then how do I set it?
Hello, did you solve it? Can you help me?
Sorry, but I wasn't able to solve it.
Hello, you must assign a value for `lr_E` in the `train` section of the .yml file (e.g. `lr_E: !!float 4e-4`).
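For reference, the relevant part of the training .yml could look like the sketch below. Only `lr_E: !!float 4e-4` comes from the advice above; the surrounding keys and values are illustrative assumptions, so match them to your own DANv2 setting file:

```yaml
#### training settings (sketch; adapt to your own setting file)
train:
  lr_G: !!float 2e-4   # assumed learning rate key for the restorer
  lr_E: !!float 4e-4   # the missing estimator learning rate causing the NoneType error
```

After adding `lr_E`, the optimizer's `initial_lr` is a float instead of `None`, and `MultiStepLR_Restart.get_lr` can multiply it without raising `TypeError`.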