CycleGAN-PyTorch
"The model is not compiled" when trying to resume training
```
Traceback (most recent call last):
File "...CycleGAN-PyTorch/train.py", line 594, in <module>
main()
File ".../CycleGAN-PyTorch/train.py", line 104, in main
g_A_model, ema_g_A_model, start_epoch, g_optimizer, g_scheduler = load_resume_state_dict(
^^^^^^^^^^^^^^^^^^^^^^^
File ".../CycleGAN-PyTorch/utils.py", line 137, in load_resume_state_dict
model = load_state_dict(model, compile_state, state_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../CycleGAN-PyTorch/utils.py", line 60, in load_state_dict
raise RuntimeError("The model is not compiled. Please use `model = torch.compile(model)`.")
RuntimeError: The model is not compiled. Please use `model = torch.compile(model)`.
```
Config:

```
LOAD_RESUME: True
RESUME_G_A_MODEL_WEIGHTS_PATH: ./results/CycleGAN-apple2orange/g_A_best.pth.tar
RESUME_G_B_MODEL_WEIGHTS_PATH: ./results/CycleGAN-apple2orange/g_B_best.pth.tar
RESUME_D_A_MODEL_WEIGHTS_PATH: ./results/CycleGAN-apple2orange/d_A_best.pth.tar
RESUME_D_B_MODEL_WEIGHTS_PATH: ./results/CycleGAN-apple2orange/d_B_best.pth.tar
```
Oh, I found the cause of this problem: the keys in the saved state dict carry the `_orig_mod.` prefix, which `torch.compile` adds when the model is compiled before saving. As a temporary workaround, wrap the model with `model = torch.compile(model)` before loading the checkpoint.
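An alternative workaround, if you would rather not compile the model just to load the weights, is to strip the `_orig_mod.` prefix from the checkpoint keys before calling `load_state_dict`. A minimal sketch (the helper name `strip_compile_prefix` is hypothetical, not part of the repo):

```python
# Hypothetical helper: remove the "_orig_mod." prefix that torch.compile
# adds to every parameter key, so a checkpoint saved from a compiled
# model can be loaded into an uncompiled one.
PREFIX = "_orig_mod."

def strip_compile_prefix(state_dict):
    """Return a new state dict with any leading '_orig_mod.' removed from keys."""
    return {
        (key[len(PREFIX):] if key.startswith(PREFIX) else key): value
        for key, value in state_dict.items()
    }

# Typical usage (sketch, assuming the checkpoint stores weights under "state_dict"):
#   checkpoint = torch.load("g_A_best.pth.tar", map_location="cpu")
#   model.load_state_dict(strip_compile_prefix(checkpoint["state_dict"]))
```

This keeps the on-disk checkpoint unchanged and only rewrites the keys in memory, so the same file still works for resuming a compiled run.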