SwapNet
Training Warp stage stops at epoch 3
Hi,
I ran train.py twice to train the warp stage (`python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion`), but the training does not proceed beyond epoch 3. Could you help me with this issue? I have attached screenshots for reference.
Hi! Sorry I'm not sure what the issue is. I haven't encountered this before.
One thing I noticed is that the execution freezes when the loss values are exactly the same. Have you used callbacks or anything else that stops the training? I went through the code but could not find anything like that.
The identical loss values may just come from the printing format `%.3f`. Please try visualizer.py, line 242: `message += '%s: %.3f '` — if you increase the precision to `%.6f`, it may show the difference between iterations.
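To illustrate why the printed losses can look frozen even when training is progressing, here is a standalone sketch (not the actual visualizer.py code) showing two loss values that only differ past the third decimal place:

```python
# Two hypothetical loss values that differ only in the 6th decimal place.
loss_a, loss_b = 0.1234561, 0.1234569

# With the default '%.3f' format, both print identically.
print('%.3f' % loss_a, '%.3f' % loss_b)   # 0.123 0.123

# '%.6f' reveals that the loss is in fact still changing.
print('%.6f' % loss_a, '%.6f' % loss_b)   # 0.123456 0.123457
```

So identical printed losses alone do not prove the optimizer has stalled; they may just be rounded to the same three decimals.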
As for the training stopping, maybe you can debug and check the values of `opt.start_epoch + 1` and `opt.n_epochs + 1`.
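Assuming the training loop has a shape like `for epoch in range(opt.start_epoch + 1, opt.n_epochs + 1)` (an assumption based on the values suggested above, not a quote from the repo), a small `n_epochs` would exactly explain training ending at epoch 3:

```python
# Hypothetical option values; the names mirror the opt fields mentioned above.
start_epoch = 0
n_epochs = 3

# A loop of this shape runs epochs 1..n_epochs inclusive, then stops.
epochs = list(range(start_epoch + 1, n_epochs + 1))
print(epochs)  # [1, 2, 3] -- training would end after epoch 3
```

If that matches your setup, passing a larger epoch count on the command line (or confirming the default in the options file) would be the first thing to check.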