MPRNet
Loss rises to abnormal values after 50 epochs; batch size limited to 2
The loss rises to several million after 50 epochs (before epoch 50 it is normal). Also, why can I only use a batch size of 2 when training on an RTX 3090? Anything larger runs out of memory.
I have the same problem. The device I used is an RTX 3090 Ti. After 200 epochs, both the char loss and the edge loss grow gradually.
I'm in the same situation as you. How can I solve it?
Try clipping the gradient:
torch.nn.utils.clip_grad_norm_(self.net.parameters(), 0.01)
Could you tell me where to put this code?
loss.backward()                                                        # compute gradients
torch.nn.utils.clip_grad_norm_(model_restoration.parameters(), 0.01)   # clip before the update
optimizer.step()                                                       # apply the (clipped) update
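To make the placement concrete, here is a minimal self-contained training step with the clipping call in place. The linear model, Adam optimizer, and L1 loss are stand-ins for illustration only (MPRNet's actual model and losses differ); the point is that `clip_grad_norm_` goes between `loss.backward()` and `optimizer.step()`:

```python
import torch

# Hypothetical stand-ins for the real network, optimizer, and data.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 4)
target = torch.randn(2, 1)

optimizer.zero_grad()
loss = torch.nn.functional.l1_loss(model(x), target)
loss.backward()                             # 1. compute gradients
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 0.01)
                                            # 2. rescale gradients so their
                                            #    total norm is at most 0.01
optimizer.step()                            # 3. apply the clipped update
```

`clip_grad_norm_` returns the total gradient norm measured before clipping, which is useful to log: if it spikes by orders of magnitude around epoch 50, that confirms exploding gradients as the cause of the loss blow-up.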