PyTorch details
Hello!
I wrote in the closed issue, but I think it doesn't send notifications anymore. After the recommendations about capping the model's output, the brightness issue is gone in both cases, the [0,1] range and the [-1,1] range. However, in both cases the results after training for a long time are the same: the resulting images look almost exactly like those from a model trained with a regular L1 loss (blurry), and they are not gaining any additional detail.
Could you point me at where I should be looking? I suspect it's something around GPLoss(), but it seems simple enough and similar to the TensorFlow version, so I'm not sure what it could be.
It's hard to guess what could be going wrong there. I think it's better to first try the provided TensorFlow train/test scripts on your data without changing anything.
Also check whether you were in fact calling both the GP and CP losses and adding them together in your PyTorch script.
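For reference, a minimal sketch of what "calling both losses and adding them" could look like in PyTorch. This is not the repository's exact implementation: `spl_similarity`, `gradients`, `gp_loss`, and `cp_loss` are simplified, hypothetical helpers that only capture the general idea (a row/column-normalized correlation applied to raw pixels for CP and to finite-difference gradients for GP).

```python
import torch
import torch.nn.functional as F

def spl_similarity(x, y):
    # Row- and column-wise normalized correlation between two (B, C, H, W)
    # tensors; more similar profiles give a more negative value.
    rows = torch.sum(F.normalize(x, p=2.0, dim=2) * F.normalize(y, p=2.0, dim=2))
    cols = torch.sum(F.normalize(x, p=2.0, dim=3) * F.normalize(y, p=2.0, dim=3))
    return -(rows + cols) / x.size(2)

def gradients(img):
    # Simple finite-difference image gradients (hypothetical helper).
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy

def gp_loss(pred, target):
    # Gradient-profile term: compare the gradient maps of both images.
    dx_p, dy_p = gradients(pred)
    dx_t, dy_t = gradients(target)
    return spl_similarity(dx_p, dx_t) + spl_similarity(dy_p, dy_t)

def cp_loss(pred, target):
    # Color-profile term: compare the images directly.
    return spl_similarity(pred, target)
```

The point of the check above is that the training step should sum both terms, e.g. `total = gp_loss(sr, hr) + cp_loss(sr, hr)`, rather than backpropagating only one of them.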
@victorca25 @ssarfraz I also tried the GP and CP losses, and found that the only difference between the L1 loss and the GP & CP losses is brightness.
trained with L1 loss
trained with GP&CP losses
Maybe on these images. In the paper we show the difference on several applications; e.g., look at the one-to-many pixel mapping problems (label-2-image), example below. Even in the SR case the difference becomes apparent when there is strong unknown degradation in the low-res version.

In my SR experiments, I never achieved results much better than Zheng's either (still using the PyTorch version), although in my case training did converge much faster than with standard L1/L2 losses when starting from scratch. I've incorporated SPL as an optional loss function in my fork of BasicSR, with some minor modifications so the configurable parameters can be set from an external file, here: https://github.com/victorca25/BasicSR/blob/master/codes/models/modules/losses/spl_loss.py . I also started testing CPL + GPL on a style transfer problem, but moved on to some other experiments before getting very far (I'll come back to it a bit later; I also think it could have more potential use there).
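As a rough sketch of what "optional loss with configurable parameters" can mean in a BasicSR-style setup: the SPL term is weighted and toggled by values read from a config file. The `opt` dict, `total_loss`, and the weight names here are hypothetical, not the fork's actual option keys.

```python
import torch
import torch.nn as nn

# Hypothetical options dict; BasicSR-style code reads similar values
# from a JSON/YAML training configuration file.
opt = {"l1_weight": 1.0, "spl_weight": 0.1}

l1_criterion = nn.L1Loss()

def total_loss(sr, hr, spl_fn):
    # Weighted sum of a pixel loss and an optional SPL term; setting
    # spl_weight to 0 in the config disables SPL entirely.
    loss = opt["l1_weight"] * l1_criterion(sr, hr)
    if opt["spl_weight"] > 0:
        loss = loss + opt["spl_weight"] * spl_fn(sr, hr)
    return loss
```

This keeps the loss selection in the config rather than in the code, which is what makes it easy to A/B SPL against plain L1 on the same training script.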