TensoRF
Wrong PSNR value due to the in-place operator?
Hi, @apchenstu
Thank you for sharing this really great idea! It's been very helpful for me in developing other ideas related to Radiance Fields.
By the way, I just noticed that you might get a wrong PSNR value because of the in-place operator (`+=`).
You've assigned the image loss variable `loss` to `total_loss`, and then increased it with the `+=` operator. This modifies the original image loss variable `loss` in place, resulting in a wrong PSNR; in fact, smaller than the true value.
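For anyone curious, here is a minimal sketch of the aliasing effect, using NumPy as a stand-in for the PyTorch tensors in train.py (the in-place semantics are the same) and made-up loss values purely for illustration:

```python
import numpy as np

# Made-up loss values, only to illustrate the aliasing effect.
loss = np.array(0.0625)       # image (MSE) loss
total_loss = loss             # no copy: both names refer to the same memory
total_loss += 0.0625          # in-place add of a regularization term

# The image loss has been silently modified as well:
print(loss)                   # prints 0.125, so a PSNR computed from it is too low
```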
You can check this in my Colab.
Thank you, Sangmin Kim.
https://github.com/apchenstu/TensoRF/blob/17deeedae5ab4106b30a3295709ec3a8a654c7b1/train.py#L190-L198
Hi, I think this only affects training PSNR, right?
Hi, @Derry-Xing Yes, it does not affect the quality of the output images, only the reported PSNR values. I just want to check whether the PSNR reported in the paper is correct.
How come it affects the PSNR? The PSNR calculation is
PSNRs.append(-10.0 * np.log(loss) / np.log(10.0))
and it uses the variable `loss`, not `total_loss`. Or did I miss something?
At line 190, `total_loss` shares the same memory as `loss`, so any in-place operation on `total_loss` also modifies the image loss term (`loss` in this code).
Thanks, I've now changed it to
psnrloss = loss.clone().detach().item()
...
PSNRs.append(-10.0 * np.log(psnrloss) / np.log(10.0))
Is it correct now?
Yes, it seems good.
But I think you can also just avoid the in-place operator; i.e. total_loss = total_loss + …
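For reference, a minimal sketch of why the non-in-place form avoids the problem (with NumPy standing in for the PyTorch tensors and made-up values):

```python
import numpy as np

# Made-up values, only to illustrate why `total_loss = total_loss + reg`
# is safe while `total_loss += reg` is not.
loss = np.array(0.0625)
total_loss = loss                    # still an alias at this point
total_loss = total_loss + 0.0625    # builds a NEW array; `loss` is untouched

print(loss)                          # prints 0.0625: the image loss survives
print(total_loss)                    # prints 0.125
```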