LPIPS Loss producing negative values
Hi,
While running the LPIPS loss based on AlexNet, I obtained a negative value:
import torch
from lpips import LPIPS

a = LPIPS(net="alex", verbose=False)
x = torch.rand(4, 3, 256, 256)
y = torch.rand(4, 3, 256, 256)
z = a(x, y, normalize=True)  # normalize=True: inputs are in [0, 1]
print(z)
While looking at the values contained in res (defined in forward()), I noticed that the implementation does not match Eq. 1 from the paper.
Here's Eq. 1 (the learned channel weights are applied before squaring):

d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \left\lVert w_l \odot (\hat{y}^l_{hw} - \hat{y}^l_{0hw}) \right\rVert_2^2
While this is what is implemented (the difference is squared elementwise first, then the channel weighting is applied):

d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} w_l^\top (\hat{y}^l_{hw} - \hat{y}^l_{0hw})^2
The square operation ** 2 at line 94 should be removed and instead applied to self.lins[kk].model(diffs[kk]) (at lines 98 and 100), and to diffs[kk] (at lines 103 and 105).
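For intuition, here is a minimal, self-contained sketch (NumPy only; the shapes and variable names are illustrative, not taken from the lpips source). It shows that for a single layer the two forms agree whenever the implementation's per-channel weights equal the squares of the paper's weights, so the square-first ordering is not itself the bug, and the result stays non-negative as long as those weights are non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: C channels at H x W spatial positions for one layer.
C, H, W = 8, 4, 4
w = rng.random(C)                      # paper's per-channel weights w_l (>= 0)
d = rng.standard_normal((C, H, W))     # feature difference y_hat - y0_hat

# Eq. 1: mean over (h, w) of || w ⊙ d ||_2^2
eq1 = np.mean(np.sum((w[:, None, None] * d) ** 2, axis=0))

# Square-first form: square elementwise, then a 1x1-conv-style channel
# weighting. It reproduces Eq. 1 exactly when the conv weights are w**2.
impl = np.mean(np.sum((w ** 2)[:, None, None] * (d ** 2), axis=0))

assert np.allclose(eq1, impl)
assert impl >= 0.0  # non-negative whenever the channel weights are >= 0
print(eq1, impl)
```

The takeaway is that a weighted sum of squares can only go negative if one of the learned weights is negative.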
Thanks in advance,
Guillaume
Is there a good workaround for this?
If the code is installed and the weights are loaded properly (and weren't changed by accidentally fine-tuning them, for example), it is not possible to get negative values.
Check that the weights are all non-negative by running the following:

# loss_fn_vgg = lpips.LPIPS(net="vgg")  # or whichever net you instantiated
for ll in range(5):
    print(loss_fn_vgg.lins[ll].model[1].weight.flatten())
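To illustrate why this check is the right diagnostic (a NumPy-only sketch with illustrative shapes, independent of the lpips internals): a channel-weighted average of squared feature differences is guaranteed non-negative when every weight is non-negative, and a single corrupted negative weight is enough to break that guarantee:

```python
import numpy as np

rng = np.random.default_rng(1)
d2 = rng.standard_normal((8, 4, 4)) ** 2   # squared differences, all >= 0

good_w = rng.random(8)        # all weights >= 0 -> score is always >= 0
bad_w = good_w.copy()
bad_w[0] = -10.0              # one corrupted (negative) weight

good = np.mean(np.sum(good_w[:, None, None] * d2, axis=0))
bad = np.mean(np.sum(bad_w[:, None, None] * d2, axis=0))

assert good >= 0.0
print(good, bad)  # with a negative weight, the score can dip below zero
```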
Thank you, this makes perfect sense.