
LPIPS Loss producing negative values

Open GuillaumeRochette opened this issue 4 years ago • 7 comments

Hi,

While running the LPIPS loss based on AlexNet, I obtained a negative value:

import torch
from lpips import LPIPS

a = LPIPS(net="alex", verbose=False)
x = torch.rand(4, 3, 256, 256)
y = torch.rand(4, 3, 256, 256)
# normalize=True maps inputs from [0, 1] to the [-1, 1] range the network expects
z = a(x, y, normalize=True)
print(z)

While looking at the values contained in res (defined in forward()), I noticed that the implementation does not match Eq. 1 from the paper.

Here's Eq. 1 from the paper, with the weights applied inside the squared norm:

d(x, x0) = Σ_l 1/(H_l·W_l) · Σ_{h,w} ‖ w_l ⊙ (ŷ_hw^l − ŷ0_hw^l) ‖₂²

While this is what is implemented, with the weights applied to the already-squared difference:

d(x, x0) = Σ_l 1/(H_l·W_l) · Σ_{h,w} w_lᵀ (ŷ_hw^l − ŷ0_hw^l)²

The square operation ** 2 at line 94 should be removed and instead applied to self.lins[kk].model(diffs[kk]) (at lines 98 and 100), and to diffs[kk] (at lines 103 and 105).
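The discrepancy is easy to check numerically. Here is a minimal sketch (the weight and difference values are made up for illustration, not taken from a trained model): weighting before squaring and weighting the squared difference give different values, though both stay non-negative whenever the weights are non-negative.

```python
import torch

# Hypothetical per-channel weights (non-negative, as LPIPS weights should be)
w = torch.tensor([0.5, 2.0, 0.0])
# Hypothetical unit-normalized feature difference at one spatial location
diff = torch.tensor([0.1, -0.3, 0.7])

# Eq. 1 from the paper: weight the difference, then square
eq1 = (w * diff).pow(2).sum()

# What the code computes: square the difference, then weight
impl = (w * diff.pow(2)).sum()

print(eq1.item(), impl.item())  # different values, but both >= 0
```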

Thanks in advance,

Guillaume

GuillaumeRochette avatar Jul 10 '21 10:07 GuillaumeRochette

Is there a good workaround for this?

markdjwilliams avatar Dec 08 '22 19:12 markdjwilliams

If the code is installed and the weights are loaded properly (and weren't accidentally changed, by fine-tuning them, for example), it is not possible to get negative values.

Check that the weights are all non-negative by doing the following:

for ll in range(5):
    print(loss_fn_vgg.lins[ll].model[1].weight.flatten())
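If a negative weight does turn up, clamping it back to zero restores the non-negativity that makes the distance >= 0. A self-contained sketch, using a plain 1x1 convolution as a stand-in for one of the lins[ll].model[1] layers (a real LPIPS instance would be clamped the same way):

```python
import torch
import torch.nn as nn

# Stand-in for one learned linear layer (lins[ll].model[1] in LPIPS):
# a 1x1 convolution whose weights may have gone negative, e.g. after fine-tuning.
lin = nn.Conv2d(64, 1, kernel_size=1, bias=False)

# Clamp in place so every weight is >= 0 again.
with torch.no_grad():
    lin.weight.clamp_(min=0)

print((lin.weight >= 0).all().item())  # True
```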

richzhang avatar Dec 08 '22 20:12 richzhang

Thank you, this makes perfect sense.

markdjwilliams avatar Dec 23 '22 01:12 markdjwilliams