Getting the negative loss in fastflow
Training log (note how `train_loss_step` goes negative from epoch 5, and `train_loss_epoch` from epoch 6, while `pixel_AUROC` stays around 0.97):

```
Epoch 0:  67%|██████▋   | 2/3 [00:14<00:07, 7.09s/it, loss=2.14e+05, train_loss_step=1.94e+5]
Epoch 0: 100%|██████████| 3/3 [00:22<00:00, 7.52s/it, loss=2.14e+05, train_loss_step=1.94e+5, pixel_F1Score=0.163, pixel_AUROC=0.966]
Epoch 1:  67%|██████▋   | 2/3 [00:15<00:07, 7.59s/it, loss=1.84e+05, train_loss_step=1.4e+5, pixel_F1Score=0.163, pixel_AUROC=0.966, train_loss_epoch=2.19e+5]
Epoch 1: 100%|██████████| 3/3 [00:24<00:00, 8.13s/it, loss=1.84e+05, train_loss_step=1.4e+5, pixel_F1Score=0.184, pixel_AUROC=0.972, train_loss_epoch=2.19e+5]
Epoch 2:  67%|██████▋   | 2/3 [00:15<00:07, 7.67s/it, loss=1.58e+05, train_loss_step=9.02e+4, pixel_F1Score=0.184, pixel_AUROC=0.972, train_loss_epoch=1.56e+5]
Epoch 2: 100%|██████████| 3/3 [00:23<00:00, 7.93s/it, loss=1.58e+05, train_loss_step=9.02e+4, pixel_F1Score=0.194, pixel_AUROC=0.973, train_loss_epoch=1.56e+5]
Epoch 3:  67%|██████▋   | 2/3 [00:15<00:07, 7.51s/it, loss=1.33e+05, train_loss_step=4.77e+4, pixel_F1Score=0.194, pixel_AUROC=0.973, train_loss_epoch=1.09e+5]
Epoch 3: 100%|██████████| 3/3 [00:23<00:00, 7.87s/it, loss=1.33e+05, train_loss_step=4.77e+4, pixel_F1Score=0.205, pixel_AUROC=0.974, train_loss_epoch=1.09e+5]
Epoch 4:  67%|██████▋   | 2/3 [00:14<00:07, 7.41s/it, loss=1.11e+05, train_loss_step=1.51e+4, pixel_F1Score=0.205, pixel_AUROC=0.974, train_loss_epoch=6.39e+4]
Epoch 4: 100%|██████████| 3/3 [00:23<00:00, 7.90s/it, loss=1.11e+05, train_loss_step=1.51e+4, pixel_F1Score=0.206, pixel_AUROC=0.973, train_loss_epoch=6.39e+4]
Epoch 5:  67%|██████▋   | 2/3 [00:15<00:07, 7.66s/it, loss=9.02e+04, train_loss_step=-2.12e+4, pixel_F1Score=0.206, pixel_AUROC=0.973, train_loss_epoch=2.4e+4]
Epoch 5: 100%|██████████| 3/3 [00:24<00:00, 8.20s/it, loss=9.02e+04, train_loss_step=-2.12e+4, pixel_F1Score=0.205, pixel_AUROC=0.972, train_loss_epoch=2.4e+4]
Epoch 6:  67%|██████▋   | 2/3 [00:15<00:07, 7.77s/it, loss=7.06e+04, train_loss_step=-5.37e+4, pixel_F1Score=0.205, pixel_AUROC=0.972, train_loss_epoch=-1.29e+4]
Epoch 6: 100%|██████████| 3/3 [00:24<00:00, 8.30s/it, loss=7.06e+04, train_loss_step=-5.37e+4, pixel_F1Score=0.198, pixel_AUROC=0.970, train_loss_epoch=-1.29e+4]
Epoch 6: 100%|██████████| 3/3 [00:24<00:00, 8.30s/it, loss=7.06e+04, train_loss_step=-5.37e+4, pixel_F1Score=0.198, pixel_AUROC=0.970, train_loss_epoch=-4.54e+4]
```
I also noticed `anomaly_maps` with negative values when using Patchcore. Looking at the loss and the anomaly map code, both take `hidden_variables` as input.
https://github.com/openvinotoolkit/anomalib/blob/52bbb0b2417461ff097a819a06cc7ac3ff408149/src/anomalib/models/fastflow/loss.py#L12
```python
loss = torch.tensor(0.0, device=hidden_variables[0].device)  # pylint: disable=not-callable
for hidden_variable, jacobian in zip(hidden_variables, jacobians):
    loss += torch.mean(0.5 * torch.sum(hidden_variable**2, dim=(1, 2, 3)) - jacobian)
return loss
```
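For context, this expression is (up to a dropped constant) the negative log-likelihood under a standard-normal base distribution, `mean(0.5 * ||z||^2 - log|det J|)`, and nothing in it forces the result to be positive: once the accumulated log-determinant outgrows the quadratic term, the sum goes below zero. A minimal sketch of the same expression on made-up tensors (the shapes and the log-det magnitude are assumptions for illustration, not values FastFlow actually produces):

```python
import torch

# Stand-ins for one flow stage: hidden_variable with shape (B, C, H, W),
# jacobian holding per-sample log|det J| with shape (B,). Values are made up.
hidden_variable = torch.randn(4, 8, 16, 16) * 0.1  # well-fit data -> small ||z||^2
jacobian = torch.full((4,), 500.0)                  # large accumulated log-det

# Same expression as the FastFlow loss term quoted above.
loss = torch.mean(0.5 * torch.sum(hidden_variable**2, dim=(1, 2, 3)) - jacobian)
print(loss)  # ~ -490: the log-det term dominates, so the loss is negative
```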
https://github.com/openvinotoolkit/anomalib/blob/52bbb0b2417461ff097a819a06cc7ac3ff408149/src/anomalib/models/fastflow/anomaly_map.py#L21
```python
flow_maps: list[Tensor] = []
for hidden_variable in hidden_variables:
    log_prob = -torch.mean(hidden_variable**2, dim=1, keepdim=True) * 0.5
    prob = torch.exp(log_prob)
    flow_map = F.interpolate(
        input=-prob,
        size=self.input_size,
        mode="bilinear",
        align_corners=False,
    )
    flow_maps.append(flow_map)
flow_maps = torch.stack(flow_maps, dim=-1)
anomaly_map = torch.mean(flow_maps, dim=-1)
```
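A side note on the sign of the map itself: `prob = exp(log_prob)` always lies in (0, 1], so `-prob` lies in [-1, 0) and every pixel of the resulting anomaly map is negative by construction. The negation only flips the ordering so that larger (less negative) values mean lower likelihood, i.e. more anomalous. A quick check on a made-up tensor:

```python
import torch

hidden_variable = torch.randn(1, 8, 16, 16)  # made-up shape
log_prob = -torch.mean(hidden_variable**2, dim=1, keepdim=True) * 0.5
prob = torch.exp(log_prob)  # always in (0, 1]
flow_map = -prob            # always in [-1, 0), hence the negative anomaly maps
print(flow_map.min().item(), flow_map.max().item())
```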
I guess the loss could be negative because of the `- jacobian` term? Or am I missing some (assumed) property of the Jacobian and the other term? However, I don't see why there is that `-prob` (instead of just `prob`) in the anomaly map interpolation, which seems to match the original implementation:
https://github.com/gathierry/FastFlow/blob/2cf1f2f4c562a7f13cfb1959e3afe5df2f2d2565/fastflow.py#L145
```python
anomaly_map_list = []
for output in outputs:
    log_prob = -torch.mean(output**2, dim=1, keepdim=True) * 0.5
    prob = torch.exp(log_prob)
    a_map = F.interpolate(
        -prob,
        size=[self.input_size, self.input_size],
        mode="bilinear",
        align_corners=False,
    )
    anomaly_map_list.append(a_map)
anomaly_map_list = torch.stack(anomaly_map_list, dim=-1)
anomaly_map = torch.mean(anomaly_map_list, dim=-1)
ret["anomaly_map"] = anomaly_map
```
I've encountered an issue where FastFlow outputs negative loss values. Since the loss is essentially the negative log-likelihood, its value should be positive. I suspect that the precise computation in the normalizing flow is not completely invertible (maybe the invertible 1x1 conv?).
Interestingly, even though the NF produces negative values, the anomaly detection performance remains impressively high.
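Worth noting: a negative NLL is not by itself a sign of a broken flow. Unlike discrete likelihoods, a continuous density can exceed 1, so its log can be positive and the NLL negative. A one-line counterexample:

```python
import torch
from torch.distributions import Normal

# A narrow Gaussian has density > 1 near its mean: log_prob > 0, so NLL < 0.
nll = -Normal(loc=0.0, scale=0.1).log_prob(torch.tensor(0.0))
print(nll)  # ~ -1.38
```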