pytorch-deeplab-xception
Per-pixel focal loss
Should the focal loss not be applied per pixel? The reduction occurs when computing the cross-entropy loss, but shouldn't it occur at the very end?
criterion = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index,
                                size_average=self.size_average)
logpt = -criterion(logit, target.long())  # logpt is a scalar
pt = torch.exp(logpt)
loss = -((1 - pt) ** gamma) * logpt
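To see the problem concretely, here is a minimal shape check (tensor sizes are illustrative, not from the repo). With the default reduction, logpt is a 0-dim scalar, so the (1 - pt) ** gamma modulation is applied to the batch-averaged loss rather than to each pixel; with reduction="none" it keeps the per-pixel shape:

```python
import torch
import torch.nn as nn

logit = torch.randn(2, 3, 4, 4)          # (N, C, H, W) segmentation logits
target = torch.randint(0, 3, (2, 4, 4))  # (N, H, W) class indices

# default reduction collapses the loss to a scalar
logpt_reduced = -nn.CrossEntropyLoss()(logit, target)
# reduction="none" keeps one loss value per pixel
logpt_pixel = -nn.CrossEntropyLoss(reduction="none")(logit, target)

print(logpt_reduced.shape)  # torch.Size([])
print(logpt_pixel.shape)    # torch.Size([2, 4, 4])
```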
should maybe become
criterion = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index,
                                reduction="none")
logpt = -criterion(logit, target.long())  # logpt is a tensor
pt = torch.exp(logpt)
loss = -((1 - pt) ** gamma) * logpt
if self.size_average:
    loss = loss.mean()
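For reference, the proposed fix can be wrapped into a self-contained module. This is a sketch assuming the snippet above, not the repo's actual class; the gamma, weight, ignore_index, and size_average parameters mirror the names used in this thread:

```python
import torch
import torch.nn as nn

class FocalLoss(nn.Module):
    """Per-pixel focal loss sketch: reduce only after the focal modulation."""

    def __init__(self, gamma=2.0, weight=None, ignore_index=255, size_average=True):
        super().__init__()
        self.gamma = gamma
        self.size_average = size_average
        # reduction="none" keeps one cross-entropy value per pixel
        self.criterion = nn.CrossEntropyLoss(weight=weight,
                                             ignore_index=ignore_index,
                                             reduction="none")

    def forward(self, logit, target):
        logpt = -self.criterion(logit, target.long())  # (N, H, W) tensor
        pt = torch.exp(logpt)
        loss = -((1 - pt) ** self.gamma) * logpt       # per-pixel focal term
        if self.size_average:
            loss = loss.mean()
        return loss

# usage: logits (N, C, H, W), target (N, H, W)
logits = torch.randn(2, 3, 4, 4)
target = torch.randint(0, 3, (2, 4, 4))
loss = FocalLoss()(logits, target)
```

One caveat: with reduction="none", pixels matching ignore_index contribute a loss of 0 but are still counted in the final mean(), which differs slightly from how the reduced CrossEntropyLoss normalizes.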
agree with @RubenS02
Hi, I'm also curious about this issue, and want to know whether this adjustment works successfully?
Hi @ycc66104116, I did not test it. This repo is obsolete now; you can find much better models.