
Per-pixel focal loss

Open RubenS02 opened this issue 5 years ago • 3 comments

Shouldn't the focal loss be applied per pixel? Currently the reduction happens inside the cross-entropy loss, so the focal weighting is applied to a scalar; shouldn't the reduction happen at the very end instead?

criterion = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index,
                                    size_average=self.size_average)

logpt = -criterion(logit, target.long()) # logpt is a scalar
pt = torch.exp(logpt)
loss = -((1 - pt) ** gamma) * logpt

should maybe become

criterion = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index,
                                    reduction="none") 

logpt = -criterion(logit, target.long()) # logpt is a tensor
pt = torch.exp(logpt)
loss = -((1 - pt) ** gamma) * logpt
if self.size_average:
    loss = loss.mean()
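For reference, the proposed per-pixel variant can be sketched as a standalone function. This is a minimal illustration of the suggestion above, not the repo's actual module; the `gamma` default and `ignore_index` value are assumptions:

```python
import torch
import torch.nn.functional as F

def focal_loss(logit, target, gamma=2.0, ignore_index=255, size_average=True):
    # reduction="none" keeps one loss value per pixel: shape (N, H, W)
    logpt = -F.cross_entropy(logit, target.long(),
                             ignore_index=ignore_index, reduction="none")
    pt = torch.exp(logpt)                 # per-pixel probability of the true class
    loss = -((1 - pt) ** gamma) * logpt   # focal weighting applied per pixel
    if size_average:
        # Note: ignored pixels contribute 0 to this mean but are still counted
        # in the denominator, unlike F.cross_entropy(reduction="mean").
        loss = loss.mean()
    return loss
```

With `gamma=0` and no ignored pixels this reduces to the plain averaged cross-entropy, which is a quick sanity check that the per-pixel weighting is wired up correctly.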

RubenS02 avatar Nov 15 '19 16:11 RubenS02

agree with @RubenS02

liuwenran avatar Dec 17 '19 13:12 liuwenran

Hi, I'm also curious about this issue, and would like to know whether this adjustment works successfully.

ycc66104116 avatar May 31 '22 07:05 ycc66104116

Hi @ycc66104116, I did not test it; this repo is obsolete now and you can find much better models.

RubenS02 avatar Jul 27 '23 19:07 RubenS02