
kd_loss implementation issue

Open ZaberKo opened this issue 1 year ago • 9 comments

Hello, I found that knowledge_distillation_kl_div_loss() in mmdet/models/losses/kd_loss.py uses a different implementation from the standard KL divergence definition: it is equivalent to F.kl_div(reduction='mean') rather than F.kl_div(reduction='batchmean'), which the F.kl_div documentation points to as the reduction matching the mathematical definition.

kd_loss = F.kl_div(
    F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (
        T * T)

The correct KL divergence should look like

kd_loss = F.kl_div(
    F.log_softmax(pred / T, dim=1), target, reduction='none').sum(1) * (
        T * T)

Is there any reason to use the implementation above? The current kd_loss is 1/17 of the true KL divergence when GFL's reg_max=16 (i.e. reg_max + 1 = 17 distribution bins).
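
For concreteness, here is a minimal, self-contained sketch (not taken from the repo; the batch size, T, and random inputs are just assumptions) showing that .mean(1) scales the per-sample KL down by reg_max + 1, whereas .sum(1).mean() matches reduction='batchmean':

import torch
import torch.nn.functional as F

torch.manual_seed(0)
reg_max, T = 16, 10
pred = torch.randn(8, reg_max + 1)                       # student logits
soft_target = F.softmax(torch.randn(8, reg_max + 1) / T, dim=1)  # teacher distribution

elementwise = F.kl_div(
    F.log_softmax(pred / T, dim=1), soft_target, reduction='none')

loss_mean1 = elementwise.mean(1) * (T * T)   # current implementation
loss_sum1 = elementwise.sum(1) * (T * T)     # per-sample KL divergence

# .mean(1) is exactly .sum(1) divided by the number of bins
print(torch.allclose(loss_mean1, loss_sum1 / (reg_max + 1)))  # True

# .sum(1).mean() reproduces reduction='batchmean'
print(torch.allclose(
    loss_sum1.mean(),
    F.kl_div(F.log_softmax(pred / T, dim=1), soft_target,
             reduction='batchmean') * (T * T)))  # True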

ZaberKo avatar Oct 07 '23 18:10 ZaberKo

I remember that .mean(1) is equal to reduction='batchmean'?

HikariTJU avatar Oct 08 '23 02:10 HikariTJU

I remember that .mean(1) is equal to reduction='batchmean'?

Here is the source code of F.kl_div: https://github.com/pytorch/pytorch/blob/defa0d3a2d230e5d731d5c443c1b9beda2e7fd93/torch/nn/functional.py#L2949-L2958

And the problem here is that kd_loss is subsequently averaged again by the @weighted_loss wrapper, so the end result is an element-wise mean, i.e. F.kl_div(reduction='mean').
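
As a simplified stand-in for that effect (this is not mmdet's actual wrapper code), averaging again after .mean(1) collapses to a plain element-wise mean, i.e. F.kl_div(reduction='mean'):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
pred = torch.randn(8, 17)
target = F.softmax(torch.randn(8, 17), dim=1)

log_p = F.log_softmax(pred, dim=1)
per_sample = F.kl_div(log_p, target, reduction='none').mean(1)

# second averaging step (what a mean reduction over samples would do)
print(torch.allclose(per_sample.mean(),
                     F.kl_div(log_p, target, reduction='mean')))  # True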

ZaberKo avatar Oct 08 '23 02:10 ZaberKo

So batchmean equals .mean(0)?

HikariTJU avatar Oct 08 '23 02:10 HikariTJU

So batchmean equals .mean(0)?

No. "batchmean" means .sum()/batch_size, i.e., .sum(1).mean()

ZaberKo avatar Oct 08 '23 03:10 ZaberKo

OK, I get your point: you mean that mathematically .sum(1) is the correct implementation, and .mean(1) = .sum(1)/17. That's true, but how is it related to batchmean?

HikariTJU avatar Oct 08 '23 03:10 HikariTJU

OK, I get your point: you mean that mathematically .sum(1) is the correct implementation, and .mean(1) = .sum(1)/17. That's true, but how is it related to batchmean?

BTW, I also found that loss_ld uses a weighted sum and is not divided by avg_factor (i.e., the sum of the weights). Is this a typo, or is skipping the normalization intended behavior?
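
For illustration only (this is not the actual loss_ld code; the tensors and names below are hypothetical), the two options under discussion are:

import torch

loss_per_anchor = torch.rand(100)            # hypothetical per-anchor LD loss values
weight = (torch.rand(100) > 0.5).float()     # hypothetical per-anchor weights

weighted_sum = (loss_per_anchor * weight).sum()      # current behavior: weighted sum only
avg_factor = weight.sum().clamp(min=1)               # sum of weights
normalized = weighted_sum / avg_factor               # alternative: normalize by avg_factor

print(weighted_sum, normalized)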

ZaberKo avatar Oct 14 '23 09:10 ZaberKo

FYI: I recorded the ratio avg_factor / (self.reg_max + 1) during training. Maybe it will help this discussion.

(figure: plot of avg_factor / (self.reg_max + 1) over training iterations)

ZaberKo avatar Oct 14 '23 09:10 ZaberKo

It's intended behavior: experiments show that not dividing works better. I don't know the theory behind it, though.

HikariTJU avatar Oct 14 '23 11:10 HikariTJU

It's intended behavior: experiments show that not dividing works better. I don't know the theory behind it, though.

I see, thanks for the reply.

ZaberKo avatar Oct 14 '23 14:10 ZaberKo