
KL div v/s xentropy

Open arunmallya opened this issue 7 years ago • 2 comments

The Hinton distillation paper states: "The first objective function is the cross entropy with the soft targets and this cross entropy is computed using the same high temperature in the softmax of the distilled model as was used for generating the soft targets from the cumbersome model. The second objective function is the cross entropy with the correct labels."

In https://github.com/szagoruyko/attention-transfer/blob/master/utils.py#L13-L15, the first objective function is computed using `kl_div`, which is different from `cross_entropy`:

- `kl_div` computes `- \sum t log(x/t)`
- `cross_entropy` computes `- \sum t log(x)`

In general, `cross_entropy` = `kl_div` + `entropy(t)`.
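For concreteness, here is a minimal numerical sketch of that identity (not taken from the repo; it assumes PyTorch's `F.kl_div` with a log-probability input and probability targets, and the temperature and tensors are made up):

```python
import torch
import torch.nn.functional as F

T = 4.0                                            # illustrative softmax temperature
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)

log_p = F.log_softmax(student_logits / T, dim=1)   # log x
t = F.softmax(teacher_logits / T, dim=1)           # soft targets t

kl = F.kl_div(log_p, t, reduction='batchmean')     # sum t * (log t - log x), averaged over the batch
xent = -(t * log_p).sum(dim=1).mean()              # - sum t * log x
ent = -(t * t.log()).sum(dim=1).mean()             # entropy(t)

print(torch.allclose(xent, kl + ent))              # True: cross_entropy = kl_div + entropy(t)
```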

Did I misunderstand something, or did you use a slightly different loss in your implementation?

arunmallya · Oct 31 '17 22:10

Technically, the loss would be different, but the gradients would be correct, as entropy(t) does not contribute to the gradient w.r.t. x.
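A quick sketch of that point (again assuming PyTorch, with made-up logits): since entropy(t) is constant with respect to the student logits, the two losses produce identical gradients.

```python
import torch
import torch.nn.functional as F

T = 4.0
teacher_logits = torch.randn(8, 10)
t = F.softmax(teacher_logits / T, dim=1).detach()   # fixed soft targets

logits_a = torch.randn(8, 10, requires_grad=True)
logits_b = logits_a.clone().detach().requires_grad_(True)

# KL-divergence loss (as in utils.py) vs. soft-target cross entropy
kl = F.kl_div(F.log_softmax(logits_a / T, dim=1), t, reduction='batchmean')
xent = -(t * F.log_softmax(logits_b / T, dim=1)).sum(dim=1).mean()

kl.backward()
xent.backward()
print(torch.allclose(logits_a.grad, logits_b.grad))  # True: same gradients w.r.t. the student logits
```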

arunmallya · Oct 31 '17 22:10

So long as the target probabilities are not changing, the two losses are equivalent (they differ only by a constant).

DanteLuo · Jul 16 '18 08:07