pytorch-be-your-own-teacher
Question for your paper MSD
Hi, thanks for sharing this code; it's really helpful.
Recently I read your paper "MSD: Multi-Self-Distillation Learning via Multi-classifiers within Deep Neural Networks". It's very interesting work, and the results are much better than those of the paper "Be Your Own Teacher", which you reimplement here.
However, after reading your paper, I could only find some slight differences between these two papers: 1. the differences in the bottleneck of the model; 2. some changes to hyper-parameters.
Are there some important details that I missed? Could you please tell me the key difference between the two papers that leads to such a significant improvement?
MSD is a bi-directional KD (mutual learning) framework, while "Be Your Own Teacher" is a one-way knowledge distillation method.
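To illustrate the distinction above, here is a minimal, dependency-free sketch (not taken from either paper's code; the logits and temperature are made-up values) of how the distillation loss differs: one-way self-distillation only pushes a shallow classifier toward the deepest one, while mutual learning adds the reverse KL term as well.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax, as commonly used in distillation.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    # KL(p || q) for two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits from the deepest classifier and a shallow side branch.
deep_logits = [2.0, 0.5, -1.0]
shallow_logits = [1.0, 1.0, -0.5]
T = 3.0

p_deep = softmax(deep_logits, T)
p_shallow = softmax(shallow_logits, T)

# "Be Your Own Teacher" style: one-way KD, the deepest head
# teaches the shallow head only.
one_way_loss = kl(p_deep, p_shallow)

# MSD-style mutual (bi-directional) learning: each classifier
# also distills knowledge back in the other direction.
mutual_loss = kl(p_deep, p_shallow) + kl(p_shallow, p_deep)
```

In a real training loop both terms would be scaled (e.g. by T**2) and combined with the usual cross-entropy losses of each classifier; this sketch only shows the direction of the distillation signal.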
Hi, recently I read the paper "MSD". Can you provide the code for this paper? Thanks.