knowledge-distillation-pytorch

experiment result

MrLinNing opened this issue 6 years ago • 17 comments

Hello peterliht, I ran your code according to the instructions without modifying any parameters, but found that the results differ greatly from yours. Which parameters did you modify before releasing the code? Here are my experimental results for ResNet-18: python train.py --model_dir experiments/resnet18_distill/resnext_teacher

My experimental environment is:

python 3.5.2
pytorch 0.4.0
GPU  TITAN Xp

[four screenshots of training logs and accuracy results]

MrLinNing avatar Jul 30 '18 09:07 MrLinNing

Me too: there is a huge gap between my experiment and the author's reported results.

xht033 avatar Aug 28 '18 03:08 xht033

@MrLinNing Did you get experimental results close to the author's?

ChuangbinC avatar Sep 28 '18 06:09 ChuangbinC

I just ran an experiment on CIFAR-10, with the student being a simple LeNet-5-like network (64C - MP - 128C - MP - 400FC - 10) and the teacher being a deeper version (128C - 128C - MP - 128C - 128C - MP - 128C - 128C - 512FC - 10).

The teacher gets to ~93% accuracy, and the student without the KL (distillation) loss reaches ~86.5%. With it, the student gets to 87.5% consistently.

I didn't use this repo's code; I only copied the KL loss function into my own code.
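
For reference, a minimal sketch of that loss in the style this repo follows (soft-target distillation, Hinton et al. 2015); `alpha` and `T` are the usual KD hyperparameters, and the defaults below are illustrative, not the repo's exact values:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.9, T=4.0):
    """Distillation loss: KL divergence on temperature-softened logits
    plus standard cross-entropy on the hard labels."""
    # Soft-target term; the T*T factor restores the gradient magnitude
    # that the temperature scaling shrinks.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (alpha * T * T)
    # Hard-label term on the raw (unsoftened) student logits.
    hard = F.cross_entropy(student_logits, labels) * (1.0 - alpha)
    return soft + hard
```

One version-related subtlety: older PyTorch releases (before `'batchmean'` existed) averaged `KLDivLoss` over all elements rather than per sample, which silently rescales the soft-target term; that alone can shift results across versions.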

michaelklachko avatar Sep 29 '18 23:09 michaelklachko

I found a ~10% gap too: 84%, nowhere near the expected 94.788%. Student net: ResNet-18; teacher net: ResNeXt-29. Parameters are the same as @peterliht's original settings.

xiaowenmasfather avatar Mar 31 '19 08:03 xiaowenmasfather

I also got similar results: train_set: 84.914%, test_set: 83.89%. The teacher model comes from the author's pretrained_teacher_models.zip\pretrained_teacher_models\base_resnext29\. Testing that teacher model on its own gives: train_set: 100%, test_set: 96.23%. Other parameters are consistent with the author's.

wnma3mz avatar Aug 14 '19 08:08 wnma3mz

Looking through another issue thread on the data loader, the accuracy inconsistency might be due to how the student and teacher models got their data when shuffling was used: if the teacher's outputs come from a separately shuffled loader, its soft targets no longer line up with the batches the student is trained on.
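
A minimal sketch of one way to rule that mismatch out (illustrative names: `teacher`, `student`, `train_loader`, and `optimizer` are assumed to exist, and `kd_loss` is the function sketched above) is to query the frozen teacher on the exact batch the student sees, inside the training loop, rather than precomputing teacher outputs from a second shuffled loader:

```python
import torch

teacher.eval()  # frozen teacher: eval mode, no dropout/BN updates
for inputs, labels in train_loader:  # one shared, shuffled DataLoader
    with torch.no_grad():                 # no gradients through the teacher
        teacher_logits = teacher(inputs)  # same batch the student trains on
    student_logits = student(inputs)
    loss = kd_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```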

haitongli avatar Aug 23 '19 22:08 haitongli

Has anyone run and tested this with PyTorch 0.3?

haitongli avatar Aug 23 '19 22:08 haitongli

@peterliht why would you want to use PyTorch 0.3? The current stable version is 1.2.

@wnma3mz @xiaowenmasfather ResNet-18 should get to 94.0% without any teacher. If that's not the case, then you're doing something wrong.

michaelklachko avatar Aug 23 '19 23:08 michaelklachko

@peterliht why would you want to use PyTorch 0.3? The current stable version is 1.2.

@wnma3mz @xiaowenmasfather ResNet-18 should get to 94.0% without any teacher. If that's not the case, then you're doing something wrong.

I understand there is a newer (and more stable) version of PyTorch available. I just wanted to find out whether people have seen different results across PyTorch versions. When I first created this repo 2 years ago, v0.3 was used, as specified in requirements.txt. I want to get a better understanding of the issues that have prevented people from reproducing the results, and see whether fixes can be made against the most stable PyTorch version.

haitongli avatar Aug 23 '19 23:08 haitongli

Hi @michaelklachko, you're right: ResNet-18 with the author's hyperparameters can indeed reach 94%. So my question is: where is the problem? Has anyone else encountered the same problem and can help me?

@peterliht Thanks for your suggestion; I will try it with version 0.3 later.

wnma3mz avatar Aug 24 '19 00:08 wnma3mz

@wnma3mz other threads might also be worth looking into: #9 and #4

haitongli avatar Aug 24 '19 00:08 haitongli

@peterliht Thank you for your prompt reply. I have already seen that issue, and I changed my code according to that comment to ensure the correctness of the distillation.

wnma3mz avatar Aug 24 '19 00:08 wnma3mz

@wnma3mz other threads might also be worth looking into: #9 and #4

I compared the argmax of the teacher's outputs with the labels, and the two disagree with each other. I have submitted a pull request to fix this issue.
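
For anyone who wants to repeat that sanity check, a short sketch (illustrative names: `teacher` and `loader` are assumed): if the teacher's outputs and the labels are aligned, agreement should sit near the teacher's test accuracy (~96% in this thread); near-random agreement (~10% on CIFAR-10) means outputs and labels come from differently ordered batches.

```python
import torch

teacher.eval()
correct, total = 0, 0
with torch.no_grad():
    for inputs, labels in loader:
        preds = teacher(inputs).argmax(dim=1)      # teacher's hard predictions
        correct += (preds == labels).sum().item()  # matches against ground truth
        total += labels.size(0)
print(f"teacher/label agreement: {correct / total:.2%}")
```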

forjiuzhou avatar Sep 12 '19 14:09 forjiuzhou

I met the same accuracy-gap problem. I tried lowering the learning rate and observed an improvement, which brought my results close to peterliht's. You can try changing the learning rate and running the code again.
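
A minimal sketch of that change, assuming a standard PyTorch optimizer is already constructed (the 10x factor is a hypothetical starting point, not a tuned value):

```python
# Scale every parameter group's learning rate down before retraining.
for group in optimizer.param_groups:
    group['lr'] *= 0.1
```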

conditionWang avatar Jul 21 '20 01:07 conditionWang

@wnma3mz other threads might also be worth looking into: #9 and #4

I compared the argmax of the teacher's outputs with the labels, and the two disagree with each other. I have submitted a pull request to fix this issue.

Your pull request (https://github.com/peterliht/knowledge-distillation-pytorch/pull/17) fixes the problem, and I am getting much improved results. I wonder why it has not been merged into master yet!

tianli avatar Jan 19 '21 07:01 tianli

FYI, with pull request #17, I was able to get 95.19% accuracy on ResNet-18 with the ResNeXt-29 teacher.

tianli avatar Jan 19 '21 16:01 tianli

Thanks for all the discussions and the reminder from @tianli about the pull request. I haven't been able to keep track of this repo for a while. #17 has been merged.

haitongli avatar Jan 22 '21 23:01 haitongli