
Accuracy drop using pretrained NIN model.

Open analog75 opened this issue 6 years ago • 3 comments

Hi. Thank you for uploading this. I downloaded the pretrained NIN CIFAR-10 model, loaded it, and ran the inference evaluation. The accuracy is only 59.36%. For this evaluation I made no modifications; the model and data were used exactly as described in the README. However, when I resume training from the pretrained model, the accuracy is very close to the reported best accuracy. It seems there may be some dependency between train and test in your code. Is there a trick or solution for this? Thanks.

[screenshot: evaluation log showing the 59.36% accuracy]
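For anyone reproducing this, a common cause of a pretrained model scoring far below its reported accuracy is evaluating without switching to eval mode, so BatchNorm keeps using per-batch statistics. A minimal sketch of a correct evaluation setup (the `TinyNet` class here is a hypothetical stand-in for the repo's NIN model, and the checkpoint path is illustrative):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repo's NIN model (the real class lives in the
# XNOR-Net-PyTorch repository); it only needs a BatchNorm layer to show the point.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1),
            nn.BatchNorm2d(8),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 10)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyNet()
# Loading the pretrained checkpoint would look roughly like:
#   state = torch.load('nin.best.pth.tar', map_location='cpu')
#   model.load_state_dict(state['state_dict'])

model.eval()  # crucial: BatchNorm now uses its running statistics
with torch.no_grad():
    logits = model(torch.randn(4, 3, 32, 32))  # fake CIFAR-10-shaped batch
print(logits.shape)  # torch.Size([4, 10])
```

If `model.eval()` is skipped, BatchNorm normalizes each test batch with that batch's own statistics, which can easily produce accuracy numbers like those shown above.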

analog75 avatar Sep 26 '19 08:09 analog75

@analog75 Hello, have you solved your problem? I also obtained a lower accuracy using NIN. Thank you

Lanweichao avatar Apr 06 '20 09:04 Lanweichao

Hi. I solved this. The code uses unseeded randomness; it should be changed to pseudo-random behavior (i.e., a fixed seed) so that evaluation is deterministic.

In addition, batch normalization can lower your accuracy, which is natural.
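Reading the comment above as "fix the RNG seed so runs are reproducible", a minimal sketch of seeding all the usual sources of randomness in a PyTorch project (the `set_seed` helper is an assumption, not a function from the repo):

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 1) -> None:
    """Fix all common RNG sources so results are reproducible run-to-run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines

# Demonstration: the same seed yields the same "random" tensor.
set_seed(1)
a = torch.rand(3)
set_seed(1)
b = torch.rand(3)
print(torch.equal(a, b))  # True
```

With a fixed seed, any randomness in data ordering or initialization no longer varies between runs, which makes it possible to tell a genuine accuracy drop apart from run-to-run noise.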


analog75 avatar Apr 09 '20 02:04 analog75


I had the same issue. Test set: Average loss: 9.3183, Accuracy: 1003/10000 (10.03%), while the reported best accuracy is 86.28%.

How can this issue be solved?

ngocqn avatar Sep 20 '21 08:09 ngocqn