
Unable to reproduce the accuracy of WRN-28-10 on CIFAR-100

Open wishforgood opened this issue 7 years ago • 23 comments

I git cloned the code and ran it with the command suggested by the README. However, the top-1 accuracy stopped at 76% after 160 epochs. I've seen the learning curve in the paper and found that my model failed to reach 65% accuracy before 60 epochs; instead, it stayed around 6 points lower. Could you please give some suggestions on debugging?

wishforgood avatar Dec 17 '17 09:12 wishforgood

Hi, thanks for visiting my repository.

Can I get details about your configuration, such as the meanstd value and dropout rate you've adopted during training?

That will help me a lot in looking into the problem. Thanks :)

Sincerely, Bumsoo Kim


bmsookim avatar Dec 18 '17 08:12 bmsookim

I haven't changed the meanstd value or the dropout rate. The meanstd values are (0.5071, 0.4867, 0.4408) and (0.2675, 0.2565, 0.2761), and the dropout rate is 0.3, both just as you configured them in the code. The learning rate schedule is also unchanged. I used a Tesla K40c to run the code; each epoch took about 10 minutes, which seems quite slow.
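
For reference, here is a minimal sketch of how I understand those meanstd values are applied in a torchvision pipeline (the crop/flip augmentations are my assumption based on standard CIFAR practice, not copied from the repository):

```python
import torchvision.transforms as transforms

# Standard CIFAR-100 training transforms using the meanstd values quoted above.
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # assumed augmentation
    transforms.RandomHorizontalFlip(),      # assumed augmentation
    transforms.ToTensor(),
    transforms.Normalize((0.5071, 0.4867, 0.4408),
                         (0.2675, 0.2565, 0.2761)),
])
```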

wishforgood avatar Dec 18 '17 08:12 wishforgood

I'll run the code tonight and try to figure out if anything is wrong within the code.

Thanks for letting me know :) I'll reply soon!

bmsookim avatar Dec 18 '17 08:12 bmsookim

That will be great, thank you very much!

wishforgood avatar Dec 18 '17 08:12 wishforgood

Hi, could you please show me your training curve?

wishforgood avatar Dec 20 '17 03:12 wishforgood

Hi, sorry for the late response.

I've tested my model on multiple GPUs (two Titan X's) and on a single GPU (a GTX 1070) over the last two days.

To get straight to the result: the best accuracy after 200 epochs reached 79.73% and 80.05%, respectively.

If you need specific logs of the training process, I'll start training a new model right away and upload the training log in a separate folder.

Since you haven't changed any of the configuration in the repository, I'll double-check the model five more times in various environments. As each training run takes about 15 hours (on a single GPU), it will unfortunately take some time. Will that be OK for you?

Sincerely, Bumsoo Kim

bmsookim avatar Dec 21 '17 06:12 bmsookim

Thanks very much! I really appreciate it! It's OK, I can wait. I will also git clone and run it again to make sure I'm following the configuration.

wishforgood avatar Dec 21 '17 06:12 wishforgood

I have run the default configuration again and confirmed that I can't reproduce the reported accuracy.

wishforgood avatar Dec 25 '17 01:12 wishforgood

Hi, I've finally confirmed the result. I attached the log as a text file. The final accuracy is 80.46%, which I think corresponds to the reported accuracy. May I see the log of your training and validation results? wide_resnet_log.txt

bmsookim avatar Dec 26 '17 07:12 bmsookim

Sorry, I forgot to save my log, but I do see that up to epoch 121 everything is almost the same as yours; after that, the accuracy just stopped at 76%. I will have to run it again to show you my log. I will check more carefully to find where the problem is, so could you please wait a few days? Thank you very much for your log!
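
For reference, the learning rate schedule as I understand it (a hedged sketch following the WRN paper: initial lr 0.1, multiplied by 0.2 after epochs 60, 120, and 160; the function name and exact breakpoints are my assumptions, not copied from the repository):

```python
import math

# Step decay as in the WRN paper: lr = init_lr * 0.2**k,
# where k increments after epochs 60, 120, and 160.
def learning_rate(init_lr, epoch):
    if epoch > 160:
        factor = 3
    elif epoch > 120:
        factor = 2
    elif epoch > 60:
        factor = 1
    else:
        factor = 0
    return init_lr * math.pow(0.2, factor)
```

If the schedule matches, the accuracy jump around epoch 121 in both logs would simply be the 0.2 decay kicking in at epoch 120.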

wishforgood avatar Dec 26 '17 07:12 wishforgood

Of course! Take your time :) I would appreciate it a lot if you pointed out any inconsistencies or inconveniences in my code. I have a lot to learn about PyTorch, so any kind of recommendation will help me a lot.

Thanks.

bmsookim avatar Dec 26 '17 07:12 bmsookim

log.txt

wishforgood avatar Dec 29 '17 06:12 wishforgood

Hi, here is my training log. Is there any problem with it?

wishforgood avatar Dec 29 '17 06:12 wishforgood

Hi, I've looked into the log, and it seems like you did everything right.

I'm currently going through all the code again, since it has been a while. I also have a Torch version of this code, so I will look into everything that might have gone wrong and hopefully give you an answer.

Thanks for your patience :)

bmsookim avatar Jan 03 '18 05:01 bmsookim

Hi, have you found any possible cause of this bug?

wishforgood avatar Jan 17 '18 09:01 wishforgood

Hi, I found that the problem might be caused by the Dropout function. I'm currently looking into it; it seems to show unusual fluctuations compared to the same code in Torch.

bmsookim avatar Jan 19 '18 04:01 bmsookim

Hi, sorry for the late response.

As a matter of fact, I found that the 'Dropout' layer was unable to reproduce the performance I had obtained in Torch.

I'm figuring this out, and will let you know as soon as I update the code.

Sorry for all the trouble. Thank you very much.

Sincerely, Bumsoo Kim


bmsookim avatar Jan 24 '18 07:01 bmsookim

It's OK, take your time.

wishforgood avatar Jan 24 '18 07:01 wishforgood

Hi Bumsoo, we're trying to get a pretrained CIFAR-100 net to use for our research. Would you be willing to upload your parameters to GitHub? Thanks!

wronnyhuang avatar Jan 30 '18 21:01 wronnyhuang

@wishforgood Maybe you need to reduce the training batch size (in the config file) from 128 to something smaller, like 32, if your GPU memory is much lower than the Titan X's.
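
For illustration, the change might look like this (the variable name is an assumption; check the actual config file):

```python
# Hypothetical config entry; the real name in the config file may differ.
batch_size = 32  # reduced from 128 to fit in smaller GPU memory
```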

morawi avatar Apr 06 '18 15:04 morawi

@wronnyhuang Will do shortly :( Sorry, everyone, that it's taking such a long time.

bmsookim avatar Apr 11 '18 08:04 bmsookim

The problem is probably with using self.dropout the same way in both train and eval. Typically, people call F.dropout in the forward function and pass self.training as an argument.

I was able to reproduce the accuracy using this code, which uses F.dropout: https://github.com/xternalz/WideResNet-pytorch/blob/master/wideresnet.py.
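
For concreteness, a minimal sketch of that pattern (my own illustration of a wide-basic block, not the repository's exact code):

```python
import torch.nn as nn
import torch.nn.functional as F

class WideBasic(nn.Module):
    def __init__(self, in_planes, planes, dropout_rate, stride=1):
        super(WideBasic, self).__init__()
        self.dropout_rate = dropout_rate
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1)
        # Projection shortcut when the spatial size or channel count changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride))

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        # self.training follows model.train()/model.eval(), so dropout is
        # active only during training and is a no-op at evaluation time.
        out = F.dropout(out, p=self.dropout_rate, training=self.training)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + self.shortcut(x)
```

With a module held in self.dropout, you have to make sure model.eval() actually propagates to it; the functional form makes the train/eval dependence explicit at the call site.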

fartashf avatar Aug 02 '18 15:08 fartashf

@fartashf I agree with you. After modifying the code related to Dropout, I got 80% top-1 accuracy.

hgjung3 avatar Aug 15 '18 14:08 hgjung3