densenet.pytorch

Help needed on reproducing the performance on CIFAR-100

Open wishforgood opened this issue 7 years ago • 3 comments

I used the default settings (which I think correspond to DenseNet-BC with growth rate 12 and data augmentation) on CIFAR-100, changing only the dataset class name and the nClasses variable. The training curve looks like this: [training curve image]. Although training has not finished yet, judging from the training curves of other networks on CIFAR-100 I don't expect any further major changes in accuracy. The highest accuracy so far is 75.59%, which only matches the reported performance of DenseNet-12 (depth 40) with data augmentation. Has anyone tested this repo on CIFAR-100 yet?
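For anyone trying to reproduce this, the change amounts to roughly the following minimal sketch. It assumes the constructor signature densenet.DenseNet(growthRate, depth, reduction, bottleneck, nClasses) exposed by this repo's training script, and the CIFAR-100 normalization statistics are the commonly quoted values, not taken from the repo, so double-check both against your copy:

```python
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

import densenet  # the model definition from this repo

# Commonly used CIFAR-100 channel statistics (an assumption, not from the repo).
normMean = [0.5071, 0.4865, 0.4409]
normStd = [0.2673, 0.2564, 0.2762]

trainTransform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # standard CIFAR augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(normMean, normStd),
])

# Swap the dataset class: CIFAR10 -> CIFAR100.
trainSet = dset.CIFAR100(root='cifar', train=True, download=True,
                         transform=trainTransform)
trainLoader = torch.utils.data.DataLoader(trainSet, batch_size=64, shuffle=True)

# ... and set nClasses=100; the rest stays DenseNet-BC, depth 100, growth rate 12.
net = densenet.DenseNet(growthRate=12, depth=100, reduction=0.5,
                        bottleneck=True, nClasses=100)
```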

wishforgood avatar Dec 20 '17 02:12 wishforgood

[Final training curve image] No changes in the end.

wishforgood avatar Dec 20 '17 06:12 wishforgood

Hi @wishforgood, I have tried another reimplementation and met the same problem. The error rate on CIFAR-10 with DenseNet-40 (non-BC) is only 6.0% (5.24% is reported in this repo), but when I test it in TensorFlow it is about 5.4%. I think it is caused by PyTorch rather than by the model. Have you solved this yet?
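For reference, the configuration I mean by DenseNet-40 (non-BC) would look roughly like this in terms of this repo's constructor. This is only a sketch under the assumption that bottleneck=False with reduction=1.0 gives the plain, uncompressed DenseNet:

```python
import densenet

# Plain DenseNet-40 (k=12): no bottleneck layers, no transition compression
# (reduction=1.0). This mapping onto the repo's constructor is an assumption.
net = densenet.DenseNet(growthRate=12, depth=40, reduction=1.0,
                        bottleneck=False, nClasses=10)
```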

ZhenyF avatar Jun 14 '18 23:06 ZhenyF

Not yet; in the end I decided to try other models such as Wide-ResNet.

wishforgood avatar Jun 15 '18 01:06 wishforgood