pytorch-randaugment
Cannot reproduce the results (both the paper's and yours)
I reran your code and the results don't look as good as reported. My CIFAR-10 (Wide-ResNet 28x10) result:
{
"loss_train": 0.46477456643031195,
"loss_valid": 0.0,
"loss_test": 0.10433908870220185,
"top1_train": 0.8350560897435897,
"top1_valid": 0.0,
"top1_test": 0.9663,
"top5_train": 0.971133814102564,
"top5_valid": 0.0,
"top5_test": 0.9991,
"epoch": 200
}
Could you tell me about your running environment, e.g. how many GPUs and which GPU model you used? And one more question: why is the top1_train accuracy so low?
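For reference, a minimal sketch like the one below (my own, not something from this repo) is what I would run to report the environment when comparing runs:

```python
# Minimal environment report (not from the repo): PyTorch/CUDA/cuDNN versions,
# GPU count and device names, so different runs can be compared.
import torch

print("torch:", torch.__version__)
print("cuda:", torch.version.cuda, "| cudnn:", torch.backends.cudnn.version())
print("gpus:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  gpu {i}:", torch.cuda.get_device_name(i))
```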
I have almost the same result as yours, @wizcheu:
{
"loss_train": 0.4614731064209571,
"loss_valid": 0.0,
"loss_test": 0.09948565063476562,
"top1_train": 0.8370192307692308,
"top1_valid": 0.0,
"top1_test": 0.9682,
"top5_train": 0.9710336538461538,
"top5_valid": 0.0,
"top5_test": 0.9992,
"epoch": 200
}
I got similar results on CIFAR-10.
On CIFAR-100, my accuracy is also about 1% below the claimed 83.3%. Has anyone succeeded? My config and result:
"model": {
"type": "wresnet28_10"
},
"dataset": "cifar100",
"aug": "randaugment",
"randaug": {
"N": 2,
"M": 14
},
"cutout": 16,
"batch": 256,
"epoch": 200,
"lr": 0.1,
"lr_schedule": {
"type": "cosine",
"warmup": {
"multiplier": 2,
"epoch": 5
}
},
"optimizer": {
"type": "sgd",
"nesterov": true,
"decay": 0.0005
},
"_version": 1,
"_timestamp": "2020/11/26 15:36:12",
"config": "confs/wresnet28x10_cifar100_b256.yaml",
"tag": "",
"dataroot": "data/private/pretrainedmodels",
"save": "cifar100_wres28x10zz.pth\r",
"cv": 0,
"only": {
"eval": false
}
{ "loss_train": 1.5054243803024292, "loss_valid": 0.0, "loss_test": 0.6414522695541381, "top1_train": 0.655108173076923, "top1_valid": 0.0, "top1_test": 0.8185, "top5_train": 0.7444711538461538, "top5_valid": 0.0, "top5_test": 0.9638, "epoch": 195 }
Same here. Unable to reproduce this repo's results. Any success?