RobGAN
Some questions about training epochs
Can you tell me the number of training epochs before finetuning for the ImageNet-143 and CIFAR datasets?
I remember 10-20 epochs being fine. But you can always wait until the accuracy stops improving.
Do you mean training for 10-20 epochs on ImageNet-143? And how many finetuning epochs? I found that after finetuning, the discriminator's acc_fake on the test dataset is high (around 0.85), but acc_real is very low, only around 0.03. Is there something wrong with my training?
Also, I found you mention that accuracy under attack still needs to be done. Can the test_acc in your finetune.py be regarded as your accuracy under attack? I don't know why my test result is only around 0.03; thanks for your explanation.
Also, I found that the finetune dis_real_loss cannot converge.
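For reference, "accuracy under attack" usually means accuracy on adversarially perturbed test inputs, as opposed to clean test accuracy. Below is a minimal sketch of how it is typically measured with a PGD attack; the function names and hyperparameters are illustrative, not RobGAN's actual evaluation code:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD attack (illustrative hyperparameters, not RobGAN's)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x
        x_adv = (x + (x_adv.detach() + alpha * grad.sign() - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def accuracy_under_attack(model, loader, device):
    """Accuracy on PGD-perturbed test inputs (not the clean test_acc)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```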
What do you mean? acc_real is only 0.03?

What's your lambda value? It seems that you are not using the real data for training. And is this adversarial training?
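For illustration, here is a hypothetical sketch of how a lambda value could weight the real-data and fake-data terms of the discriminator loss; the names and the exact form are assumptions, not RobGAN's actual loss. The point is that if lambda effectively zeroes out the real-data classification term, the classifier never learns from real labels, and acc_real collapses:

```python
import torch.nn.functional as F

# Hypothetical discriminator loss: `lam` trades off the real-data and
# fake-data classification terms (illustrative, not RobGAN's code).
# The discriminator is assumed to return a real/fake logit plus class
# logits for each input.
def dis_loss(dis, x_real_adv, y_real, x_fake, y_fake, lam=0.5):
    out_real, cls_real = dis(x_real_adv)
    out_fake, cls_fake = dis(x_fake)
    gan_loss = F.softplus(-out_real).mean() + F.softplus(out_fake).mean()
    cls_loss = (lam * F.cross_entropy(cls_real, y_real)
                + (1.0 - lam) * F.cross_entropy(cls_fake, y_fake))
    return gan_loss + cls_loss
```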
But this is the co-training part (step 1), not finetuning. For the number of training epochs required on the ImageNet-143 data, you can refer to Figure 8 in our paper. It takes ~60 epochs in total.
Sorry for my misunderstanding; I have read your README file.
1. Co-train with (real_adv + fake)
2. Finetune with (real_adv + fake_adv)
Do these two steps cover your entire training and finetuning process? I cannot figure out what you mean by finetuning on real data, or when that happens.
My understanding is that your adversarial training is the first step (co-train), and the data augmentation is the second step (finetune using fake_adv)?
> 1. Co-train with (real_adv + fake)

Co-train is to train a good GAN model.

> 2. Finetune with (real_adv + fake_adv)

Finetune is purely for robust classification; we augment the training set with fake data.

> Do these two steps cover your entire training and finetuning process? I cannot figure out what you mean by finetuning on real data, or when that happens.

Yes. Finetuning on real data is just like how you train a normal classifier; in this case, we only need to keep the classification branch of the discriminator (a sketch follows below). For more information, I recommend reading the paper and these unofficial blogs (in Chinese): https://blog.csdn.net/waple_0820/article/details/99983510 and https://tiantianwahaha.github.io/2019/07/24/Rob-GAN-Generator-Discriminator-and-Adversarial-Attacker/
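To make the two steps concrete, here is a rough sketch of one finetuning update under the assumptions above. `cls_head` (the retained classification branch), `attack` (e.g. the PGD sketch earlier in this thread), and the generator attributes are all illustrative names, not the actual RobGAN API:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of one finetuning step (step 2): keep only the
# classification branch of the discriminator and train it like a normal
# classifier on real_adv + fake_adv. `cls_head`, `gen`, `attack`, and
# the generator attributes are assumptions, not the actual RobGAN API.
def finetune_step(cls_head, gen, opt, x_real, y_real, attack, n_fake=64):
    # Adversarial version of the real batch
    x_real_adv = attack(cls_head, x_real, y_real)

    # Fake samples from the trained conditional generator, also attacked
    z = torch.randn(n_fake, gen.z_dim, device=x_real.device)
    y_fake = torch.randint(0, gen.n_classes, (n_fake,), device=x_real.device)
    with torch.no_grad():
        x_fake = gen(z, y_fake)
    x_fake_adv = attack(cls_head, x_fake, y_fake)

    # One classifier update on the augmented adversarial batch
    logits = cls_head(torch.cat([x_real_adv, x_fake_adv]))
    loss = F.cross_entropy(logits, torch.cat([y_real, y_fake]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```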
So RobGAN is a defence model?