
some questions about training epoch

Open muyouyang opened this issue 5 years ago • 13 comments

Can you tell me the number of training epochs before finetuning, for the ImageNet 143-category and CIFAR datasets?

muyouyang avatar Dec 16 '19 03:12 muyouyang

I remember 10-20 epochs is just fine. But you can always wait until the accuracy stops improving.

xuanqing94 avatar Dec 16 '19 22:12 xuanqing94
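The "wait until the accuracy stops improving" rule amounts to patience-based early stopping. A minimal, self-contained sketch (the `should_stop` helper and the `patience` value are illustrative, not part of the RobGAN code):

```python
def should_stop(history, patience=3):
    """Stop when the best accuracy has not improved for `patience` epochs."""
    if len(history) <= patience:
        return False
    best = max(history)
    # index of the most recent epoch that achieved the best accuracy
    last_best = max(i for i, acc in enumerate(history) if acc == best)
    return len(history) - 1 - last_best >= patience

# Accuracy peaks at epoch 2 and then drifts down for 4 epochs:
accs = [0.60, 0.72, 0.78, 0.77, 0.76, 0.75, 0.74]
print(should_stop(accs, patience=3))  # → True
```

Checking per epoch with a patience of a few epochs matches the "10-20 epochs are usually enough" observation while guarding against stopping on a one-epoch dip.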

Do you mean training 10-20 epochs for ImageNet 143-category? What about the number of finetune epochs? I found that after finetuning, the discriminator's acc_fake on the test dataset is high (around 0.85), but acc_real is very low, only around 0.03. Is there something wrong with my training?

muyouyang avatar Dec 17 '19 00:12 muyouyang

Also, I found that you mentioned that accuracy under attack still needs to be done. So the test_acc in your finetune.py cannot be regarded as your accuracy under attack? I don't know why my test result is only around 0.03; thanks for your explanation.

muyouyang avatar Dec 17 '19 00:12 muyouyang
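For context, "accuracy under attack" means evaluating the classifier on adversarially perturbed test inputs (e.g. via PGD), not the clean test_acc that finetune.py logs. A toy sketch of the PGD update with projection onto the L-infinity ball, using a hand-coded gradient of a 1-D quadratic loss (all names here are illustrative, not the repo's API):

```python
def pgd_step(x_adv, x0, grad, alpha, eps):
    """One PGD step: move along the sign of the loss gradient (to *increase*
    the loss), then project back into the L-inf ball of radius eps around
    the clean input x0."""
    sign = 1.0 if grad > 0 else -1.0 if grad < 0 else 0.0
    x_adv = x_adv + alpha * sign
    return max(x0 - eps, min(x0 + eps, x_adv))

# Toy loss L(x) = (x - 3)**2, so grad L = 2*(x - 3); the attack ascends it.
x0, eps, alpha = 0.0, 0.3, 0.1
x = x0
for _ in range(10):
    g = 2 * (x - 3)              # gradient of the toy loss at x
    x = pgd_step(x, x0, g, alpha, eps)
print(x)                          # perturbed input, pinned to the eps boundary
```

On a real model the same loop would use the classifier's loss gradient with respect to the image; accuracy under attack is then the accuracy measured on these perturbed inputs.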

Also, I found that the finetune _dis_real_loss cannot converge.

muyouyang avatar Dec 17 '19 00:12 muyouyang

Do you mean training 10-20 epochs for ImageNet 143-category? What about the number of finetune epochs? I found that after finetuning, the discriminator's acc_fake on the test dataset is high (around 0.85), but acc_real is very low, only around 0.03. Is there something wrong with my training?

What do you mean? acc_real is only 0.03?

xuanqing94 avatar Dec 17 '19 00:12 xuanqing94

[Screenshot of the finetune.txt training log] This is the finetune.txt.

muyouyang avatar Dec 17 '19 01:12 muyouyang

What's your lambda value? It seems that you are not using the real data for training. And is this adversarial training?

xuanqing94 avatar Dec 17 '19 07:12 xuanqing94
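The lambda in question weighs the fake-data term of the discriminator loss against the (adversarial) real-data term; if the real term is effectively dropped, acc_real can collapse as reported above. A hedged sketch of the combination (the exact loss form in RobGAN may differ):

```python
def dis_loss(loss_real_adv, loss_fake, lam):
    """Combined discriminator loss: adversarial real-data term plus a
    lambda-weighted fake-data term. If the real term is zeroed out (or lam
    dominates), the discriminator effectively never sees real data."""
    return loss_real_adv + lam * loss_fake

print(dis_loss(0.7, 0.4, 0.5))
```

Logging the two terms separately is an easy way to confirm whether the real-data branch is actually contributing gradient during finetuning.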

[Screenshot of the training code] Looking at your training code, I initially thought you intentionally train only on adversarial examples. Do you mean adding this in train.py? Here "ori" means the real picture without adversarial perturbation. [Two screenshots of the proposed train.py change]

muyouyang avatar Dec 17 '19 16:12 muyouyang

But this is the co-training part (step 1), not fine-tuning. For the number of training epochs required on the ImageNet-143 data, you can refer to Figure 8 in our paper; it takes ~60 epochs in total.

xuanqing94 avatar Dec 17 '19 19:12 xuanqing94
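Step 1 (co-training) alternates discriminator and generator updates, with the real batch replaced by its adversarial perturbation before the discriminator sees it. A schematic of that loop with stub components (all function names here are placeholders, not the repo's API):

```python
def co_train(num_epochs, batches, attack, update_dis, update_gen, sample_fake):
    """Schematic RobGAN step-1 loop: perturb the real batch, then run the
    usual GAN alternation (discriminator step, generator step)."""
    epochs_done = []
    for epoch in range(num_epochs):
        for real, labels in batches:
            real_adv = attack(real, labels)      # adversarial version of the real batch
            fake = sample_fake(len(real))        # generator samples
            update_dis(real_adv, fake, labels)   # D trained on real_adv + fake
            update_gen(labels)                   # G trained to fool D
        epochs_done.append(epoch)
    return epochs_done

# Tiny smoke run with stub components that just count calls.
calls = {"dis": 0, "gen": 0}
epochs = co_train(
    num_epochs=2,
    batches=[(["img"] * 4, [0] * 4)],
    attack=lambda x, y: x,                       # identity stand-in for PGD
    update_dis=lambda r, f, y: calls.__setitem__("dis", calls["dis"] + 1),
    update_gen=lambda y: calls.__setitem__("gen", calls["gen"] + 1),
    sample_fake=lambda n: ["fake"] * n,
)
print(epochs, calls)
```

The point of the sketch is only the ordering: the attack runs on the real batch first, so the discriminator never trains on clean real images during co-training, which matches the screenshot being discussed.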

I'm sorry for my misunderstanding; I have read your README file.

  1. co-train with (real_adv + fake)
  2. finetune with (real_adv + fake_adv)

Do these two steps cover your whole training and finetuning process? I cannot figure out what you meant by finetuning on real data, or when it happens.

muyouyang avatar Dec 18 '19 01:12 muyouyang

My understanding of your adversarial training is that it is your first-step co-training, and the data augmentation is your second-step finetuning using fake_adv?

muyouyang avatar Dec 18 '19 01:12 muyouyang

I'm sorry for my misunderstanding; I have read your README file.

  1. co-train with (real_adv + fake)

Co-training is to train a good GAN model.

2. finetune with (real_adv + fake_adv)

Finetuning is purely for robust classification. We augment the training set with fake data.

Do these two steps cover your whole training and finetuning process? I cannot figure out what you meant by finetuning on real data, or when it happens.

Yes. Finetuning on real data is just like how you train a normal classifier; in this case, we only need to keep the classification branch of the discriminator. For more information, I recommend reading the paper and unofficial blogs (in Chinese): https://blog.csdn.net/waple_0820/article/details/99983510, https://tiantianwahaha.github.io/2019/07/24/Rob-GAN-Generator-Discriminator-and-Adversarial-Attacker/

xuanqing94 avatar Dec 19 '19 05:12 xuanqing94
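The finetuning step described above drops the real/fake GAN head, keeps only the classification branch, and trains it on real data augmented with generator samples, both adversarially perturbed. A schematic of the batch construction (names are illustrative, not the repo's API):

```python
def build_finetune_batch(real_batch, fake_batch, attack):
    """Finetune batch: adversarially perturbed real images plus
    adversarially perturbed generator samples (the data augmentation).
    Only the classifier head consumes this batch; the GAN head is unused."""
    real_adv = [attack(x) for x in real_batch]
    fake_adv = [attack(x) for x in fake_batch]
    return real_adv + fake_adv

batch = build_finetune_batch(
    real_batch=["r0", "r1"],
    fake_batch=["f0", "f1", "f2"],
    attack=lambda x: x + "_adv",   # string stand-in for a PGD attack
)
print(batch)
```

This is exactly "finetune with (real_adv + fake_adv)" from the two-step summary: from the classifier's point of view it is ordinary adversarial training, with the generator acting as an extra data source.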

Is this RobGAN a defense model?

1996-jb avatar Apr 14 '21 06:04 1996-jb