learning-invariances
Some questions about the paper
Dear authors,
I have some questions about the experimental results, specifically whether the proposed algorithm really provides a substantial contribution (i.e., an improvement over the fixed-augmentation baseline). My questions are as follows:
- `n_copies` is set to only 1 for training and 4 for testing. I am curious why the paper does not give a further ablation study on increasing `n_copies` for training. (I know the paper describes that using `n_copies = 1` for training obtains strong (?) results, but I suspect that increasing `n_copies` for training does not really help much.)
- The CIFAR-10 experiment shows that testing with only 1 copy does not improve over the baseline (fixed aug.), which is quite disappointing. I thought the proposed learned augmentation distribution would be better than the fixed one. Is there any comment on this?
- Although testing with 4 copies from the learned augmentation improves over the baseline (fixed aug.), the experiment lacks a comparison against 4 copies from the fixed augmentation. I suspect that simply performing 4 fixed augmentations at test time could also obtain strong results. Is there any comment on this?
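To make the missing comparison concrete, here is a minimal sketch of the test-time augmentation protocol I have in mind, where predictions are averaged over `n_copies` augmented copies of each input. The `model` and `sample_augmentation` functions below are hypothetical stand-ins, not the paper's implementation; the point is only that the same `n_copies = 4` averaging can be applied to a fixed augmentation distribution as easily as to a learned one.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in classifier: logits from a fixed linear projection.
    W = np.linspace(-1.0, 1.0, x.size * 3).reshape(x.size, 3)
    return x @ W

def sample_augmentation(x, scale):
    # Hypothetical stand-in augmentation: additive noise whose scale plays
    # the role of the (fixed or learned) augmentation distribution's width.
    return x + rng.normal(0.0, scale, size=x.shape)

def predict_tta(x, n_copies, scale):
    # Test-time augmentation: average logits over n_copies augmented copies.
    logits = [model(sample_augmentation(x, scale)) for _ in range(n_copies)]
    return np.mean(logits, axis=0)

x = rng.normal(size=8)
# The comparison requested above: the same fixed augmentation width,
# evaluated with n_copies = 1 versus n_copies = 4.
pred_1_copy = predict_tta(x, n_copies=1, scale=0.1)
pred_4_copies = predict_tta(x, n_copies=4, scale=0.1)
```

Running this with a fixed `scale` for both settings would isolate the effect of multi-copy averaging from the effect of learning the augmentation distribution.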