fast-autoaugment
Why can the algorithm work?
Intuitively, the optimizer could simply choose no augmentation at all to achieve higher validation accuracy. Why does the algorithm still work? Looking forward to your answer.
I have the same concern. I hope the authors can clarify it.
Me too. If there existed a sufficiently powerful optimizer, I think the search would inevitably end up selecting no augmentation at all.
In my opinion, this is why the author divides the training set into K folds: to keep a density gap between D_M and D_A. In other words, if D_M and D_A followed strictly the same distribution, any optimization under this objective would return no augmentation.
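For what it's worth, here is a minimal sketch of the kind of objective being discussed: a model is trained on D_M and each candidate policy T is scored by that model's accuracy on T(D_A), the held-out fold. Because the model never saw D_A, "no augment" is not automatically optimal; a policy wins only if it makes T(D_A) better match what the D_M-trained model learned. The classifier and the toy policies below are my own illustrative choices, not the repo's implementation.

```python
# Minimal sketch (not the authors' code) of scoring augmentation policies
# on a held-out fold. LogisticRegression and the example policies are
# placeholders for illustration only.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def search_policies(X, y, candidate_policies, n_splits=5):
    """Score each policy T by the accuracy of a model trained on D_M
    when evaluated on T(D_A), averaged over K folds."""
    scores = {name: [] for name, _ in candidate_policies}
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, held_idx in kf.split(X):
        X_m, y_m = X[train_idx], y[train_idx]   # D_M: fit the model here
        X_a, y_a = X[held_idx], y[held_idx]     # D_A: score policies here
        model = LogisticRegression(max_iter=1000).fit(X_m, y_m)
        for name, policy in candidate_policies:
            scores[name].append(model.score(policy(X_a), y_a))
    # Policies whose augmented D_A the D_M-trained model handles best rank first.
    return sorted(scores.items(), key=lambda kv: -np.mean(kv[1]))

# Illustrative candidates: identity ("no augment") vs. Gaussian noise.
policies = [
    ("no_augment", lambda X: X),
    ("gaussian_noise", lambda X: X + 0.1 * np.random.randn(*X.shape)),
]
```

With a strictly identical distribution across folds, the identity policy would indeed tend to win in this sketch; the K-fold split only keeps a gap between D_M and D_A at the sample level, which is the point being debated here.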