EmotionRecTraining

How to avoid overfitting?

Open zzydog1982 opened this issue 7 years ago • 3 comments

Hi. I'm interested in FER. In your experiment, do you split the dataset into training/validation/test sets? How do you avoid overfitting? Thanks!

zzydog1982 avatar May 24 '18 14:05 zzydog1982

Hi, the data actually comes labeled "test" and "training", so I use those two sets to train and validate. There are a couple of things you could do to prevent overfitting.

The first thing I would try is to increase the dropout rate or add more dropout layers (there is already dropout applied to the last dense/FC layers).
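To make the idea concrete, here is a minimal NumPy sketch of what a dropout layer does during training (inverted dropout, as used by Keras); the function name and shapes are illustrative, not from the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training=True):
    """Inverted dropout: zero a fraction `rate` of activations and
    rescale the survivors by 1/(1-rate) so the expected sum is unchanged."""
    if not training or rate == 0.0:
        return x
    keep = 1.0 - rate
    mask = rng.random(x.shape) < keep
    return x * mask / keep

# A batch of 4 examples with 8 activations each, all equal to 1.0.
activations = np.ones((4, 8))
out = dropout(activations, rate=0.5)
# Surviving activations are scaled to 2.0; dropped ones are 0.0.
```

Raising `rate` (e.g. from 0.3 to 0.5) drops more units each step, which acts as stronger regularization at the cost of slower convergence.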

Next, I would try stopping the training a little earlier. You could log the training error and the test/validation error at each epoch and pick the point (n_epochs) where the validation error is lowest.
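As a rough sketch of that early-stopping rule (assuming you have logged one validation error per epoch; the function names and patience value are my own, not from the repo):

```python
def early_stop_epoch(val_errors, patience=2):
    """Return the epoch (0-based) to stop at: the epoch with the lowest
    validation error so far, halting once `patience` epochs pass
    without improvement."""
    best_err = float("inf")
    best_epoch = 0
    waited = 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # validation error has been rising: likely overfitting
    return best_epoch

# Validation error falls, then rises as the model starts to overfit.
val_errors = [0.62, 0.48, 0.41, 0.39, 0.42, 0.47]
early_stop_epoch(val_errors)  # -> 3
```

Keras also ships an `EarlyStopping` callback that does this automatically during `model.fit`.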

There are other techniques like gathering more data or changing the architecture that would be slightly more difficult to accomplish. This article goes over some of those: https://elitedatascience.com/overfitting-in-machine-learning

JsFlo avatar May 26 '18 15:05 JsFlo

Thanks a lot. The data in the .csv are labeled "private test" and "public test". Can I use the data labeled "private test" as my validation set and the other as the test set?

zzydog1982 avatar May 27 '18 13:05 zzydog1982

Yeah, I used both the "public/private test" sets together as my validation set, but you should probably separate them like that if you're going to be tuning hyperparameters to reduce overfitting.
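That split can be done with just the stdlib `csv` module; this sketch assumes the standard fer2013.csv layout (an `emotion`, `pixels`, and `Usage` column, with `Usage` taking the values `Training`, `PublicTest`, and `PrivateTest`), shown here with an inline stand-in for the real file:

```python
import csv
from collections import defaultdict
from io import StringIO

# Tiny stand-in for fer2013.csv; replace StringIO with open("fer2013.csv").
sample = StringIO(
    "emotion,pixels,Usage\n"
    "0,0 0 0,Training\n"
    "3,1 1 1,PublicTest\n"
    "5,2 2 2,PrivateTest\n"
)

# Group rows by the Usage column.
splits = defaultdict(list)
for row in csv.DictReader(sample):
    splits[row["Usage"]].append(row)

train = splits["Training"]
val = splits["PublicTest"]    # tune hyperparameters against this set
test = splits["PrivateTest"]  # hold this out for the final evaluation
```

Which of the two test labels you treat as validation vs. final test is arbitrary; the important part is that the set used for hyperparameter tuning is never the one you report final numbers on.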

JsFlo avatar May 30 '18 15:05 JsFlo