
Activation functions for training and testing phases

SITUSITU opened this issue · 1 comment

Do we also have to scale the labels to [-1, 1] and compute the loss on the scaled labels when using the tanh activation function during training?

If my task is to generate images (labels in [0, 800]), how can I get predicted outputs in [0, 800] during the testing phase?

SITUSITU · Jan 13 '23

Given a label y, you can normalize it as $\tilde{y} = \frac{y}{400} - 1 \in [-1, 1]$. This way you can train your network to predict the normalized labels; at inference time, just apply the inverse operation, so prediction = (model_output + 1) * 400.
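
A minimal sketch of that recipe in PyTorch (`model`, `x`, `labels`, and `criterion` are hypothetical placeholders for your own network, inputs, raw labels in [0, 800], and loss):

```python
import torch

def normalize(y):
    # [0, 800] -> [-1, 1], matching the range of a tanh output layer.
    return y / 400.0 - 1.0

def denormalize(y_tilde):
    # [-1, 1] -> [0, 800], the inverse of normalize().
    return (y_tilde + 1.0) * 400.0

# Training: the loss compares the tanh-bounded output to normalized labels.
#   loss = criterion(model(x), normalize(labels))
# Inference: undo the normalization to recover predictions in [0, 800].
#   prediction = denormalize(model(x))
```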

Alternatively, you can use an activation f(x) = (tanh(x) + 1) * 400 during training/testing, but note that this scales your gradients by 400, so you would need to scale your learning rate down to account for this.
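
If you go that route, here is a minimal sketch of such an activation as a PyTorch module (the class name is mine):

```python
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """tanh rescaled so the output lies in [0, 800]."""
    def forward(self, x):
        return (torch.tanh(x) + 1.0) * 400.0

# Sanity check: tanh(0) = 0, so the midpoint of the range comes out.
print(ScaledTanh()(torch.zeros(1)))  # tensor([400.])
```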

On another note, if you really want to predict labels, you should not use a 1-D output. You should predict an 800-D output, one score per label; using a single dimension imposes an ordering and geometry on the class space that you probably do not want.
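
As a sketch of that alternative, here is a hypothetical 800-way classification head trained with cross-entropy (all sizes and tensors below are made up for illustration):

```python
import torch
import torch.nn as nn

feature_dim, num_labels = 512, 800       # feature_dim is whatever your backbone outputs

head = nn.Linear(feature_dim, num_labels)  # one score (logit) per label
criterion = nn.CrossEntropyLoss()          # expects integer class targets

features = torch.randn(4, feature_dim)            # dummy batch of 4 feature vectors
targets = torch.randint(0, num_labels, (4,))      # dummy integer labels

logits = head(features)                   # shape (4, 800)
loss = criterion(logits, targets)
prediction = logits.argmax(dim=1)         # predicted label index per sample
```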

sarihl · Jun 15 '23