rnnoise
Why is the GRU activation different between your rnn_train.py and rnn_data.c?
Which activation is better, based on your research? Please advise!
Can you explain what you mean?
I found that rnn_data.c shows the activation of vad_gru as relu, but the model initialized in rnn_train.py uses a tanh activation, like this:
```c
/* rnn_data.c */
const GRULayer vad_gru = { vad_gru_bias, vad_gru_weights, vad_gru_recurrent_weights, 24, 24, ACTIVATION_RELU };
```

```python
# rnn_train.py
vad_gru = GRU(24, activation='tanh', recurrent_activation='sigmoid', return_sequences=True, name='vad_gru', kernel_regularizer=regularizers.l2(reg), recurrent_regularizer=regularizers.l2(reg), kernel_constraint=constraint, recurrent_constraint=constraint, bias_constraint=constraint)(tmp)
```
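For comparison, here is a minimal sketch (an assumption, not the repo's actual script) of what the Keras side would look like with the activation switched to relu so that it matches ACTIVATION_RELU in rnn_data.c. The Dense front end, output layer, and reg value are simplified placeholders; only the layer size, name, and activations follow the snippets quoted above.

```python
# Hypothetical minimal model with a relu vad_gru; not the authors' script.
from keras.models import Model
from keras.layers import Input, Dense, GRU
from keras import regularizers

reg = 1e-6  # assumed regularization strength; rnn_train.py defines its own

main_input = Input(shape=(None, 42))             # 42 features per frame
tmp = Dense(24, activation='tanh')(main_input)   # simplified front end
vad_gru = GRU(24, activation='relu',             # relu, matching rnn_data.c
              recurrent_activation='sigmoid',
              return_sequences=True, name='vad_gru',
              kernel_regularizer=regularizers.l2(reg),
              recurrent_regularizer=regularizers.l2(reg))(tmp)
vad_output = Dense(1, activation='sigmoid', name='vad_output')(vad_gru)
model = Model(inputs=main_input, outputs=vad_output)
```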
So I wonder whether you changed the activation when you did your own training. Also, when I try to change the activation in rnn_train.py from tanh to relu, the loss gets bigger during training. What have I done wrong?
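One common reason the loss blows up with relu is that the GRU state is no longer bounded the way it is with tanh. Here is a sketch of the usual mitigations (general assumptions, not the authors' confirmed recipe): keep a weight-clipping constraint like the `constraint` the quoted snippet already passes in, and clip gradients while lowering the learning rate.

```python
# Hypothetical stabilizers for a relu GRU; the clip value and learning
# rate are illustrative assumptions, not values confirmed from the repo.
from keras.constraints import Constraint
from keras import backend as K
from keras.optimizers import Adam

class WeightClip(Constraint):
    """Constrain every weight to [-c, c] so relu states cannot explode."""
    def __init__(self, c=0.499):
        self.c = c
    def __call__(self, p):
        return K.clip(p, -self.c, self.c)
    def get_config(self):
        return {'name': self.__class__.__name__, 'c': self.c}

constraint = WeightClip(0.499)     # pass as kernel/recurrent/bias constraint
opt = Adam(lr=1e-4, clipnorm=1.0)  # smaller lr plus gradient-norm clipping
# model.compile(optimizer=opt, loss='binary_crossentropy')
```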
denoise_gru is also different. Is relu correct, according to the paper and rnn_data.c? https://arxiv.org/pdf/1709.08243.pdf
After a long training run, I found that relu is better.
@jmvalin: The activation type in rnn_train.py for the GRU layers (noise, vad, and denoise) is tanh, whereas the actual rnn_data.c file in the src directory says RELU. Is there a mismatch between what's in the repository and the script used to generate those weights with RELU? Also, when I change the activation to RELU in rnn_train.py, I get NaN loss for most of the epochs. I'd appreciate your help. Thanks.
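If the loss turns into NaN as soon as the activation is RELU, one practical guard (a generic Keras pattern, not something taken from this repo) is to abort on NaN and back off the learning rate automatically:

```python
# Generic NaN guards for training with relu (assumed setup; 'model',
# 'x_train', and 'y_train' are placeholders for the real training script).
from keras.callbacks import TerminateOnNaN, ReduceLROnPlateau

callbacks = [
    TerminateOnNaN(),                              # stop instead of training on NaN
    ReduceLROnPlateau(monitor='loss', factor=0.5,  # halve lr when loss stalls
                      patience=2, min_lr=1e-6),
]
# model.fit(x_train, y_train, batch_size=32, epochs=120, callbacks=callbacks)
```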