IndRNN_pytorch
issue on grad
Hi, thanks for your great work! However, I encountered a problem while reproducing the code.
During "train", the grad of the most layers is "None". Except layers "classify_weight"、"classify_bias"、"RNN5_weight"、"RNN5_bias" have not-None grad, others have grad which are None. As a result, error happens when running to "grad_climp", as showed in the following figure.
I suspect something goes wrong with the "RNN5_weight_hh" parameter during loss.backward().
How can I fix this problem? Looking forward to your reply, thank you!