BrunoQin

2 comments by BrunoQin

@N-Kingsley Hi, 1) I think the parameters of each attention layer are the same (the layer is shared), but the data you pass through it is different, such as `features` and...
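A minimal sketch of what sharing attention parameters across different inputs can look like (the repo in question isn't shown here, so the layer shape and the `features` tensor below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_attention(dim):
    """Create one attention 'layer': the weight vector `w` is built once,
    so every call to the returned function reuses the same parameters."""
    w = rng.normal(size=(dim, 1))               # parameters created once
    def attend(x):                               # x: (seq_len, dim)
        scores = x @ w                           # (seq_len, 1)
        weights = np.exp(scores - scores.max())  # softmax over the sequence
        weights /= weights.sum()
        return (weights * x).sum(axis=0)         # weighted sum -> (dim,)
    return attend

attn = make_attention(32)                # one shared set of parameters...
features = rng.normal(size=(10, 32))     # hypothetical `features` input
hidden = rng.normal(size=(10, 32))       # a second, different input
out_a = attn(features)                   # ...applied to different data
out_b = attn(hidden)
```

The point is that `attn` is constructed a single time, so both calls see identical weights; only the data differs.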

@N-Kingsley Hi, I know what you mean. The code in this repo only adds one attention layer after the decoder CNN, and the author uses a CNN, an LSTM, and a `for` loop to predict...
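Roughly, that structure (CNN features, a single attention layer, then an LSTM stepped in a `for` loop) could look like the following PyTorch sketch. The class name, layer sizes, and number of decoding steps are all made up for illustration; they are not taken from the repo:

```python
import torch
import torch.nn as nn

class Seq2SeqSketch(nn.Module):
    """Hypothetical sketch: one attention layer after a CNN,
    with an LSTM cell unrolled by a `for` loop at prediction time."""
    def __init__(self, channels=8, hidden=16, steps=3):
        super().__init__()
        self.hidden = hidden
        self.steps = steps
        self.cnn = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.attn = nn.Linear(channels, 1)       # the single attention layer
        self.lstm = nn.LSTMCell(channels, hidden)
        self.out = nn.Linear(hidden, channels)

    def forward(self, x):
        # x: (batch, channels, seq_len)
        feats = self.cnn(x).transpose(1, 2)      # (batch, seq_len, channels)
        h = torch.zeros(x.size(0), self.hidden)
        c = torch.zeros(x.size(0), self.hidden)
        preds = []
        for _ in range(self.steps):              # the `for` loop: one LSTM step per output
            w = torch.softmax(self.attn(feats), dim=1)
            context = (w * feats).sum(dim=1)     # attention-weighted context vector
            h, c = self.lstm(context, (h, c))
            preds.append(self.out(h))
        return torch.stack(preds, dim=1)         # (batch, steps, channels)

model = Seq2SeqSketch()
y = model(torch.randn(2, 8, 5))
```

Here the attention layer is applied once per decoding step inside the loop, which matches the "one attention layer after the decoder CNN" description in the comment.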