Convolutional_LSTM_PyTorch
Multi-layer convolutional LSTM with PyTorch
I want to understand why you set self.num_features=4 on line 15. Thanks for your response.
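Presumably the 4 corresponds to the four LSTM gate pre-activations (input, forget, cell, output), which a single convolution can emit in one pass before the result is split into gates. A minimal sketch of that pattern, where the class and names are illustrative rather than the repo's exact code:

```python
import torch
import torch.nn as nn

class ConvLSTMCellSketch(nn.Module):
    """Illustrative ConvLSTM cell: one conv produces 4 * hidden_channels maps."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.conv = nn.Conv2d(in_channels + hidden_channels,
                              4 * hidden_channels,          # one chunk per gate
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, h, c):
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, g, o = torch.chunk(gates, 4, dim=1)           # split the 4 gate maps
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c
```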
The first problem is that in ConvLSTM.forward, the code uses the same x = input at every timestep. I guess the input shape of the forward function should be changed...
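A hedged sketch of the suggested fix, assuming the input carries a leading time axis (steps, batch, channels, H, W) and that `cell` behaves like a ConvLSTM cell; the function name is hypothetical:

```python
import torch

def forward_over_time(cell, inputs, h, c):
    """Feed a different slice of `inputs` to the cell at every timestep."""
    outputs = []
    for t in range(inputs.size(0)):
        h, c = cell(inputs[t], h, c)   # use the t-th frame, not a fixed x
        outputs.append(h)
    return torch.stack(outputs, dim=0), (h, c)
```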
`input = Variable(torch.randn(1, 512, 64, 32)).cuda()`: one dimension for the batch size, one for the channels, and the last two for H and W. Where is the sequence length?
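One common convention (an assumption here, not necessarily this repo's API) is to prepend a step dimension and slice per timestep:

```python
import torch

# Shapes are illustrative: (steps, batch, channels, H, W)
steps, batch, channels, height, width = 10, 1, 512, 64, 32
x = torch.randn(steps, batch, channels, height, width)

for t in range(steps):
    frame = x[t]          # (batch, channels, H, W), fed to the cell at step t
    print(frame.shape)    # torch.Size([1, 512, 64, 32])
```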
[The LSTM paper](ftp://ftp.idsia.ch/pub/juergen/TimeCount-IJCNN2000.pdf) defines [a specific rule](https://i.imgur.com/peOKqkL.png) for gradient updates of the 'peephole' connections. Specifically:

> [...] during learning no error signals are propagated back from gates via peephole connections to...
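One autograd-level way to express that truncation, assuming a learnable peephole weight Wci (the name is illustrative), is to detach the cell state inside the peephole product, so no error signal flows back from the gate into the cell state while Wci itself still receives gradient:

```python
import torch

def input_gate_with_peephole(conv_out_i, Wci, c_prev):
    """Input gate with a peephole term; gradients into c_prev are truncated."""
    peephole = Wci * c_prev.detach()   # no error signal back into c_prev here
    return torch.sigmoid(conv_out_i + peephole)
```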
Is there any reason for initializing Wci/Wcf/Wco in ConvLSTMCell as autograd Variables rather than nn.Parameters?
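A sketch of the nn.Parameter alternative being suggested: parameters appear in model.parameters(), move with .to(device)/.cuda(), and are saved in state_dict(), none of which holds for bare Variables stored on the module. The class and shapes below are assumptions for illustration:

```python
import torch
import torch.nn as nn

class PeepholeWeights(nn.Module):
    """Registers per-channel peephole weights as learnable parameters."""
    def __init__(self, hidden_channels, height, width):
        super().__init__()
        shape = (1, hidden_channels, height, width)
        self.Wci = nn.Parameter(torch.zeros(shape))
        self.Wcf = nn.Parameter(torch.zeros(shape))
        self.Wco = nn.Parameter(torch.zeros(shape))
```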
Hello, I have a question about the output of the first conv_lstm layer: what is its shape?
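Assuming 'same' padding as in the ConvLSTMCellSketch from the first sketch above (so this describes that sketch, not necessarily the repo's exact layer), the hidden state keeps the input's spatial size and has hidden_channels channels:

```python
import torch

# Reuses ConvLSTMCellSketch defined in the earlier sketch.
cell = ConvLSTMCellSketch(in_channels=512, hidden_channels=32)
x = torch.randn(1, 512, 64, 32)
h = torch.zeros(1, 32, 64, 32)
c = torch.zeros(1, 32, 64, 32)
h, c = cell(x, h, c)
print(h.shape)   # torch.Size([1, 32, 64, 32]): channels = hidden, H and W unchanged
```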