ConvLSTM_pytorch
What is the difference between hidden_state and hidden_dim?
I saw that in the code, hidden_state is accepted but never actually used:
def forward(self, input_tensor, hidden_state=None):
"""
Parameters
----------
input_tensor: todo
5-D Tensor either of shape (t, b, c, h, w) or (b, t, c, h, w)
hidden_state: todo
        None. todo implement stateful
    """
Meanwhile, hidden_dim is given in the constructor. What is the difference between these two variables?
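As far as I can tell, the two names live at different levels: hidden_dim is a constructor hyperparameter (the number of output channels of each cell), while hidden_state would be the runtime (h, c) tensor pair carried between calls, which the code currently always re-initializes to zeros. A rough pure-Python sketch of the relationship (the names init_hidden and the shapes here are illustrative, not the repo's exact code):

```python
# hidden_dim:   config, set once per layer at construction time.
# hidden_state: runtime value, one (h, c) pair per layer; the repo's
#               forward() ignores the argument and zero-inits instead.

def init_hidden(batch, hidden_dim, height, width):
    """Return the zero-initialized (h, c) shape pair for one cell."""
    shape = (batch, hidden_dim, height, width)
    return shape, shape  # h and c always share the same shape

# One (h, c) pair per layer: hidden_dim configures it, hidden_state holds it.
hidden_dims = [64, 64, 128]  # illustrative per-layer channel counts
hidden_state = [init_hidden(8, d, 32, 32) for d in hidden_dims]

print(hidden_state[2][0])  # → (8, 128, 32, 32): h of the third layer
```

So implementing "stateful" behavior would just mean using the passed-in hidden_state instead of calling the zero initializer.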
I have the same problem... and why does len(kernel_size) == len(hidden_dim) == num_layers need to be true?
Ohh... I think it just wants you to specify the parameters for each ConvLSTM cell individually, not just copy one value to every layer.
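Right, the constraint exists so each layer can get its own kernel size and hidden channel count. A sketch of how such a check/extension helper typically works (the function name and error message here are my assumption, not necessarily the repo's exact implementation):

```python
def extend_for_multilayer(param, num_layers):
    """If a single value is given, replicate it for every layer;
    otherwise require exactly one value per layer."""
    if not isinstance(param, list):
        param = [param] * num_layers
    if len(param) != num_layers:
        raise ValueError("param list length must equal num_layers")
    return param

# Same kernel for all three layers (single value gets replicated):
kernel_size = extend_for_multilayer((3, 3), 3)   # [(3, 3), (3, 3), (3, 3)]

# Different hidden size per layer (list must match num_layers):
hidden_dim = extend_for_multilayer([16, 32, 64], 3)
```

So passing a list of length num_layers lets you grow or shrink the channel count across the stack, while a scalar just means "same setting everywhere".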