
What is the difference between hidden_state and hidden_dim?

Open yustiks opened this issue 4 years ago • 2 comments

I saw that in the code, hidden_state is not implemented:

    def forward(self, input_tensor, hidden_state=None):
        """

        Parameters
        ----------
        input_tensor: todo
            5-D Tensor either of shape (t, b, c, h, w) or (b, t, c, h, w)
        hidden_state: todo
            None. todo implement stateful

meanwhile, hidden_dim is given. What is the difference between those two variables?
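For context, the two play different roles: `hidden_dim` is a *constructor* argument giving the number of channels of each cell's hidden state, while `hidden_state` would be the *runtime* value passed to `forward()` — the actual `(h, c)` tensor pair carried between calls (unimplemented here, hence the `todo`). A minimal sketch, assuming the usual zero-initialization that happens when `hidden_state=None` (the helper name `init_hidden` is illustrative, not the repo's exact code):

```python
import torch

def init_hidden(batch_size, hidden_dim, height, width):
    """Zero-initialized (h, c) pair for one ConvLSTM cell.

    hidden_dim fixes the channel dimension of both tensors; the
    tensors themselves are what a stateful forward() would accept
    and return as hidden_state.
    """
    h = torch.zeros(batch_size, hidden_dim, height, width)
    c = torch.zeros(batch_size, hidden_dim, height, width)
    return h, c

h, c = init_hidden(batch_size=4, hidden_dim=64, height=32, width=32)
print(h.shape)  # torch.Size([4, 64, 32, 32])
```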

yustiks avatar Oct 24 '21 19:10 yustiks

I have the same problem... and why does `len(kernel_size) == len(hidden_dim) == num_layers` need to be true?

yougrianes avatar Mar 16 '22 03:03 yougrianes

I have the same problem... and why does `len(kernel_size) == len(hidden_dim) == num_layers` need to be true?

Ohh... I think it's just so you can specify the parameters of each ConvLSTM cell individually, rather than simply copying one setting to every layer.
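That reading can be sketched in plain Python: each layer gets its own `kernel_size` and `hidden_dim` entry, so all three lengths must line up, and a scalar setting can be replicated per layer. The helper names below are illustrative, not the repo's exact code:

```python
def check_consistency(kernel_size, hidden_dim, num_layers):
    """Enforce one kernel_size and one hidden_dim entry per layer."""
    if not len(kernel_size) == len(hidden_dim) == num_layers:
        raise ValueError("kernel_size and hidden_dim need one entry per layer")

def extend_for_multilayer(param, num_layers):
    """Replicate a single setting so every layer uses the same value."""
    if not isinstance(param, list):
        param = [param] * num_layers
    return param

# Same kernel everywhere, but a different channel count per layer:
kernel_size = extend_for_multilayer((3, 3), num_layers=3)  # [(3, 3), (3, 3), (3, 3)]
hidden_dim = [32, 64, 128]
check_consistency(kernel_size, hidden_dim, num_layers=3)   # passes
```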

yougrianes avatar Mar 16 '22 03:03 yougrianes