Sebastian Agethen
Hi Li Daiqiang, I am sorry to say that it seems my university deleted my personal homepage after I graduated, so that data is lost, and I don't have a backup...
The reason only 19 timesteps are used in that example comes from the original ConvLSTM. If you check their Theano code, around lines 140-150 in mnist_sequence_forecasting...., they take the...
On further inspection, I realize there might be a problem with the way the sequence markers are loaded in the example. If the sequence file contains a tensor 20 x...
Sorry for the late answer! You will need to write your own script for that. The field `id` is the video name; `actions` is a collection of actions `class ts...
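A rough sketch of what such a script could look like in Python. Note this is only a guess at the file layout: I am assuming each record is a JSON object whose `actions` entries carry a class label and timestamps (the names `video_001`, `ts`, and `te` below are hypothetical), so adjust it to the real format.

```python
import json

# Hypothetical annotation record: `id` is the video name, `actions` is a
# list of action entries, here assumed to hold a class label plus start
# and end timestamps -- adapt the keys to the actual file.
record = json.loads(
    '{"id": "video_001", "actions": [{"class": 3, "ts": 10, "te": 45}]}'
)
video_name = record["id"]
for action in record["actions"]:
    print(video_name, action["class"], action["ts"], action["te"])
```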
Hi, to make sure I understand this: You have T timesteps, and the input data has spatial dimensions 1xW (or Hx1 for that matter)? If that is correct, I believe...
If you have multiple inputs, you could just concatenate them. Caffe has a "Concat" layer: ``` layer { name: "concatenate" type: "Concat" bottom: "in1" bottom: "in2" bottom: "in3" top: "out"...
Ah, I see! The correct concatenation is along axis = 0, so T*N x C x H x W. After concatenation, you then reshape to T x N x C...
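A minimal prototxt sketch of that reshape step (layer and blob names are made up, and T = 10, N = 8, H = W = 28 are example values; `dim: -1` lets Caffe infer the channel axis):

```
layer {
  name: "reshape_to_TxN"
  type: "Reshape"
  bottom: "out"        # T*N x C x H x W, the output of the Concat layer
  top: "reshaped"      # T x N x C x H x W
  reshape_param {
    shape { dim: 10 dim: 8 dim: -1 dim: 28 dim: 28 }  # T, N, C (inferred), H, W
  }
}
```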
The sequence indicators are just the same as for the default LSTM. At time t, the previous hidden- and cell states are multiplied by that value, allowing you to "reset"...
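As a toy illustration of that reset mechanism (plain Python, not the actual Caffe code; a scalar stands in for the hidden/cell state, and incrementing by one stands in for the real LSTM update):

```python
# seq_ind[t] = 0 marks the start of a new sequence at timestep t,
# seq_ind[t] = 1 means the sequence continues.
seq_ind = [0.0, 1.0, 1.0, 0.0, 1.0]

h = 0.0          # toy "previous state"
history = []
for ind in seq_ind:
    h = h * ind  # multiplying by 0 resets the state at a sequence start
    h = h + 1.0  # stand-in for the regular LSTM state update
    history.append(h)

# The state grows within a sequence and is reset at each new one:
print(history)   # [1.0, 2.0, 3.0, 1.0, 2.0]
```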
Hey! Well, that is really difficult to say without having any details. So, you just create a network with the same prototxt as for the C++ standalone? Can you show...
Concerning the latter problem, this is due to the Hadamard term. It takes on the size of a single channel of the hidden state. You can disable that by specifying...
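For reference, the Hadamard terms are the peephole connections in the original ConvLSTM formulation (Shi et al., 2015), e.g. for the input gate:

```
i_t = \sigma\left( W_{xi} * \mathcal{X}_t + W_{hi} * \mathcal{H}_{t-1} + W_{ci} \circ \mathcal{C}_{t-1} + b_i \right)
```

where `*` is convolution and `\circ` the Hadamard (element-wise) product, so the weight `W_{ci}` must match the spatial size of a single channel of the state it multiplies.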