RNN model with fixed batch size
Hi, I have created this RNN model:
import tensorflow as tf
from tensorflow import keras
input = keras.layers.Input(shape=[12, 100], batch_size=1)
x = keras.layers.Dense(20, activation='relu')(input)
x = keras.layers.BatchNormalization()(x)
fc1 = keras.layers.Dense(10, activation=None)(x)
gru1 = keras.layers.LSTM(20, return_sequences=True)(fc1)
gru2 = keras.layers.LSTM(20, return_sequences=True)(gru1)
model = keras.Model(inputs=[input], outputs=[fc1, gru2])
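For reference, a minimal sketch (rebuilding the model above) that checks it runs in Python and prints the two output shapes; the shapes are derived from the layer definitions, not stated in the thread:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Same architecture as in the post: fixed batch size of 1.
inp = keras.layers.Input(shape=[12, 100], batch_size=1)
x = keras.layers.Dense(20, activation='relu')(inp)
x = keras.layers.BatchNormalization()(x)
fc1 = keras.layers.Dense(10, activation=None)(x)
lstm1 = keras.layers.LSTM(20, return_sequences=True)(fc1)
lstm2 = keras.layers.LSTM(20, return_sequences=True)(lstm1)
model = keras.Model(inputs=[inp], outputs=[fc1, lstm2])

# Two outputs: fc1 has 10 units, the last LSTM has 20 units.
out_fc1, out_lstm2 = model.predict(np.ones((1, 12, 100), dtype=np.float32))
print(out_fc1.shape, out_lstm2.shape)  # (1, 12, 10) (1, 12, 20)
```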
When I try to use cppflow2 to run inference on this model, I get the error:
2020-11-01 07:12:48.086124: E tensorflow/core/framework/tensor.cc:555] Could not decode variant with type_name: "tensorflow::TensorList". Perhaps you forgot to register a decoder via REGISTER_UNARY_VARIANT_DECODE_FUNCTION?
2020-11-01 07:12:48.086856: W tensorflow/core/framework/op_kernel.cc:1744] OP_REQUIRES failed at constant_op.cc:82 : Invalid argument: Cannot parse tensor from proto: dtype: DT_VARIANT
tensor_shape {
}
variant_val {
type_name: "tensorflow::TensorList"
metadata: "\014\000\001\002\003\004\005\006\007\010\t\n\013\001\377\377\377\377\377\377\377\377\377\001\022\002\010\001\022\002\010\024"
}
{{function_node __inference_standard_lstm_1329_specialized_for_StatefulPartitionedCall_StatefulPartitionedCall_functional_1_lstm_PartitionedCall_at_tf_graph_specialized_for_StatefulPartitionedCall_StatefulPartitionedCall_functional_1_lstm_PartitionedCall_at_tf_graph}} {{function_node __inference_standard_lstm_1329_specialized_for_StatefulPartitionedCall_StatefulPartitionedCall_functional_1_lstm_PartitionedCall_at_tf_graph_specialized_for_StatefulPartitionedCall_StatefulPartitionedCall_functional_1_lstm_PartitionedCall_at_tf_graph}} Cannot parse tensor from proto: dtype: DT_VARIANT
tensor_shape {
}
variant_val {
type_name: "tensorflow::TensorList"
metadata: "\014\000\001\002\003\004\005\006\007\010\t\n\013\001\377\377\377\377\377\377\377\377\377\001\022\002\010\001\022\002\010\024"
}
The way I run the inference using cppflow is:
std::vector<float> numberlist(1*12*100, 1.0f);
cppflow::tensor input_1(numberlist, {1, 12, 100});
...
auto output = model({{"serving_default_input_1:0", input_1}}, {"StatefulPartitionedCall:0"});
It seems like the problem is with input = keras.layers.Input(shape=[12, 100], batch_size=1): if I change batch_size=1 to batch_size=None, the inference runs fine. What is also odd is that the same model (with batch_size=1) runs fine in Python (3.6.9, with TF 2.3.1). This led me to suspect that some settings or parameters may be incorrect when using the TensorFlow C API. The reason I need batch_size=1 is that I want to make the LSTM layers stateful, and that requires a fixed batch size. I tried searching for this issue on Google but could not find anything even remotely relevant. Is there any insight you could offer?
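To illustrate why the fixed batch size is needed, here is a hypothetical sketch (not from the thread) of how a stateful LSTM is typically declared in Keras; stateful=True carries the hidden state across successive calls, which is only possible when the batch dimension is known ahead of time:

```python
import tensorflow as tf
from tensorflow import keras

# A stateful LSTM requires a fixed batch size on the Input layer,
# which is why the original model pins batch_size=1.
inp = keras.layers.Input(shape=[12, 100], batch_size=1)
x = keras.layers.LSTM(20, return_sequences=True, stateful=True)(inp)
model = keras.Model(inputs=[inp], outputs=[x])

out = model.predict(tf.ones([1, 12, 100]))
print(out.shape)  # (1, 12, 20)
```

Between independent sequences, the carried-over state would normally be cleared with model.reset_states() (or layer.reset_states() in recent Keras versions).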
Hi @DwayneDuane,
I took a look at your problem and could not figure out why it does not work. I am afraid it is not a problem with cppflow but rather with the TF C API.
I could not make it work with any of the configurations I tried, so I suggest opening an issue in the TensorFlow repository; you can check this open issue, which reports the same problem.
If you discover new insights about the problem please let me know :)
Hi @DwayneDuane,
I have the same problem. Have you found a solution? I would appreciate any help! :)
There is a fix for the previous issue on the TF master branch (upcoming 2.6). You can build libtensorflow manually and see if it works. Otherwise, you likely need to report it to the TensorFlow developers.
Closing due to inactivity.