seq2seq-chatbot
TensorFlow Serving signature
I am trying to serve the model over TensorFlow Serving and have created the signature below, but it doesn't seem to work. Please help me @pskrunner14
encode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="encode_seqs")
decode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="decode_seqs")
# Inference Data Placeholders
encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")
export_path_base = './export_base/'
export_path = os.path.join(
    tf.compat.as_bytes(export_path_base),
    tf.compat.as_bytes(str(1)))
print('Exporting trained model to', export_path)
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
classification_inputs = tf.saved_model.utils.build_tensor_info(encode_seqs)
classification_outputs_classes = tf.saved_model.utils.build_tensor_info(decode_seqs)
# classification_outputs_scores = tf.saved_model.utils.build_tensor_info(loss)
classification_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={
            tf.saved_model.signature_constants.CLASSIFY_INPUTS:
                classification_inputs
        },
        outputs={
            tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
                classification_outputs_classes,
            # tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
            #     classification_outputs_scores
        },
        method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
tensor_info_x = tf.saved_model.utils.build_tensor_info(encode_seqs2)
tensor_info_y = tf.saved_model.utils.build_tensor_info(decode_seqs2)
prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'issue': tensor_info_x},
        outputs={'solution': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        'predict_solution':
            prediction_signature,
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            classification_signature,
    },
    main_op=tf.tables_initializer(),
    strip_default_attrs=True)
builder.save()
print('Done exporting!')
The exported model has the below signatures,
C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli show --dir ./export_base/1 --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['predict_solution']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['issue'] tensor_info:
        dtype: DT_INT64
        shape: (1, -1)
        name: encode_seqs_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['solution'] tensor_info:
        dtype: DT_INT64
        shape: (1, -1)
        name: decode_seqs_1:0
  Method name is: tensorflow/serving/predict

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_INT64
        shape: (32, -1)
        name: encode_seqs:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['classes'] tensor_info:
        dtype: DT_INT64
        shape: (32, -1)
        name: decode_seqs:0
  Method name is: tensorflow/serving/classify
But when I try to run it, I get an error as below
C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli run --dir ./export_base --tag_set serve --signature_def predict_solution --inputs='this is the text'
usage: saved_model_cli [-h] [-v] {show,run,scan} ...
saved_model_cli: error: unrecognized arguments: is the text'
@nidhikamath91 sorry, I'm not very familiar with TensorFlow Serving. You'd be better off posting this on their issue tracker. Although this seems like a CLI argument error.
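As an aside on the CLI error itself: the argument likely gets split because single quotes don't group arguments on Windows cmd, and in any case --inputs expects input_key=filename pairs pointing at .npy/.npz/pickle files rather than raw text; for quick tests --input_exprs takes a Python expression instead. Since the predict_solution signature takes int64 token ids of shape (1, -1), something along these lines should at least parse (the ids here are made up for illustration):

saved_model_cli run --dir ./export_base/1 --tag_set serve --signature_def predict_solution --input_exprs "issue=[[4, 17, 9]]"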
Thanks. But could you help me with how I can create placeholders for the input data and use them in data.py?
E.g. get a placeholder for the input query.
@nidhikamath91 according to what I could gather from the MNIST example on TF Serving, I think your tensor_info_y needs to be the score outputs, or in this case the softmax prediction defined here
...
y = tf.nn.softmax(net.outputs)
...
...
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
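Putting those fragments together, the prediction signature would then look roughly like this (a sketch assembled from the snippets above; it assumes y = tf.nn.softmax(net.outputs) and the encode_seqs2 placeholder already defined):

tensor_info_x = tf.saved_model.utils.build_tensor_info(encode_seqs2)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)  # softmax scores as the served output
prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'issue': tensor_info_x},
        outputs={'solution': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))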
And what would be x? The input query, how do I represent that?
Regards, Nidhi
@nidhikamath91 your def of x looks fine to me. However I'm not sure this is gonna work, since we're feeding the encoder state to the decoder during inference and then feeding the decoded sequence ids from the previous time steps one by one to the decoder until it outputs the end_id, so AFAIK you'll need to find a workaround for that. You should take a look at the inference method here and see what works for you. Best of luck!
In the below part,
input_seq = input('Enter Query: ')
sentence = inference(input_seq)
aren't you feeding the input query to the inference method? How can I take that input sequence as my x?
@nidhikamath91 we're manually converting the input query into token ids that are fed into the encoder as encode_seqs2, and then feed the encoder state to the decoder to decode in time steps as I explained above. There's manual unrolling involved so I'm not sure how you'll get your desired output from just the query. As I said, I'm not familiar with TF Serving to be able to help you with that, and it is beyond the scope of this example.
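For reference, the manual conversion being described looks roughly like this (a sketch; word2idx, unk_id and the exact preprocessing in main.py may differ slightly):

input_seq = input('Enter Query: ')
# map each word to its vocabulary id, falling back to the unknown id
seed_id = [word2idx.get(w, unk_id) for w in input_seq.lower().split(' ')]
# seed_id is then fed as encode_seqs2 to obtain the encoder state, after which
# the decoder is unrolled one token at a time until it emits end_id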
@nidhikamath91 apparently TF serving doesn't support stateful models.
https://github.com/tensorflow/serving/issues/724
Interesting. Could you send me a link to where you found this?
https://stackoverflow.com/questions/49471395/adding-support-for-stateful-rnn-models-within-the-tf-serving-api
Can I convert it to stateless in any way?
@nidhikamath91 I don't think that's possible since it's an autoencoding RNN application.
Hello,
So I was thinking of the below solution, tell me what you think about it.
I will create an inference graph with issue and solution placeholders and then serve the model.
encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")
issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")
solution = tf.placeholder(dtype=tf.string, shape=[1, None], name="solution")
table = tf.contrib.lookup.index_table_from_file(vocabulary_file=str(word2idx.keys()), num_oov_buckets=0)
seed_id = table.lookup(issue)
state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_id})
# Decode, feed start_id and get first word
# [https://github.com/zsdonghao/tensorlayer/blob/master/example/tutorial_ptb_lstm_state_is_tuple.py]
o, state = sess.run([y, net_rnn.final_state_decode],
                    {net_rnn.initial_state_decode: state,
                     decode_seqs2: [[start_id]]})
w_id = tl.nlp.sample_top(o[0], top_k=3)
w = idx2word[w_id]
# Decode and feed state iteratively
sentence = [w]
for _ in range(30):  # max sentence length
    o, state = sess.run([y, net_rnn.final_state_decode],
                        {net_rnn.initial_state_decode: state,
                         decode_seqs2: [[w_id]]})
    w_id = tl.nlp.sample_top(o[0], top_k=2)
    w = idx2word[w_id]
    if w_id == end_id:
        break
    sentence = sentence + [w]
return sentence
But I am getting the below error
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. For reference, the tensor object was Tensor("hash_table_Lookup:0", shape=(1, ?), dtype=int64) which was passed to the feed with key Tensor("encode_seqs_1:0", shape=(1, ?), dtype=int64).
How do I proceed? @pskrunner14
@nidhikamath91 you can't feed tensors into input placeholders, just convert them to numpy arrays before doing so.
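A minimal sketch of that runtime conversion, reusing the issue placeholder and lookup table from the snippet above (untested; the lookup table has to be initialized first, and input_seq stands for the raw query string):

sess.run(tf.tables_initializer())  # initialize the vocabulary lookup table
# evaluate the lookup tensor so a plain numpy array comes out
seed_id_np = sess.run(seed_id, feed_dict={issue: [input_seq.split(' ')]})
# the numpy array can then be fed into the encoder placeholder
state = sess.run(net_rnn.final_state_encode, feed_dict={encode_seqs2: seed_id_np})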
But converting them to numpy arrays is possible only during runtime, right? With eval()?
@nidhikamath91 yes you could try constructing the graph in such a way that you only need to feed the seed id into issue placeholder at runtime which in turn passes the lookup tensor to the net rnn directly, bypassing encode_seqs2 entirely.
Right now I was building an inference graph such that I pass the issue string to the issue placeholder and create the lookup tensor, but how do I pass it to the net rnn?
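For illustration, one way that wiring could look (a rough, untested sketch; model here stands in for the model-building function in main.py, whose actual name and arguments may differ, and the vocabulary is assumed to be dumped to a vocab.txt file rather than passed as str(word2idx.keys())):

issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")
table = tf.contrib.lookup.index_table_from_file(
    vocabulary_file="vocab.txt",   # assumption: one vocabulary word per line
    num_oov_buckets=1)
seed_ids = table.lookup(issue)     # int64 tensor, shape [1, None]

# build the inference network on top of the lookup output instead of the
# encode_seqs2 placeholder, so only `issue` has to be fed at serving time
net, net_rnn = model(seed_ids, decode_seqs2, is_train=False, reuse=True)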