tf_playground
support for regression model
To get a regression value from DNNLinearCombinedRegressor, can I do it with the Predict SignatureDef, or do I have to change it to the Regression SignatureDef? And do I need to make a custom signature_def_map when exporting?
@samithaj Predict SignatureDef is a very versatile signature: it accepts an arbitrary number of input and output tensors for inference, while Regression SignatureDef accepts only one input and one output (more here: SignatureDefs), so you can definitely use it.
Also, keep in mind that you can map any tensors, but you need to know/specify their names. The generic ones are 'inputs' and 'outputs'.
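To make the naming concrete, here is a minimal sketch of exporting a Predict-style signature where the input and output tensors get the generic 'inputs'/'outputs' names. This uses the newer tf.saved_model.save API rather than the r1.3-era signature_def_utils, and ToyRegressor is a made-up stand-in for the estimator, not the tutorial's model:

```python
import tempfile
import tensorflow as tf

# Toy stand-in for a regressor, just to show how the generic
# 'inputs'/'outputs' names end up in a Predict-style signature.
class ToyRegressor(tf.Module):
    def __init__(self):
        self.w = tf.Variable([[2.0]])

    @tf.function(input_signature=[tf.TensorSpec([None, 1], tf.float32, name='inputs')])
    def predict(self, inputs):
        # Returning a dict fixes the output tensor name to 'outputs'
        return {'outputs': tf.matmul(inputs, self.w)}

model = ToyRegressor()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir, signatures={'serving_default': model.predict})

# A client addresses the tensors by exactly those names
loaded = tf.saved_model.load(export_dir)
signature = loaded.signatures['serving_default']
result = signature(inputs=tf.constant([[3.0]]))
print(result['outputs'])  # the regression value, here 3.0 * 2.0
```

A TF Serving Predict request would reference the same keys: 'inputs' in the request's input map and 'outputs' in the response.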
Is it possible to get a regression output from Predict SignatureDef, or do I have to use Regression SignatureDef? The latter seems like some hassle; Predict SignatureDef seems easier.
When using the Classify and Regress APIs, TensorFlow Serving feeds serialized tf.Examples to the graph, so your serving_input_receiver_fn() should include a tf.parse_example() Op. When using the generic Predict API, however, TensorFlow Serving feeds raw feature data to the graph, so a passthrough serving_input_receiver_fn() should be used.
https://www.tensorflow.org/versions/r1.3/get_started/export
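The parsing step that the quote refers to can be sketched like this. It shows what a Classify/Regress-style serving_input_receiver_fn does internally: the client sends a serialized tf.Example, and the graph parses it against a feature spec. Note tf.io.parse_example is the newer name for the tf.parse_example Op mentioned above, and the feature names are illustrative:

```python
import tensorflow as tf

# What a Classify/Regress client sends: a serialized tf.Example
example = tf.train.Example(features=tf.train.Features(feature={
    'age': tf.train.Feature(float_list=tf.train.FloatList(value=[39.0])),
    'education': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'Bachelors'])),
}))
serialized = tf.constant([example.SerializeToString()])

# The parsing spec the serving_input_receiver_fn would apply; every
# feature the model needs must appear here, or it is silently dropped.
feature_spec = {
    'age': tf.io.FixedLenFeature([1], tf.float32),
    'education': tf.io.FixedLenFeature([1], tf.string),
}
parsed = tf.io.parse_example(serialized, feature_spec)
```

With the generic Predict API there is no such parsing step: the client feeds the raw feature tensors directly, so the receiver function is a passthrough.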
It just works. I didn't see it because the PyCharm debugger doesn't show tensors correctly; when I print result_future.result(), the regression prediction is there.
But it seems like your way of exporting only exports the real_valued_columns. You can check it in the client: if you change any of the _bytes_feature keys in feature_dict ('education', 'gender') to any other name, e.g. 'educationGG', it doesn't give any error.
I'm trying to solve this by changing all the functions from tf.contrib to the new canned estimator functions.
@samithaj I updated the tutorial to work with r1.3. Please, check if it works for you, and if you have any problems I'd be willing to help.
Still, it does not export the embedding_column types; I think it only exports the numeric_column types, so my served predictions are totally different from the predictions performed inside Python (est.predict).
You can see this clearly: when you comment out a _float_feature in wide_and_deep_client.py it gives an error, but when you comment out a _bytes_feature it doesn't.
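That asymmetry is exactly what you'd see if the exported parsing spec only lists the numeric columns. A small sketch of the diagnosis, using helper names in the style of the tutorial's client (_bytes_feature, _float_feature) and made-up feature names:

```python
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

# Client-side request with a deliberately misspelled categorical key
feature_dict = {
    'age': _float_feature(39.0),
    'educationGG': _bytes_feature(b'Bachelors'),  # wrong name; no 'education'
}
serialized = tf.constant([tf.train.Example(
    features=tf.train.Features(feature=feature_dict)).SerializeToString()])

# If the exported spec only lists numeric columns, the typo is silently
# ignored -- the serialized Example still parses without error:
numeric_only_spec = {'age': tf.io.FixedLenFeature([1], tf.float32)}
tf.io.parse_example(serialized, numeric_only_spec)  # no error

# A spec that also requires 'education' surfaces the problem:
full_spec = {
    'age': tf.io.FixedLenFeature([1], tf.float32),
    'education': tf.io.FixedLenFeature([1], tf.string),
}
try:
    tf.io.parse_example(serialized, full_spec)
    caught = False
except tf.errors.InvalidArgumentError:
    caught = True  # 'education' is required but missing
```

So if renaming a _bytes_feature key never errors, the string columns probably aren't in the serving graph's parsing spec at all.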
I found a Cloud ML example here, and it uses a custom serving_input_fn to do this, but it queries it with gcloud ml-engine local predict, so I'm not sure how to get it working with a Python client.
I'm not sure if I can help, but if you post code examples of what you are trying to do (what doesn't work) and an example of what you would like to achieve, I can try to extrapolate.