Niraj Singh

Results 263 comments of Niraj Singh

@DachuanZhao, As mentioned in [Model Server Configuration](https://www.tensorflow.org/tfx/serving/serving_config#model_server_configuration), we can instruct TensorFlow Serving to periodically poll for updated versions of the configuration file at the specified path by setting the `--model_config_file_poll_wait_seconds`...
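As a sketch, the polling flag is passed alongside `--model_config_file` when starting the model server; the mount path, config filename, and poll interval below are placeholder values:

```shell
# Sketch: re-read /models/models.config every 60 seconds (placeholder values).
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/models,target=/models \
  tensorflow/serving \
  --model_config_file=/models/models.config \
  --model_config_file_poll_wait_seconds=60
```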

Closing this due to inactivity. Please take a look at the answers provided above; feel free to reopen and post your comments if you still have queries on this. Thank you!

@spate141 / All, `AssertionError: Tried to export a function which references 'untracked' resource Tensor("308003:0", shape=(), dtype=resource).` can be solved by not defining trainable layers as class attributes in your sub-classes, as...

@alfredodeza, @yalcinaa, Setting the [servable_versions_always_present](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/server_core.h#L169-L174) param to `True` will make TF Serving fail if a wrong model or model path is provided to the model server. Once the model server fails, you can...

@yalcinaa, we have to add the `servable_versions_always_present: true` param in [model_config_list](https://www.tensorflow.org/tfx/serving/serving_config#model_server_config_details) and pass the config file to the model server. Thanks
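As a rough sketch of that config file (the model name and base path are placeholders, and the exact placement of the param follows the comment above rather than a verified proto schema):

```
# models.config -- sketch only; "my_model" and its path are placeholders.
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}
servable_versions_always_present: true
```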

@alfredodeza, This sounds like a feature we need to work on implementing. Let me bring this up to the team internally. In the meanwhile, please feel free to create a...

@supermoos, As [Serverless Inference Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) works for you, please let us know if this issue can be closed. You can also try a cluster or node pool with Spot VMs...

@OvervCW, TF Serving binds the gRPC endpoint to "0.0.0.0" (all interfaces) as shown [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/server.cc#L340-L342) and binds the REST/HTTP API to "localhost" as shown [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/server.cc#L418-L419). TensorFlow Serving with Docker uses localhost...
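For reference, once the container's ports are published to the host, the REST API can be probed from the host machine; the model name below is a placeholder:

```shell
# Sketch: query model status over the REST API (default port 8501).
curl http://localhost:8501/v1/models/my_model
```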

@liumilan, Can you please compare the time taken to generate predictions using the TensorFlow runtime directly versus TensorFlow Serving? Under the hood, TensorFlow Serving uses the TensorFlow runtime to do the...
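One way to make that comparison fair is a small timing helper like the hypothetical sketch below; `predict_fn` would wrap either a direct TensorFlow runtime call or a TF Serving client request (the helper itself is illustrative, not part of TF Serving):

```python
import time


def mean_latency(predict_fn, batches, warmup=3):
    """Hypothetical helper: average per-batch latency of predict_fn in seconds.

    predict_fn -- callable taking one input batch, e.g. a direct
    TensorFlow runtime call or a TF Serving REST/gRPC request.
    """
    # Warm-up calls so one-time costs (graph tracing, connection
    # setup) are excluded from the measurement.
    for _ in range(warmup):
        predict_fn(batches[0])
    start = time.perf_counter()
    for batch in batches:
        predict_fn(batch)
    return (time.perf_counter() - start) / len(batches)
```

Running it once with each backend and comparing the two averages isolates the serving overhead from the model's own compute time.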

Closing this due to inactivity. Please take a look at the answers provided above; feel free to reopen and post your comments if you still have queries on this. Thank you!