Niraj Singh
@gabbygab1233, There is no need to mount the model and config files in the docker command if you copy them into your own serving image. I tried...
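One way to bake a model into a custom serving image, sketched here with hypothetical names (`./my_model`, `my_serving_image`), is to copy it into a running base container and commit the result:

```shell
# Start a base serving container (no model yet)
docker run -d --name serving_base tensorflow/serving

# Copy the SavedModel directory into the container's model path
# (the local path and model name are assumptions for illustration)
docker cp ./my_model serving_base:/models/my_model

# Commit a new image that serves the baked-in model by default
docker commit --change "ENV MODEL_NAME=my_model" serving_base my_serving_image
docker stop serving_base
```

The resulting image can then be run without any `-v` mounts or extra flags.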
Closing this due to inactivity. Please take a look at the answers provided above; feel free to reopen and post your comments if you still have queries on this. Thank you!
@JCCVW, The devel GPU image for TF Serving looks to be about 5GB, of which ~1GB is TF Serving and its build artifacts, with the rest being dependencies pulled...
@sw2921, GPU resources for the model server are tracked via `kGpu`, as shown [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/resources/resource_values.h#L34). Please let us know if this issue can be closed. @huijiao1120, Please make sure you have...
@duonghb53, Under the hood, TensorFlow Serving uses the TensorFlow runtime to do the actual inference on your requests. And by default, TensorFlow maps nearly all of the GPU memory of...
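The model server exposes a flag to cap that allocation. A minimal sketch, assuming a model named `my_model` served from `/models/my_model` (both placeholders):

```shell
# Limit TensorFlow to ~40% of GPU memory instead of the near-total default
tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=my_model \
  --model_base_path=/models/my_model \
  --per_process_gpu_memory_fraction=0.4
```

With the fraction left at its default of 0, the runtime keeps its usual allocate-nearly-everything behavior.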
@chuyang-deng, TensorFlow Serving prediction APIs are defined as protobufs. Instead of loading the TensorFlow and TF Serving dependencies, you can replace them by generating the necessary tensorflow and tensorflow_serving protobuf Python...
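A sketch of generating just the needed Python stubs with `grpcio-tools`, assuming the relevant `.proto` files (and their imports) are available locally from a checkout of the serving repo:

```shell
pip install grpcio-tools

# Generate Python modules for the prediction service protos only,
# avoiding a full tensorflow / tensorflow-serving-api install
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. \
  tensorflow_serving/apis/predict.proto \
  tensorflow_serving/apis/prediction_service.proto
```

A lightweight gRPC client can then import the generated `*_pb2` / `*_pb2_grpc` modules directly.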
@sam-huang-1223, DT_BOOL is supported in RESTful API. Please try `true` or `false` as shown in [JSON Mapping](https://www.tensorflow.org/tfx/serving/api_rest#json_mapping) and let us know if you face any issues. Thank you!
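A minimal illustration of the JSON mapping: Python booleans serialize to JSON `true`/`false`, which is what the REST API expects for DT_BOOL inputs. The field names below are hypothetical, not from any real model:

```python
import json

# Hypothetical request body for a model with a boolean input tensor
payload = {"instances": [{"is_weekend": True, "features": [1.0, 2.0]}]}
body = json.dumps(payload)
print(body)  # {"instances": [{"is_weekend": true, "features": [1.0, 2.0]}]}
```

The serialized body can be POSTed as-is to the model's `:predict` endpoint.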
Closing this due to inactivity. Please take a look into the answers provided above, feel free to reopen and post your comments(if you still have queries on this). Thank you!
tensorflow_model_server with --model_base_path pointing to a gs:// bucket takes too long to start.
@kleysonr, I am unable to replicate this issue. I tried to load a test model from a Cloud Storage (gs://) bucket and the model loaded instantly, without any delay. Please find attached...
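For reference, a typical invocation for serving from a bucket looks like this sketch (bucket and model names are placeholders, and credentials are assumed to be supplied via the environment):

```shell
# Serve directly from a Cloud Storage bucket; authentication typically
# comes from GOOGLE_APPLICATION_CREDENTIALS in the environment
tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=test_model \
  --model_base_path=gs://my-bucket/test_model
```

Slow startup in this setup is usually worth checking against network latency or credential lookup rather than the server itself.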
@supercharleszhu, A similar feature request is already in progress in #1425. Requesting you to close this issue and follow that thread for updates. Thank you!