It seems to be a compatibility problem between the latest code and `uwsgi` and `pandas`. The code for `py34` is not tuned yet so it may have some bugs. You...
Thanks for reporting. How do you "mount only one gpu devices"? If you run the Docker container with `-e NVIDIA_VISIBLE_DEVICES='0'`, I think the container can only use one GPU device.
Thanks for reporting @xxllp. Are you using TensorFlow 2.0?
Thanks for reporting. Have you set up the H2O cluster to run with one H2O instance? It seems to be a network problem, but I'm not sure why it fails...
For the first question, GPU usage depends on your model and the batch size. The model file may be only `340MB`, but one of its operations, like a matrix multiplication...
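As a rough illustration (not the serving code itself), you can cap how much GPU memory a single TensorFlow 1.x session grabs, independent of the model's size on disk; the fraction below is just an example value:

```python
# Minimal sketch (TensorFlow 1.x API): limit GPU memory for one session.
# The 0.4 fraction is an arbitrary example, not a recommended value.
import tensorflow as tf

gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.4,  # use at most ~40% of the GPU memory
    allow_growth=True)                    # allocate memory lazily instead of all at once
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # Run inference here; real memory usage still depends on the ops and batch size.
    pass
```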
It may be a problem with the complexity of your model. Can you try to run inference on one image with pure TensorFlow `Session.run()`?
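Something like this rough sketch is what I mean (TensorFlow 1.x; the SavedModel path and tensor names are assumptions you would need to adjust):

```python
# Rough sketch: run a single image through a SavedModel with plain TensorFlow,
# bypassing the serving layer. Check the real tensor names with `saved_model_cli show`.
import numpy as np
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # "./model/1" and the SERVING tag are assumptions about your export.
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], "./model/1")
    image = np.random.rand(1, 224, 224, 3).astype(np.float32)  # one dummy image
    output = sess.run("output:0", feed_dict={"input:0": image})
    print(output)
```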
Hi @serlina, it depends on the ops in your TensorFlow SavedModel. If you use `tf.decode_base64(model_base64_placeholder)` to process the input data, you may try this client code, which has a test in our...
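For reference, here is a rough client sketch for a model that decodes base64 input. The endpoint, port, and JSON keys are assumptions based on the default REST API, and your signature may use different names:

```python
# Rough client sketch for a SavedModel whose signature calls tf.decode_base64.
# tf.decode_base64 expects web-safe base64, hence urlsafe_b64encode below.
import base64
import requests

with open("test.jpg", "rb") as f:
    image_base64 = base64.urlsafe_b64encode(f.read()).decode("utf-8")

# "model_name", "data" and the "images" key are assumptions about your signature.
payload = {"model_name": "default", "data": {"images": [image_base64]}}
response = requests.post("http://127.0.0.1:8500", json=payload)
print(response.json())
```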
It may be a JSON serialization issue. Can you check that the request data is a correctly formatted JSON string?
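One quick check is to make sure the request body is produced by a real JSON encoder rather than Python's `str()`; a minimal sketch, with the endpoint and keys as assumptions:

```python
# Minimal sketch: send a properly serialized JSON request body.
import json
import requests

data = {"model_name": "default", "data": {"keys": [[1.0, 2.0]]}}
body = json.dumps(data)   # valid JSON (double quotes, serializable types)
# body = str(data)        # common mistake: single quotes, not valid JSON
response = requests.post("http://127.0.0.1:8500", data=body,
                         headers={"Content-Type": "application/json"})
print(response.text)
```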
The auth settings are in https://github.com/tobegit3hub/simple_tensorflow_serving/blob/master/simple_tensorflow_serving/server.py#L89 . I'm not sure if they are available under WSGI, and any contribution is welcome.
Sorry about that @ashbeats. The latest versions of simple-tensorflow-serving (>=0.7.0) require `uwsgi` to run by default. You may try installing the older one with `pip install simple-tensorflow-serving==0.6.6`...