[bitnami/tensorflow-serving] How to access the model through the REST API
Name and Version
bitnami/tensorflow-serving:latest
What architecture are you using?
arm64
What steps will reproduce the bug?
- I run the container with the following command:
docker run -d -p 8501:8501 --name tensorflow-serving \
  --volume /Users/mima0000/Desktop/model:/bitnami/model-data \
  -e MODEL_NAME=model \
  bitnami/tensorflow-serving:2.14.1
- I use a POST request to access the server.
I followed the official instructions, but it did not work:
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/bitnami/model-data/model/:predict
{
  "error": "Malformed request: POST /v1/bitnami/model-data/model/:predict"
}
What is the expected behavior?
I want to know how to call the predict API correctly.
What do you see instead?
mima0000@MacBook-Air Desktop % curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/bitnami/model-data/model/:predict
{
  "error": "Malformed request: POST /v1/bitnami/model-data/model/:predict"
}
Additional information
No response
Let me add: I would like to know what the correct POST address is, and how its usage differs from the official one. Thank you for answering my question.
Hi,
Could you share which instructions you followed? It seems to me that the issue is not related to the Bitnami packaging of TensorFlow but to the usage of TensorFlow Serving itself. Did you check with the upstream developers? Maybe their documentation is not up to date.
I followed https://github.com/tensorflow/serving.
What's more, the TensorFlow website https://tensorflow.google.cn/tfx/serving/api_rest also documents the predict API format: POST http://host:port/v1/models/${MODEL_NAME}[/versions/${VERSION}|/labels/${LABEL}]:predict. I can run this successfully with the official TensorFlow image, so I want to know: is the predict API format changed in your arm64 Docker image?
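For reference, a minimal sketch of how that template expands, assuming the model was loaded under the name model (host and port as in the docker run command above):

curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/model:predict

# Optionally pin a version, assuming a version directory named 1 exists:
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/model/versions/1:predict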
Hi @crash-zwt
As you can see, the Bitnami image does expose the TensorFlow REST API on port 8501 by default:
- https://github.com/bitnami/containers/tree/main/bitnami/tensorflow-serving#customizable-environment-variables
However, the error you're obtaining seems to be a problem with your POST request. Why are you using the endpoint http://localhost:8501/v1/bitnami/model-data/model/:predict? Shouldn't it be http://localhost:8501/v1/models/model:predict?
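As a quick check, TensorFlow Serving also exposes a GET status route per model in the same upstream REST API; if the model name is wrong or the model never loaded, it returns an error instead of the version status (assuming the default port here):

curl http://localhost:8501/v1/models/model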
I have already tried the URL http://localhost:8501/v1/models/model:predict, but it still returns an error.
Hi @crash-zwt
Could you please follow these steps?
$ git clone git@github.com:tensorflow/serving.git
$ export TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
$ docker run -d -p 8501:8501 \
--name tensorflow-serving \
--volume "$TESTDATA/saved_model_half_plus_two_cpu:/bitnami/model-data" \
-e TENSORFLOW_SERVING_MODEL_NAME=half_plus_two \
bitnami/tensorflow-serving:2.14.1
$ curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
{
  "predictions": [2.5, 3.0, 4.5]
}
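In case the predict call fails, these standard TensorFlow Serving REST routes (not Bitnami-specific) can confirm whether the model actually loaded and what signatures it exposes:

$ curl http://localhost:8501/v1/models/half_plus_two
$ curl http://localhost:8501/v1/models/half_plus_two/metadata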
Hi @juan131
I have followed your steps and got this error. These are the logs:
Sorry, my fault! Use TENSORFLOW_SERVING_MODEL_NAME instead of MODEL_NAME as the environment variable.
It still doesn't work.
Please ensure the --volume flag has this value: "$TESTDATA/saved_model_half_plus_two_cpu:/bitnami/model-data"
Yes, the model has been copied to /bitnami/model-data. There are two subdirectories in the /bitnami directory: the tensorflow-serving folder is empty, and the model-data folder has the mounted model.
Hi @crash-zwt
Please note that under /bitnami/model-data there shouldn't be a "half_plus_two" folder, but directly the child version folder: "000000123"
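That is, a sketch of the expected layout, assuming a standard SavedModel export (exact file names may vary):

/bitnami/model-data/
  000000123/
    saved_model.pb
    variables/
      variables.data-00000-of-00001
      variables.index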
Thank you, it did work!