
gRPC/REST TCP sockets cannot be bound to a specific host

Open · OvervCW opened this issue 3 years ago • 3 comments

Feature Request

Describe the problem the feature is intended to solve

The hostname 0.0.0.0 is hardcoded in the source code (with a comment that mistakenly claims that this results in listening on localhost):

https://github.com/tensorflow/serving/blob/e93dc58810f6758c2367db8ec76e74e7b3633120/tensorflow_serving/model_servers/server.cc#L347-L349

I would like the gRPC & REST APIs to actually listen on localhost/127.0.0.1 so that they are not exposed to the outside.

I'm aware of the option to create a UNIX socket instead using --grpc_socket_path, which I suppose I could proxy to a TCP socket that does listen on localhost. However, there's no such option for the REST API. The only option is to disable it completely by setting --rest_api_port=0.
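For the gRPC side, that proxying could look something like the following (a sketch only: it assumes socat is installed and uses a hypothetical socket path):

# Forward a localhost-only TCP port to the gRPC UNIX socket created
# with --grpc_socket_path=/tmp/tfserving_grpc.sock (hypothetical path):
socat TCP4-LISTEN:8500,bind=127.0.0.1,fork,reuseaddr UNIX-CONNECT:/tmp/tfserving_grpc.sock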

Describe the solution

I would like new command-line flags for setting the host address that the gRPC & REST APIs listen on. The existing --port and --rest_api_port arguments could even be extended to accept a string like 127.0.0.1:8500.
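To be clear, this syntax does not exist today; under the proposed extension, an invocation might look like:

tensorflow_model_server --port=127.0.0.1:8500 --rest_api_port=127.0.0.1:8501 --model_name=model --model_base_path=/models/model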

Describe alternatives you've considered

The flag --grpc_socket_path is an alternative for the gRPC API, but there is no alternative for the REST API.

Additional context

I want this for network security purposes: TensorFlow Serving should not be exposed any more than it needs to be.

Temporary workaround

One way to solve this problem temporarily, without having to fork TFS, is to override the bind() call using LD_PRELOAD, for example by using this snippet. You can then have TensorFlow Serving listen on localhost by running it like this:

BIND_ADDR="127.0.0.1" LD_PRELOAD=/bind_override/bind.so tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=model --model_base_path=/models/model
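For context, the linked snippet is essentially a bind() interposer. A minimal sketch of the idea (illustrative only; the file name is hypothetical, and it assumes the override address is passed via BIND_ADDR as above) could look like this:

// bind_override.c (hypothetical name) — rewrite 0.0.0.0 to $BIND_ADDR before binding
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <dlfcn.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen) {
    static int (*real_bind)(int, const struct sockaddr *, socklen_t);
    if (!real_bind)
        real_bind = (int (*)(int, const struct sockaddr *, socklen_t))
            dlsym(RTLD_NEXT, "bind");

    const char *override = getenv("BIND_ADDR");
    if (override && addr->sa_family == AF_INET &&
        addrlen >= sizeof(struct sockaddr_in)) {
        struct sockaddr_in rewritten;
        memcpy(&rewritten, addr, sizeof(rewritten));
        // Only rewrite the wildcard address 0.0.0.0 (INADDR_ANY).
        if (rewritten.sin_addr.s_addr == htonl(INADDR_ANY) &&
            inet_pton(AF_INET, override, &rewritten.sin_addr) == 1)
            return real_bind(sockfd, (struct sockaddr *)&rewritten,
                             sizeof(rewritten));
    }
    return real_bind(sockfd, addr, addrlen);
}

Compile it into a shared object with something like: gcc -shared -fPIC -o bind.so bind_override.c -ldl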

The log messages will continue to say that TFS is listening on 0.0.0.0, but netstat shows that it will be listening on localhost instead:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:8500          0.0.0.0:*               LISTEN      11/tensorflow_model 
tcp        0      0 127.0.0.1:8501          0.0.0.0:*               LISTEN      11/tensorflow_model

OvervCW avatar Jan 25 '22 17:01 OvervCW

Deploying a Docker container for the REST API on a server is very problematic. This is a very basic feature.

shba007 avatar Apr 26 '23 07:04 shba007

@OvervCW,

TF Serving uses "0.0.0.0" to listen on localhost in gRPC, as shown here, and uses "localhost" to open the REST/HTTP API, as shown here.

TensorFlow Serving with Docker uses localhost endpoints for the REST API. Example reference: here. Hope this resolves the security problem. Thank you!

singhniraj08 avatar Apr 28 '23 06:04 singhniraj08

0.0.0.0 is not localhost; it means "all interfaces", and this difference matters in a Kubernetes cluster or Docker network, where each pod/container has its own IP address. It needlessly exposes a service to other containers/pods on the same network.
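To make the exposure concrete (container and network names here are hypothetical), any neighboring container on the same Docker network can reach a server bound to 0.0.0.0:

docker network create appnet
docker run -d --network appnet --name tfserving \
    -v /models/model:/models/model -e MODEL_NAME=model tensorflow/serving
# A different container on the same network can query the REST API directly:
docker run --rm --network appnet curlimages/curl -s http://tfserving:8501/v1/models/model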

OvervCW avatar May 01 '23 08:05 OvervCW