
How to access API outside of localhost?

Open kkmehta03 opened this issue 3 years ago • 6 comments

Hi!

I have WireGuard on my machine and a few other devices connected to it. Let's say my WireGuard IP is 10.0.0.1; in the config.properties file, I then set inference_address=http://10.0.0.1:8080. I'm able to use the API locally but unable to do so outside of the device (I keep getting a timeout error).

What I've tried so far: I have also tried changing inference_address to http://0.0.0.0:8080, but that didn't help either. Even running it on a different port, like 0.0.0.0:5000, didn't help. If I use a tunnel (like ngrok) and expose port 8080, it works perfectly.
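A plain TCP probe from the remote device helps separate "nothing is listening on that interface" from "traffic is blocked in transit". The sketch below makes no TorchServe-specific assumptions; 10.0.0.1 and port 8080 are the values from this report:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, unreachable
        return False

# Run from the remote WireGuard peer. If this returns False while other
# ports on the same host return True, the problem is the listener's bind
# address or a per-port firewall rule, not the VPN link itself:
# port_reachable("10.0.0.1", 8080)
```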

Meanwhile, if I have another application running on a separate port, it is accessible from my other device.

Can someone help out?

Thanks!

kkmehta03 avatar Jul 07 '21 07:07 kkmehta03

I'm able to use the API locally but unable to do so outside of the device (I keep getting a timeout error).

Could you elaborate a bit more on how you're accessing the device where torchserve is running? Repro instructions would be super helpful.

msaroufim avatar Jul 12 '21 05:07 msaroufim

Yeah, sure. The device where torchserve is running has WireGuard installed and running. I access it from a different machine via SSH. I'm able to reach the machine itself, but unable to access the APIs that torchserve exposes.

kkmehta03 avatar Jul 22 '21 05:07 kkmehta03

To reproduce the issue:

  1. Add these lines to your config.properties file:
     inference_address=http://0.0.0.0:8080
     management_address=http://0.0.0.0:8081
     metrics_address=http://0.0.0.0:8082
  2. Start the server from the command line: torchserve --start --ncs --model-store model_store --models <model name> --ts-config config.properties
  3. Ensure the device where the model is running is reachable from another device (over WireGuard or any other VPN).
  4. Try to trigger the inference API from that other device. You'll keep getting a timeout: even after binding to the global address, the API is still not accessible from outside the machine.
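The last step can be scripted without a full inference call: TorchServe's inference API exposes a GET /ping health endpoint, which is a lighter target for connectivity checks. A small probe sketch (the 10.0.0.1 address is the one from this thread):

```python
import urllib.error
import urllib.request

def probe_torchserve(base_url: str, timeout: float = 5.0):
    """Hit TorchServe's /ping health endpoint.

    Returns the HTTP status code on success, or the underlying error
    (e.g. a socket timeout) if the connection fails.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/ping", timeout=timeout) as resp:
            return resp.status
    except urllib.error.URLError as err:
        return err.reason

# From the other device: probe_torchserve("http://10.0.0.1:8080")
# 200 means the API is reachable; a timeout reproduces this issue.
```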

kkmehta03 avatar Jul 22 '21 07:07 kkmehta03

Hello! Can anyone help out, please? I'd really appreciate it! I see the documentation says:

TorchServe doesn’t support authentication natively. To avoid unauthorized access, TorchServe only allows localhost access by default. The inference API is listening on port 8080. The management API is listening on port 8081. Both expect HTTP requests. These are the default ports. See Enable SSL to configure HTTPS.

Is HTTPS needed to be able to access the inference API from outside the machine?

kkmehta03 avatar Aug 11 '21 08:08 kkmehta03

@KhyatiMehta3 HTTPS is not required to access the Inference API from outside; it can be accessed over plain HTTP. Are you facing any issues when using HTTP?

chauhang avatar Sep 15 '21 05:09 chauhang

This sounds closely related to issue 1626. My observed solution/workaround is to drop the use of the config.properties file altogether and pass those parameters on the command line.
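One possible shape of that workaround is shown below. Hedged heavily: whether the addresses can be supplied outside config.properties via TS_-prefixed environment variables depends on your TorchServe release and on envvar-based configuration being enabled (enable_envvars_config); verify against the configuration docs for your version before relying on this.

```shell
# Sketch only: assumes a TorchServe build that honors TS_<PROPERTY_NAME>
# environment variables when envvar config is enabled. Not verified here.
export TS_INFERENCE_ADDRESS="http://0.0.0.0:8080"
export TS_MANAGEMENT_ADDRESS="http://0.0.0.0:8081"
torchserve --start --ncs --model-store model_store --models <model name>
```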

dmuiruri avatar May 13 '22 07:05 dmuiruri

Hello, I have exactly the same issue; can someone help? I have two Docker containers, one running torchserve and the other a Flask web app. The inference API on port 8080 is not reachable from the other container, even though I have set inference_address to http://0.0.0.0:8080. I can access applications on other ports from the other container, so it appears to be an issue with torchserve itself. I can also access the API from inside the torchserve container, so the model is loaded correctly and inference works.
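For container-to-container access, a common setup (not confirmed as the cause in this thread) is to put both containers on one user-defined Docker network and have the client address the torchserve container by name rather than localhost. A sketch, where the container names, image names, and network name are hypothetical examples:

```shell
# Hypothetical sketch: shared user-defined bridge network so the Flask
# container can resolve the torchserve container by name.
docker network create inference-net
docker run -d --name torchserve --network inference-net pytorch/torchserve
docker run -d --name webapp --network inference-net my-flask-app
# Inside the webapp container, the inference API is then reached at
# http://torchserve:8080, not http://localhost:8080.
```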

fkmjec avatar Mar 17 '23 09:03 fkmjec