Alexander Overvoorde
> @OvervCW,
>
> TF Serving uses "0.0.0.0" to listen on localhost in gRPC as shown [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/server.cc#L340-L342) and uses "localhost" to open the REST/HTTP API as shown [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/server.cc#L418-L419).
>
> Tensorflow...
@dvaldivia Why would large bodies cause 404 errors?
We are also running into this issue, and I can confirm that version 22.10 was still working fine. We specifically started seeing the `Failed to allocate memory for requested buffer...
Until such a package is available, another way that I use to always run an up-to-date version is to run the Docker image:

```shell
alias k9s="docker run --pull always --rm...
```
For us this currently makes the ONNX runtime unusable, because we have a lot of segmentation models that all take 12x as long as they do with TensorFlow/TensorRT.
/remove-lifecycle stale
It is definitely still an issue we are dealing with.
We are currently stuck on onnxruntime 1.18.1, because any newer version causes our models to crash with errors like this.
We stopped using onnxruntime and switched to TensorRT due to persistent erratic behavior like this.