serving
A flexible, high-performance serving system for machine learning models
Remove the label "stat:awaiting response" when an issue/PR becomes unstale.
## Feature Request ### Describe the problem the feature is intended to solve Recently I found that building TensorFlow Serving with tcmalloc and setting a soft limit can mitigate these kind...
## Feature Request If this is a feature request, please fill out the following form in full: ### Describe the problem the feature is intended to solve While using TensorFlow...
Truncated messages, as described in https://github.com/tensorflow/serving/issues/2164, kill debuggability. Rather than expecting the client to receive potentially large messages, we simply log the full message on the server. Testing: WIP
## Feature Request ### Describe the problem the feature is intended to solve For now, TensorFlow Serving doesn't export build information for Prometheus. If TF Serving exports metrics like [Prometheus](https://github.com/prometheus/common/blob/94bf9828e56d9670579b28a9f78237d3cd8d0395/version/info.go#L41-L60)...
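The linked Prometheus convention is a constant gauge, fixed at 1, whose labels carry the build metadata. A minimal sketch of the exposition-format line such a metric would produce — the metric name, label names, and values here are illustrative assumptions, not what TF Serving actually emits:

```python
# Sketch of a Prometheus "build info" metric line: a gauge pinned to 1
# whose labels (version, revision, ...) carry the build metadata.
def build_info_metric(name, labels):
    # Prometheus exposition format: name{label="value",...} 1
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} 1"

line = build_info_metric(
    "tensorflow_serving_build_info",           # hypothetical metric name
    {"version": "2.10.0", "revision": "deadbeef"},  # illustrative values
)
print(line)
# → tensorflow_serving_build_info{revision="deadbeef",version="2.10.0"} 1
```

Dashboards can then join this gauge against other series by label to show which build a given instance is running.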
We want to monitor memory usage of the TensorFlow Serving runtime on a per-model basis. Currently we can get the total memory used by TensorFlow, but we don't have a...
Need to use -O (output file) instead of -o (log file)
## Feature Request ### Describe the problem the feature is intended to solve TensorFlow Serving should have Docker images with AVX2, FMA, et al. enabled, with different tags so...
OS version: CentOS 7 tfserving version: [tensorflow/serving:2.10.0-gpu](https://hub.docker.com/layers/tensorflow/serving/2.10.0-gpu/images/sha256-183724e62d47acc5b9fa93ddbcb7eeedbfb0ead28cbe2a0a6e5fa27abcebbcb6) kube version: v1.27.1 containerd version: v1.6.6 I am trying to run TF Serving in a bare-metal Kubernetes cluster. Since the official did not provide an...