MLServer
Adaptive Batching is enabled but not supported for inference streaming.
Hi,
I am receiving the following warning from my custom model:
"WARNING: Adaptive Batching is enabled for model 'models' but not supported for inference streaming. Falling back to non-batched inference streaming."
What does this warning mean, and how can I resolve it?
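For context, adaptive batching in MLServer is enabled through the model's `model-settings.json`. A minimal sketch (the field values here are illustrative, and the model name is assumed):

```json
{
    "name": "models",
    "implementation": "my_module.MyCustomRuntime",
    "parameters": {
        "uri": "./model"
    },
    "max_batch_size": 8,
    "max_batch_time": 0.5
}
```

If I understand correctly, the warning appears because these batching settings are present while the request uses the streaming inference path, which adaptive batching does not support, so MLServer falls back to non-batched streaming.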