nats-queue-worker
Dynamic max_inflight
Expected Behaviour
I have a function which runs ffmpeg to convert a video. It's CPU-bound, so I've used HPAv2 to autoscale the pods running my function. This works great, but since my functions take a while to finish, I'm using the queue worker for async processing. My issue is that I cannot set max_inflight dynamically. Ideally, each pod running my function would process n tasks at once; in my case, n would be 1. If I set max_inflight to 1, then even though my autoscaler brings up a second pod, it is never used in parallel, since the queue worker only dispatches one invocation at a time. If I set max_inflight to a higher value, I risk invoking the function multiple times before my autoscaler can kick in, and my long-running function ends up handling multiple tasks on the same pod. Ideally, max_inflight would mirror n * pod_count. I could scale the queue workers themselves, but they would have to perfectly mirror the count of function pods at any given moment. Is there some way to tell the queue worker to always keep max_inflight equal to the number of instances of my function? Without this, autoscaling OpenFaaS is really limited to non-async functions.
Current Behaviour
Possible Solution
Steps to Reproduce (for bugs)
Context
I'm trying to autoscale my OpenFaaS function, which I invoke asynchronously.
Your Environment
- Kubernetes version: 1.24
- Are you using Docker Swarm or Kubernetes (FaaS-netes)? Kubernetes (FaaS-netes) with CRD
- Operating System and version (e.g. Linux, Windows, MacOS): Linux
- Link to your project or a code example to reproduce issue:
Hi @half2me
On our blog we have an article that describes a pattern to handle this use-case. Take a look and let us know if you have any additional questions. https://www.openfaas.com/blog/limits-and-backpressure/
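In short, the pattern from that post moves the concurrency limit onto the function itself: the of-watchdog's own max_inflight caps requests per replica, surplus invocations are rejected with HTTP 429, and the queue worker retries them later, so effective capacity follows the HPA's replica count. A hedged stack.yml sketch (the function name and handler path are illustrative):

```yaml
# Per-replica limit via of-watchdog, as described in the blog post.
# Each replica accepts one request at a time; a busy replica returns
# HTTP 429 and the invocation is retried by the queue worker.
functions:
  convert-video:
    lang: dockerfile
    handler: ./convert-video
    environment:
      max_inflight: "1"   # concurrency limit per function replica
```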
Regards, Han
/add label: support,question