Redis Strings pattern
Proposal
A scaler based on a Redis key (strings) pattern.
Tools such as Bull and BullMQ implement queues on top of Redis using keys that follow the pattern `tool:queue:jobID`:

```
127.0.0.1:6379> KEYS *
4) "bull:myqueue:item1"
5) "bull:myqueue:item2"
```

So the scaler could match the pattern `tool:queue:*` to count the number of items in the queue.
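As a rough sketch (not actual KEDA code), counting keys that match such a glob pattern could look like the helper below. `countMatching` and the sample keys are hypothetical; a real scaler would collect the keys via an incremental `SCAN ... MATCH` loop instead of holding them in a slice.

```go
package main

import (
	"fmt"
	"path"
)

// countMatching counts how many Redis key names satisfy a glob
// pattern such as "bull:myqueue:*". Redis keys contain no "/",
// so path.Match's "*" wildcard behaves like the Redis MATCH glob here.
func countMatching(keys []string, pattern string) int {
	n := 0
	for _, k := range keys {
		if ok, _ := path.Match(pattern, k); ok {
			n++
		}
	}
	return n
}

func main() {
	keys := []string{"bull:myqueue:item1", "bull:myqueue:item2", "rq:other:1"}
	fmt.Println(countMatching(keys, "bull:myqueue:*")) // prints 2
}
```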
Scaler Source
Redis
Scaling Mechanics
Exactly like the Redis Lists scaler, but instead of the list length, count the keys that match the given pattern.
Authentication Source
Same as the other Redis scalers
Anything else?
I guess this scaler could be pretty easy to create 😄, as it is mostly a copy-paste of large parts of the Redis Lists scaler.
Furthermore, there is already a Prometheus exporter built for Bull (in TypeScript) that shows how to get the queue lengths: https://github.com/UpHabit/bull_exporter/blob/master/src/metricCollector.ts#L79
Some simple Node.js code to push items to a queue named `default`:

```shell
npm init -y
npm install bullmq
```

```javascript
// push.mjs — run with: node push.mjs (assumes Redis on localhost:6379)
import { Queue } from 'bullmq';

const queue = new Queue('default');
await queue.add('car', { color: 'blue' });
await queue.add('boat', { color: 'red' });
await queue.close();
```
I can provide extensive testing on this scaler. (I sadly don't have any golang experience)
Would it make sense to support this string-pattern approach, or rather a Bull scaler instead? I'm wondering if we shouldn't do the latter.
Hello @tomkerkhove,
I quickly checked another language, Python: https://github.com/rq/rq/blob/master/rq and it seems rq follows the same pattern: https://github.com/rq/rq/blob/master/rq/queue.py#L42
So we could implement it only for Bull, or try to make it more generic with the strings pattern?
The only change would be how the first "namespace"/"tool" segment of `tool:queue:*` is checked: Bull names it `bull:queue:`, rq uses `rq:queue`, ...
What do you think?
Edit: it seems a popular Golang package, asynq, follows the same format as well, using `asynq:{<qname>}:t:<task_id>`:
https://github.com/hibiken/asynq/blob/94719e325cc89f3c1fd56919929212977de97616/internal/rdb/rdb.go#L86
So either we give the ability to specify a string matching pattern, such as:
`stringPattern: "bull:myawesomequeue:"`
or we make it more focused on Bull with something like:
`queue: "myawesomequeue"`
and under the hood it will use `bull:myawesomequeue:`.
But then later on, if someone wants to create a scaler for a similar use case (such as rq or asynq), it might be just copy-paste plus a one-line change in the scaler file.
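To illustrate that "change 1 line" idea, here is a hypothetical helper that derives the key prefix per tool. The function name is made up, and the prefix formats are only assumptions taken from the examples discussed above:

```go
package main

import "fmt"

// queuePrefix builds the Redis key prefix a given tool uses for a
// queue name. The formats below are assumptions based on the Bull,
// rq, and asynq examples in this thread; this is purely illustrative.
func queuePrefix(tool, queue string) string {
	switch tool {
	case "bull":
		return fmt.Sprintf("bull:%s:", queue)
	case "rq":
		return fmt.Sprintf("rq:queue:%s", queue)
	case "asynq":
		return fmt.Sprintf("asynq:{%s}:t:", queue)
	default:
		// Unknown tool: fall back to a user-supplied pattern instead.
		return ""
	}
}

func main() {
	fmt.Println(queuePrefix("bull", "myawesomequeue")) // prints bull:myawesomequeue:
}
```

Supporting another tool would then be one more `case`, which is the copy-paste scenario described above.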
In that case I'd rather go with a prefix then, no?
Yes, I think the prefix option makes more sense (for example, the asynq package adds a `:t:` segment that is not present in rq or Bull: `asynq:{<qname>}:t:<task_id>`).
So matching a pattern / prefix is probably a better idea 👍
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
How do we move this forward?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
This still happens: https://kubernetes.slack.com/archives/CKZJ36A5D/p1679932718750589
Hi,
We encountered a similar issue and need to scale our pods up/down based on the Bull queue length.
I can provide help and testing on our platform :)
Nice! Thanks for it! ❤️
FYI, you can scale up pods on Bull/BullMQ today with the existing Redis lists scaler, using a trigger like:

```yaml
- type: redis
  metadata:
    hostFromEnv: REDIS_HOST
    portFromEnv: REDIS_PORT
    passwordFromEnv: REDIS_PASSWORD
    listName: "bull:default:wait" # bull is the prefix & default is the queue name
    listLength: "50"
    enableTLS: "false"
    databaseIndex: "0"
```

This works because Bull keeps waiting jobs in a Redis list at `bull:default:wait`. Just be careful: since it only scales on the waiting jobs, it can flap if you downscale too fast and your jobs are too slow.
Perfect, it works perfectly :)
Just a little contribution for n8n users, you need to use this structure:

```yaml
- type: redis
  metadata:
    address: {{ .Values.scaling.worker.redis.host }}
    listName: "bull:jobs:wait"
    listLength: "50"
    enableTLS: "false"
    databaseIndex: "0"
```
So I guess this is solved; I'll close the issue again.
@mathieuperochon could you please open a PR to our docs and add a note about this to the Redis Scaler documentation? https://keda.sh/docs/2.10/scalers/redis-lists/
Thanks!