django-q
FIFO queue for AWS SQS broker
I'd like to switch from the Redis broker to SQS to increase the reliability of my task queue.
After reading the source code I noticed that the broker will create a LIFO queue (the default with boto3), while I need a FIFO. I would say the majority of users also need a FIFO, but I may very well be wrong.
Workaround
I could create an SQS FIFO queue by hand (via the AWS console or the AWS CLI) with the same name as in my Django-Q config, but I don't like this hidden, out-of-band step.
EDIT: It would not work anyway, as SQS FIFO queues require every message to include the MessageGroupId and MessageDeduplicationId extra arguments when enqueuing.
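For the record, this is roughly what a FIFO enqueue looks like with plain boto3 (the queue name and payload are placeholders, and the default AWS credentials/region configuration is assumed):

```python
import uuid
import boto3

# Hypothetical sketch: enqueue a payload to a hand-created FIFO queue.
# "django-q.fifo" and the message body are placeholders, not django-q code.
sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="django-q.fifo")

queue.send_message(
    MessageBody="<signed task payload>",
    # Both arguments are mandatory for FIFO queues (unless content-based
    # deduplication is enabled on the queue), which is why just reusing the
    # queue name is not enough: the broker itself would have to pass them.
    MessageGroupId="django-q",
    MessageDeduplicationId=str(uuid.uuid4()),
)
```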
Suggestion
To avoid any breaking change, I suggest adding an optional config parameter for SQS called fifo. By default, we would have {'fifo': False} (i.e. the same behavior as today), but it could be overridden to True to create a FIFO queue.
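Roughly what I had in mind, as a sketch only: the fifo key does not exist in django-q today, and its placement inside the sqs dict is just my proposal; the other keys follow the documented SQS settings.

```python
# settings.py (sketch of the proposed option, not a working feature)
Q_CLUSTER = {
    "name": "myproject",
    "workers": 4,
    "sqs": {
        "aws_region": "<region>",
        "aws_access_key_id": "<key id>",
        "aws_secret_access_key": "<secret key>",
        # Proposed, non-existent option: create/use a .fifo queue and have
        # the broker set MessageGroupId / MessageDeduplicationId on enqueue.
        "fifo": True,
    },
}
```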
After doing some experimentation, I think it's a bad idea after all: a FIFO queue only delivers one in-flight batch per MessageGroupId, which impairs the concurrency feature, and concurrency is the key feature here.
Waiting for other opinions (in case there is something to be done anyway), but if my second thoughts are correct, feel free to discard this issue and the associated PR.
As far as I understand, SQS will by default use a standard queue, which tries to be a FIFO but has no guarantee of absolute ordering or exactly-once processing. For most use cases this is good enough. The SQS FIFO queue adds a layer of complexity to the broker, as you found out. Maybe I will adapt this at some point. I am currently looking into streaming brokers like Redis Streams and Kafka, which employ the same pattern of group identifiers to control ordering and delivery.
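To illustrate the pattern I mean (purely illustrative, not django-q code; the stream, group, and consumer names are placeholders, using redis-py's stream commands):

```python
import redis

r = redis.Redis()

# Producer: append a task entry to the stream; Redis assigns a monotonic ID,
# which gives the ordering guarantee.
r.xadd("tasks", {"payload": "<signed task>"})

# One-time setup: create a consumer group that starts at the beginning.
try:
    r.xgroup_create("tasks", "django-q", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Consumer: read new entries for this group/consumer, then acknowledge them
# so delivery is tracked per group.
entries = r.xreadgroup("django-q", "worker-1", {"tasks": ">"}, count=10, block=1000)
for stream, messages in entries:
    for message_id, fields in messages:
        # ... process fields ...
        r.xack("tasks", "django-q", message_id)
```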
Hi @Koed00, I'm also wondering whether the ORM broker supports FIFO ordering when running multiple clusters on multiple machines?