Possible memory leak
We are experiencing a continuous memory increase in flower running on Kubernetes.
Flower version: 0.9.2
Docker file: https://hub.docker.com/r/ovalmoney/celery-flower/~/dockerfile/
Parameters:
- FLOWER_PORT
- CELERY_BROKER_URL
- FLOWER_BROKER_API
- FLOWER_BASIC_AUTH

Queues: 7
Workers: 17
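For context, the image's entrypoint presumably maps those environment variables onto flower's CLI flags; a minimal sketch of how we run it (all values below are placeholders, not our real config):

```shell
# Hypothetical run command matching the parameters listed above.
# Broker URL, port, and credentials are placeholders.
docker run -d \
  -e FLOWER_PORT=5555 \
  -e CELERY_BROKER_URL=redis://redis:6379/0 \
  -e FLOWER_BROKER_API=redis://redis:6379/0 \
  -e FLOWER_BASIC_AUTH=user:password \
  ovalmoney/celery-flower
```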
Does it grow with the number of tasks run? What is your `max_tasks` set to?
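For anyone unfamiliar with the option: flower keeps completed task state in memory, bounded by `--max_tasks` (default 10000), so lowering it caps that part of memory use. A hedged sketch (broker URL and the chosen limit are placeholders):

```shell
# Cap the in-memory task history at 2000 entries instead of the default 10000.
# Broker URL is a placeholder for your actual Redis/RabbitMQ endpoint.
celery flower --broker=redis://redis:6379/0 --max_tasks=2000
```

If memory still grows without bound with a low `--max_tasks` and no tasks running, that points at a genuine leak rather than normal task-state accumulation.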
I don't want to create a new issue, so here's some details from my case:
I have an instance of Flower running on Marathon monitoring Redis-based Celery workers. On a regular basis the instance is OOM-killed by the scheduler because of what looks like an obvious memory leak.
Queues: 42
Workers: at least 38 (micro-scaled based on queue length)
The worst thing is that it happens even when everything is left idle and no tasks are being processed.
Flower 0.9.3 running as a supervised process.
Having the same issue with flower 0.9.5 running in a Kubernetes cluster.
https://github.com/mher/flower/pull/1111 might be the cause of the memory leak. Please try the latest master version.
Having the same issue with the latest image available on Docker Hub running in Kubernetes.
With flower==1.0.0 (latest version on PyPI):

Any plan on publishing a Docker image with flower 1.0.0 on Docker Hub? We built one internally to try 1.0.0 in k8s, but would prefer getting it from Docker Hub if possible.
A container running in OpenShift from the docker.io/mher/flower:1.2.0 image is periodically OOMKilled and restarted. We have just 2 queues and 3 workers.
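In case it helps others confirm the same failure mode: the kernel OOM kill shows up as the container's last termination reason. A sketch of how to check it (pod name and namespace are placeholders):

```shell
# Prints "OOMKilled" if the previous restart was due to the OOM killer.
# Pod name and namespace are placeholders for your deployment.
kubectl -n monitoring get pod flower-0 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# "kubectl describe pod flower-0" shows the same under "Last State",
# along with the restart count.
```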

I can confirm that the same behaviour occurs in our EKS cluster using flower 2.0.1:
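Since several reports here mention growth while idle, it may help to attach a simple RSS log to the issue. A minimal sketch that samples the process's resident memory over time (the PID is a placeholder; assumes Linux with procps):

```shell
# Sample Flower's resident set size (KiB) at a fixed interval.
# A steadily rising series while no tasks are running suggests a leak
# rather than normal task-state accumulation.
FLOWER_PID=12345   # placeholder: replace with the actual flower pid

sample_rss() {
  # Print RSS in KiB for the given pid.
  ps -o rss= -p "$1" | tr -d ' '
}

# Log one timestamped sample every 5 minutes while the process is alive.
while kill -0 "$FLOWER_PID" 2>/dev/null; do
  echo "$(date -Is) $(sample_rss "$FLOWER_PID")"
  sleep 300
done
```

Running this for a few hours with the workers idle should make the growth curve (or its absence) unambiguous.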