
Possible memory leak

Open SharpEdgeMarshall opened this issue 8 years ago • 17 comments

We are experiencing a continuous memory increase in flower running on Kubernetes.

Flower version: 0.9.2
Dockerfile: https://hub.docker.com/r/ovalmoney/celery-flower/~/dockerfile/
Parameters:

  • FLOWER_PORT
  • CELERY_BROKER_URL
  • FLOWER_BROKER_API
  • FLOWER_BASIC_AUTH

Queues: 7
Workers: 17

[screenshot, 2018-01-15: memory usage graph]

SharpEdgeMarshall avatar Jan 15 '18 10:01 SharpEdgeMarshall

Does it grow with the number of tasks run? What is your max tasks setting?

johnarnold avatar Apr 09 '18 17:04 johnarnold
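For context on the question above: flower keeps a bounded history of task state in memory, and the `--max_tasks` option controls that bound. A minimal launch sketch, where the broker URL and the limit are placeholder values:

```shell
# Cap flower's in-memory task history; broker URL and limit are placeholders.
celery --broker=redis://redis:6379/0 flower --max_tasks=5000
```

Lowering `--max_tasks` reduces steady-state memory, but it would not explain unbounded growth: a true leak keeps climbing past any configured bound.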

I don't want to create a new issue, so here's some details from my case:

I have an instance of Flower running on Marathon, monitoring Redis-based Celery workers. On a regular basis the instance is OOM-killed by the scheduler because of what looks like an obvious memory leak.

[screenshot: memory usage graph]

Queues: 42
Workers: at least 38 (micro-scaled based on queue length)

The worst thing is that it happens even if we leave everything idle, not processing tasks.

Flower 0.9.3 running as a supervised process.

jsynowiec avatar Jan 31 '20 12:01 jsynowiec
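One way to check the claim that memory grows even while idle is to sample the flower process's resident set size over time on the host or inside the container; the process match pattern and sampling interval below are assumptions:

```shell
# Sample the flower process's RSS (in KiB) every 5 minutes.
# `pgrep -of flower` picks the oldest process whose command line matches "flower".
while true; do
    printf '%s %s\n' "$(date +%FT%T)" "$(ps -o rss= -p "$(pgrep -of flower)")"
    sleep 300
done
```

A steadily increasing RSS with no tasks flowing through the broker would confirm an idle leak rather than growth tied to task volume.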

Having the same issue with flower 0.9.5 running in a Kubernetes cluster.

Natalique avatar May 13 '21 13:05 Natalique

https://github.com/mher/flower/pull/1111 might be the cause of the memory leak. Please try the latest master version.

mher avatar May 30 '21 23:05 mher
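For anyone wanting to test the suggested fix before a tagged release, pip can install directly from the repository's master branch (standard pip VCS syntax; the repository URL is the one referenced above):

```shell
# Install flower from the current master branch instead of PyPI.
pip install --upgrade "git+https://github.com/mher/flower.git@master"
```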

Having the same issue with the latest image available on Docker Hub, running in Kubernetes.

jrochette avatar Jan 31 '22 16:01 jrochette

With flower==1.0.0 (latest version on PyPI):

[screenshot: memory usage graph]

martin-thoma avatar May 03 '22 07:05 martin-thoma

> With flower==1.0.0 (latest version on PyPI):

Any plan on publishing a Docker image with flower 1.0.0 to Docker Hub? We built one internally to try 1.0.0 on k8s, but would prefer pulling it from Docker Hub if possible.

jrochette avatar May 03 '22 15:05 jrochette

A container running in OpenShift from docker.io/mher/flower:1.2.0 image is periodically OOMKilled and restarted. We have just 2 queues and 3 workers.

[screenshot, 2023-01-20: memory usage graph]

jpopelka avatar Jan 20 '23 09:01 jpopelka

I can confirm that the same behaviour occurs in our EKS cluster using flower 2.0.1:

[screenshot: memory usage graph]

babinos87 avatar Jan 30 '24 09:01 babinos87