Flower loads indefinitely
Describe the bug
http://localhost:5557/ loads indefinitely despite Flower starting as expected.
To Reproduce
Steps to reproduce the behavior:
- Start flower
celery -A project_name \
--broker="redis://redis:6379/0" \
flower
- Navigate to http://localhost:5557/
- The page is stuck loading indefinitely and remains blank
Flower is launched as part of a docker-compose Django application alongside 6 other containers (including redis, beat and celery), all of which start as expected.
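As a sanity check (a minimal diagnostic sketch, not part of the app), broker reachability from inside the flower container can be verified with kombu, reusing the redis://redis:6379/0 URL from the command above:

# diagnostic sketch: confirm the flower container can reach the broker
# (assumes the redis://redis:6379/0 URL from the flower command above)
from kombu import Connection

with Connection("redis://redis:6379/0") as conn:
    conn.ensure_connection(max_retries=3)  # raises if redis is unreachable
    print("broker reachable:", conn.as_uri())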
Expected behavior
The Flower dashboard should load, which it does not.
Running ps aux inside the container, I can see that Flower is running:
root@36ea937babc1:/app# ps aux | grep flower
root 1 0.0 0.0 5488 3244 ? Ss 16:34 0:00 /bin/bash /start-flower
root 111 9.5 0.8 1807884 283864 ? Sl 16:35 0:26 /usr/local/bin/python /usr/local/bin/celery -A project_name --broker=redis://redis:6379/0 flower --basic_auth=admin:admin --loglevel=DEBUG
root 156 0.0 0.0 4836 892 pts/0 S+ 16:40 0:00 grep flower
Netstat also looks healthy:
root@36ea937babc1:/app# netstat -tuln | grep 5555
tcp 1 0 0.0.0.0:5555 0.0.0.0:* LISTEN
tcp6 0 0 :::5555 :::* LISTEN
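Since the browser hits port 5557 on the host while Flower listens on 5555 in the container (presumably a docker-compose port mapping), a quick sketch like the following can show whether Tornado responds at all from inside the container; the admin:admin credentials are the --basic_auth values from the ps output above:

# diagnostic sketch: probe the Flower HTTP endpoint from inside the container
# (assumes Flower listens on 0.0.0.0:5555 as shown by netstat above)
import base64
import urllib.request

req = urllib.request.Request("http://localhost:5555/")
req.add_header("Authorization", "Basic " + base64.b64encode(b"admin:admin").decode())
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("status:", resp.status, "bytes read:", len(resp.read()))
except Exception as exc:  # a timeout here would point at Flower/Tornado rather than the port mapping
    print("request failed:", exc)

If this returns a 200 inside the container, the hang is more likely in the host port mapping; if it also hangs, the problem is in Flower/Tornado itself.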
System information
root@9b4a75cf222c:/app# python
Python 3.11.4 (main, Jun 13 2023, 15:34:37) [GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from flower.utils import bugreport; print(bugreport())
flower -> flower:2.0.1 tornado:6.3.3 humanize:4.8.0
software -> celery:5.3.4 (emerald-rush) kombu:5.3.2 py:3.11.4
billiard:4.2.0 py-amqp:5.2.0
platform -> system:Linux arch:64bit
kernel version:5.15.133.1-microsoft-standard-WSL2 imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
deprecated_settings: None
Flower logs:
PostgreSQL is available
2023-12-27T16:13:54.392428992Z LOG_LEVEL is set to: DEBUG
2023-12-27T16:13:56.941536552Z reading envs from local .env/.dev-sample file
2023-12-27T16:13:57.969751554Z Error: No nodes replied within time constraint
2023-12-27T16:13:58.348600138Z Celery workers not available
2023-12-27T16:14:01.624674403Z reading envs from local .env/.dev-sample file
2023-12-27T16:14:02.647502284Z Error: No nodes replied within time constraint
2023-12-27T16:14:02.945831862Z Celery workers not available
2023-12-27T16:14:06.075034301Z reading envs from local .env/.dev-sample file
2023-12-27T16:14:06.098401611Z -> celery@adda0216ba83: OK
2023-12-27T16:14:06.098426286Z pong
2023-12-27T16:14:07.102304098Z
2023-12-27T16:14:07.102352357Z 1 node online.
2023-12-27T16:14:07.413457500Z Celery workers is available
2023-12-27T16:14:08.759771261Z [I 231227 16:14:08 utils:148] Note: NumExpr detected 16 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2023-12-27T16:14:08.759827480Z [I 231227 16:14:08 utils:160] NumExpr defaulting to 8 threads.
2023-12-27T16:14:09.527725149Z reading envs from local .env/.dev-sample file
2023-12-27T16:14:12.088229507Z 27-12-23 16:14:12.087 - INFO: Visit me at http://0.0.0.0:5555 (command.py:168)
2023-12-27T16:14:12.097123388Z 27-12-23 16:14:12.096 - INFO: Broker: redis://redis:6379/0 (command.py:176)
2023-12-27T16:14:12.259837427Z 27-12-23 16:14:12.259 - INFO: Registered tasks:
2023-12-27T16:14:12.259866198Z ['celery.accumulate',
2023-12-27T16:14:12.259870564Z 'celery.backend_cleanup',
2023-12-27T16:14:12.259872604Z 'celery.chain',
2023-12-27T16:14:12.259874574Z 'celery.chord',
2023-12-27T16:14:12.259876489Z 'celery.chord_unlock',
2023-12-27T16:14:12.259878651Z 'celery.chunks',
2023-12-27T16:14:12.259880566Z 'celery.group',
2023-12-27T16:14:12.259882466Z 'celery.map',
2023-12-27T16:14:12.259884382Z 'celery.starmap',
2023-12-27T16:14:12.259919807Z 'project_name.celery.divide',
2023-12-27T16:14:12.259921830Z 'project_name.celery.sleepy_task_test'] (command.py:177)
2023-12-27T16:14:12.272942524Z 27-12-23 16:14:12.272 - INFO: Connected to redis://redis:6379/0 (mixins.py:228)
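For completeness, the "Error: No nodes replied within time constraint" / "pong" lines above appear to come from the worker-availability wait in /start-flower; a minimal sketch of the same check with the Celery control API (assuming the same app name and broker URL as the flower command) is:

# diagnostic sketch: reproduce the worker ping seen in the logs above
# (assumes the project_name app and the redis://redis:6379/0 broker from the flower command)
from celery import Celery

app = Celery("project_name", broker="redis://redis:6379/0")
replies = app.control.inspect(timeout=5).ping()  # e.g. {'celery@adda0216ba83': {'ok': 'pong'}}
if replies:
    print(f"{len(replies)} node(s) online:", replies)
else:
    print("Error: No nodes replied within time constraint")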
The above issue only occurs on Python versions newer than 3.9; is this expected?