django-celery-beat
expire_seconds actually creates a new task with a new expire date
Summary
expire_seconds creates a new task anyway and never stops or deletes the periodic task.
- Celery Version: v5.0.5
- Celery-Beat Version: 2.2.0
Exact steps to reproduce the issue:
Create a periodic task with expire_seconds set:
import json
from django_celery_beat.models import PeriodicTask

# 'schedule' is an IntervalSchedule instance (here, every 10 seconds, matching the log below)
self.periodic_task = PeriodicTask.objects.create(
    interval=schedule,
    expire_seconds=10,
    name='job_status_check_request_{}'.format(self.id),
    task='tasks.dummy_task',
    args=json.dumps([self.id]))
Detailed information
Here is the Celery output:
INFO 2021-03-26 15:00:54,929 schedulers DatabaseScheduler: Schedule changed.
INFO 2021-03-26 15:01:04,947 beat Scheduler: Sending due task job_status_check_request_8 (service_catalog.tasks.check_tower_job_status_task)
INFO 2021-03-26 15:01:04,953 strategy Received task: service_catalog.tasks.check_tower_job_status_task[0ac4526a-8a2b-497a-abb5-58a436d10086] expires:[2021-03-26 15:01:34.947609+00:00]
INFO 2021-03-26 15:01:04,955 tasks [check_tower_job_status_task] check Tower job status for request id: 8
INFO 2021-03-26 15:01:07,475 trace Task service_catalog.tasks.check_tower_job_status_task[0ac4526a-8a2b-497a-abb5-58a436d10086] succeeded in 2.520178664000923s: None
INFO 2021-03-26 15:01:14,941 beat Scheduler: Sending due task job_status_check_request_8 (service_catalog.tasks.check_tower_job_status_task)
INFO 2021-03-26 15:01:14,942 strategy Received task: service_catalog.tasks.check_tower_job_status_task[2f067860-af77-406d-ba80-477b177ae1b8] expires:[2021-03-26 15:01:44.941405+00:00]
INFO 2021-03-26 15:01:14,943 tasks [check_tower_job_status_task] check Tower job status for request id: 8
INFO 2021-03-26 15:01:17,441 trace Task service_catalog.tasks.check_tower_job_status_task[2f067860-af77-406d-ba80-477b177ae1b8] succeeded in 2.4983104269995238s: None
INFO 2021-03-26 15:01:24,942 beat Scheduler: Sending due task job_status_check_request_8 (service_catalog.tasks.check_tower_job_status_task)
INFO 2021-03-26 15:01:24,944 strategy Received task: service_catalog.tasks.check_tower_job_status_task[1c48ea36-41df-4ca4-95e5-6222f109a767] expires:[2021-03-26 15:01:54.942729+00:00]
INFO 2021-03-26 15:01:24,945 tasks [check_tower_job_status_task] check Tower job status for request id: 8
INFO 2021-03-26 15:01:27,413 trace Task service_catalog.tasks.check_tower_job_status_task[1c48ea36-41df-4ca4-95e5-6222f109a767] succeeded in 2.4681766869998683s: None
We can observe that a new task is sent to Celery with a new expire date each time:
- expires:[2021-03-26 15:01:34.947609+00:00]
- expires:[2021-03-26 15:01:44.941405+00:00]
- expires:[2021-03-26 15:01:54.942729+00:00]
According to the documentation, the field is "Timedelta with seconds which the schedule will no longer trigger the task to run".
Maybe I missed something.
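To illustrate what the logs suggest is happening (just a sketch, not the library's actual code): the expiry appears to be recomputed relative to each send, so every run gets a fresh expires and the schedule entry itself never stops:

from datetime import datetime, timedelta, timezone

def expiry_for_this_run(expire_seconds):
    # Illustrative only: the expiry is recomputed relative to the moment the
    # task is sent, which is why every run in the log above shows a new
    # 'expires' timestamp while beat keeps dispatching the task.
    return datetime.now(timezone.utc) + timedelta(seconds=expire_seconds)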
As a workaround I tried setting a fixed expires instead:
expires=datetime.utcnow() + timedelta(seconds=self.operation.process_timeout_second)
But it seems the task is only "disabled" (the worker keeps receiving it and discarding it as revoked) and the periodic task is not removed:
INFO 2021-03-26 14:48:54,625 strategy Received task: service_catalog.tasks.check_tower_job_status_task[f03ca55b-0b4e-4624-859b-0c51714ed4c9] expires:[2021-03-26 14:48:05.905771+00:00]
INFO 2021-03-26 14:48:54,625 request Discarding revoked task: service_catalog.tasks.check_tower_job_status_task[f03ca55b-0b4e-4624-859b-0c51714ed4c9]
Maybe I need to play with signals here.
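Another idea I'm considering instead of expires: delete the PeriodicTask from inside the task itself once the Tower job reaches a terminal state. Roughly like this (poll_job_is_finished is just a placeholder for the real status check):

from celery import shared_task
from django_celery_beat.models import PeriodicTask

@shared_task
def check_tower_job_status_task(request_id):
    finished = poll_job_is_finished(request_id)  # placeholder, not a real API
    if finished:
        # Drop the schedule entry so beat stops sending new runs.
        PeriodicTask.objects.filter(
            name='job_status_check_request_{}'.format(request_id)
        ).delete()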
Were you able to get past this?
celery v5.2.3 django-celery-beat v2.2.1
Same story... If I set expires:
[2022-03-16 11:08:00,218: INFO/MainProcess] Task check_task_done[18191525-5e5f-4038-bde5-74ab4dc6fe12] received
[2022-03-16 11:08:00,218: INFO/MainProcess] Discarding revoked task: check_task_done[18191525-5e5f-4038-bde5-74ab4dc6fe12]
[2022-03-16 11:08:05,230: INFO/MainProcess] Task check_task_done[521a1970-116a-4c02-b551-be5eaa89cec7] received
[2022-03-16 11:08:05,230: INFO/MainProcess] Discarding revoked task: check_task_done[521a1970-116a-4c02-b551-be5eaa89cec7]
...
If I set expire_seconds:
[2022-03-16 11:11:03,727: INFO/ForkPoolWorker-4] Task check_task_done[7ef90178-985f-40f5-874d-6d5a1bd126e4] succeeded in 0.029972598000313155s: None
[2022-03-16 11:11:08,708: INFO/MainProcess] Task check_task_done[8da98709-a0c8-41fd-bdd3-f5a8ad902990] received
[2022-03-16 11:11:08,734: INFO/ForkPoolWorker-4] Task check_task_done[8da98709-a0c8-41fd-bdd3-f5a8ad902990] succeeded in 0.02302194900403265s: None
[2022-03-16 11:11:13,722: INFO/MainProcess] Task check_task_done[08cd2c60-2c15-4cbf-a56e-c15f2f134203] received
...
expires was set to 2022-03-16 12:39:30
[2022-03-16 12:39:25,879: INFO/MainProcess] Scheduler: Sending due task Reservation 29 (check_task_done)
[2022-03-16 12:39:30,891: INFO/MainProcess] Scheduler: Sending due task Reservation 29 (check_task_done)
[2022-03-16 12:39:30,892: WARNING/MainProcess] 31030017-80cf-47e2-97f3-2417f6ef271b has an expiration date in the past (0.892144s ago).
We assume this is intended and so we have set the expiration date to 0 instead.
According to RabbitMQ's documentation:
"Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately."
If this was unintended, please check the code which published this task.
[2022-03-16 12:39:35,901: INFO/MainProcess] Scheduler: Sending due task Reservation 29 (check_task_done)
...
Hi. We have the same problem with django-celery-beat==2.3.0 and celery[redis]==5.2.7. Is there any workaround, like setting a global maximum expiration time?
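For now the best we came up with is a small cleanup task that disables entries whose expires has already passed, so beat stops dispatching them. Only a sketch (disable_expired_periodic_tasks is our own helper, not part of the library):

from celery import shared_task
from django.utils import timezone
from django_celery_beat.models import PeriodicTask

@shared_task
def disable_expired_periodic_tasks():
    # Disable entries whose fixed 'expires' datetime is already in the past.
    # Each row is saved individually so django-celery-beat picks up the change.
    for task in PeriodicTask.objects.filter(enabled=True, expires__lt=timezone.now()):
        task.enabled = False
        task.save()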