Tasks are in queued state for a longer time and executor slots are exhausted often

Open paramjeet01 opened this issue 10 months ago • 17 comments

Apache Airflow version

Other Airflow 2 version (please specify below)

If "Other Airflow 2 version" selected, which one?

2.8.3

What happened?

Tasks stay in the queued state for longer than expected. This worked perfectly in 2.3.3.

What you think should happen instead?

The tasks should move to the running state instead of remaining queued.

How to reproduce

Spin up more than 150 DAG runs in parallel; in Airflow 2.8.3 the tasks get queued instead of executing.

Operating System

Amazon Linux 2

Versions of Apache Airflow Providers

No response

Deployment

Official Apache Airflow Helm Chart

Deployment details

No response

Anything else?

No response

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

Code of Conduct

paramjeet01 avatar Apr 12 '24 15:04 paramjeet01

Without any logs, errors, metrics or details it is impossible to (1) understand your problem and (2) fix anything.

Can you please share more details?

jscheffl avatar Apr 12 '24 18:04 jscheffl

Apologies, I'm relatively new to Airflow. We've checked the scheduler logs thoroughly, and everything seems to be functioning correctly without any errors. Additionally, the scheduler pods are operating within normal CPU and memory limits. Our database (RDS) doesn't indicate any breaches either. Currently, we're running a DAG with 150 parallel DAG runs. However, a significant portion of tasks remain in a queued state for an extended period: about 140 tasks are queued, while only 39 are actively running. I've already reviewed the configurations for max_active_tasks_per_dag and max_active_runs_per_dag, and they appear to be properly set. We did not face this issue in 2.3.3.

paramjeet01 avatar Apr 12 '24 18:04 paramjeet01

Can you try increasing [scheduler] max_tis_per_query to 512? In one performance-debugging exercise we found that increasing it worked better, but it might depend on the environment.
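
For reference, a quick way to confirm that a running component actually picked up the new value (assuming the standard Airflow config API; with the Helm chart the same option can also be set through the AIRFLOW__SCHEDULER__MAX_TIS_PER_QUERY environment variable):

# Run inside the scheduler pod, e.g. via `python -c ...`, or use the
# equivalent CLI call: `airflow config get-value scheduler max_tis_per_query`.
from airflow.configuration import conf

print(conf.getint("scheduler", "max_tis_per_query"))  # expect 512 after the change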

ephraimbuddy avatar Apr 12 '24 19:04 ephraimbuddy

I have updated the config map with max_tis_per_query = 512 and redeployed the scheduler. I will monitor for some time and let you know. Thanks for the quick response.

paramjeet01 avatar Apr 12 '24 19:04 paramjeet01

@ephraimbuddy, the above config has improved task-scheduling performance, and the Gantt view shows that task queue time is lower than before. Also, could you please share the performance-tuning documentation? That would be really nice of you.

paramjeet01 avatar Apr 13 '24 06:04 paramjeet01

@paramjeet01 This might be helpful

https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/scheduler.html#fine-tuning-your-scheduler-performance

tirkarthi avatar Apr 13 '24 09:04 tirkarthi

@ephraimbuddy, I also saw that the DAGs were stuck in the scheduled state; after restarting the scheduler, everything works fine now. I found that the executor was showing no open slots available (metrics screenshot attached).

paramjeet01 avatar Apr 13 '24 09:04 paramjeet01

This is a similar issue to #36998 and #36478.

paramjeet01 avatar Apr 14 '24 05:04 paramjeet01

We hit the same issue twice. Same observation: it happened when executor open slots < 0 (metrics screenshot attached). cc @paramjeet01
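
For context, a simplified sketch of why a negative open-slots reading points at a slot leak (illustrative only, not the exact Airflow executor source; parallelism and running stand in for the executor's internal state):

# The executor.open_slots gauge is roughly parallelism minus the number of
# task-instance keys the executor still believes are running; it can only go
# negative if finished tasks are never released from that internal set.
parallelism = 32            # e.g. [core] parallelism
running: set[str] = set()   # keys the executor tracks as currently running

def open_slots() -> int:
    return parallelism - len(running)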

changqian9 avatar Apr 14 '24 06:04 changqian9

@jscheffl, can you remove the pending-response label?

paramjeet01 avatar Apr 15 '24 12:04 paramjeet01

After reviewing various GitHub and Stack Overflow discussions, I've made updates to the following configuration options and migrated to Airflow version 2.7.2 with apache-airflow-providers-cncf-kubernetes version 8.0.0:

[scheduler]
task_queued_timeout = 90
max_dagruns_per_loop_to_schedule = 128
max_dagruns_to_create_per_loop = 128
max_tis_per_query = 1024

Disabled gitsync. Additionally, scaling the scheduler to 8 replicas has notably improved performance. The executor-slot exhaustion was resolved by raising max_tis_per_query to the maximum value. Sorry, I couldn't find the root cause of the issue, but I hope this helps.

paramjeet01 avatar Apr 16 '24 16:04 paramjeet01

After observing for some time, we encountered instances where the executor open slots approached negative values, leading to tasks becoming stuck in the scheduled state. Restarting all the scheduler pods resolved this on Airflow v2.8.3 with apache-airflow-providers-cncf-kubernetes v8.0.0 (metrics screenshot attached).

paramjeet01 avatar Apr 17 '24 15:04 paramjeet01

We have also observed that pods are not cleaned up after task completion and all the pods are stuck in the SUCCEEDED state.

paramjeet01 avatar Apr 18 '24 09:04 paramjeet01

Sorry, the above comment was a false positive. We customize our KPO (KubernetesPodOperator) and missed adding on_finish_action, so the pods were stuck in the SUCCEEDED state. After adding it (see the sketch below), all the pods are removed properly. We were also able to mitigate the executor-slot leak by adding a cronjob that restarts our schedulers periodically.
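
For anyone hitting the same thing, a minimal sketch of that fix (DAG id, task name, and image are placeholders; parameter spelling as in apache-airflow-providers-cncf-kubernetes 8.x):

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(dag_id="kpo_cleanup_example", start_date=datetime(2024, 1, 1), schedule=None):
    KubernetesPodOperator(
        task_id="example_task",
        name="example-pod",
        image="busybox:1.36",
        cmds=["sh", "-c", "echo done"],
        on_finish_action="delete_pod",  # remove the worker pod once the task finishes
    )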

paramjeet01 avatar Apr 24 '24 19:04 paramjeet01

@paramjeet01 You can look at the Airflow num_runs configuration parameter to restart the scheduler container based on your needs. https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html

dirrao avatar May 11 '24 04:05 dirrao

This issue is related to the watcher not being able to scale and process events on time, which leads to many completed pods accumulating over time. Related: https://github.com/apache/airflow/issues/22612

dirrao avatar May 11 '24 04:05 dirrao

@dirrao, the purpose of the Airflow num_runs configuration parameter changed a while ago AFAIK, and it cannot be used for restarting the scheduler. run_duration, which was previously used for restarting the scheduler, has also been removed. https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#num-runs https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#remove-run-duration

paramjeet01 avatar May 11 '24 05:05 paramjeet01

If I understood this correctly, the performance issues with tasks stuck in the queued state were mitigated by adjusting max_tis_per_query and scaling scheduler replicas, while @paramjeet01 used periodic restarts of all scheduler pods as a temporary workaround for the executor-slot leak.

Related Issues: #36998, #22612

sunank200 avatar May 31 '24 07:05 sunank200

Can anyone try this patch https://github.com/apache/airflow/pull/40183 for the scheduler restarting issue?

ephraimbuddy avatar Jun 13 '24 09:06 ephraimbuddy