
How to upgrade the Airflow cluster while keeping my Airflow jobs running with no downtime

Open zeddit opened this issue 1 year ago • 3 comments

Motivation

It's a common need to upgrade Airflow, e.g. installing a new third-party provider package or upgrading the Airflow major version. Both of these involve changing the Airflow image running on k8s, which requires shutting down the related pods and starting new ones with the new version.

These shutdowns stop the jobs running on top of Airflow, and they may not be recovered correctly, because a run can be interrupted in the middle of a DAG and still end up marked as success.

How can I make a helm upgrade avoid affecting running jobs and keep the results of job executions correct?

Implementation

No response

Are you willing & able to help?

  • [ ] I am able to submit a PR!
  • [ ] I can help test the feature!

zeddit avatar Mar 22 '24 08:03 zeddit

@zeddit Because changing the Airflow image will always require a restart of all worker pods, your only options are something like:

  1. design your DAGs so they are able to recover safely if they are interrupted, as sketched below (which is a good idea anyway, because servers crash, and other things can happen, even when you are not upgrading)
  2. if you want to prevent Kubernetes from restarting a worker that has any running tasks, you can use the workers.celery.gracefullTermination values, as sketched below, but this only affects restarts caused by a StatefulSet rollout. (Also, some Kubernetes providers put a limit on how long gracefullTerminationPeriod can be, e.g. GKE limits it to 10 minutes.)
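
To illustrate option 1, here is a minimal sketch, assuming a recent Airflow 2.x; `run_sql`, the `daily_metrics` table, and the SQL itself are hypothetical placeholders, not anything from this chart. The pattern is retries plus idempotent writes keyed on the logical date, so a task killed by a pod restart can safely run again:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_sql(statement: str) -> None:
    # Placeholder for a real database client, defined here only so the sketch runs.
    print(f"would execute: {statement}")


def load_partition(ds, **_):
    # Idempotent write: replace the whole partition for this logical date, so a
    # task that is killed mid-write can simply be re-run from scratch.
    run_sql(f"DELETE FROM daily_metrics WHERE ds = '{ds}'")
    run_sql(f"INSERT INTO daily_metrics SELECT ... WHERE ds = '{ds}'")


with DAG(
    dag_id="interruption_safe_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        # If a worker pod is terminated mid-task, the task fails and these
        # retries re-run it on a new worker.
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    PythonOperator(task_id="load_partition", python_callable=load_partition)
```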
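
And for option 2, a minimal values.yaml sketch; the value names come from this chart, but 600 is an arbitrary example, and (as noted above) this only protects workers during a StatefulSet rollout:

```yaml
workers:
  celery:
    ## wait for running tasks to finish before terminating the worker
    gracefullTermination: true
    ## upper bound (seconds) on how long to wait before force-killing the pod;
    ## some providers cap this, e.g. GKE at 10 minutes
    gracefullTerminationPeriod: 600
```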

thesuperzapper avatar Apr 24 '24 18:04 thesuperzapper

@thesuperzapper I really appreciate your reply, and I learned a lot.

Could I understand it like this: if I use the KubernetesExecutor, only worker pods will be restarted, and they may be terminated and restarted at any point in time, while other pods like the database, scheduler, and UI will not restart if I update the Airflow image (e.g. apache-airflow:python3.8)? And is this the regular update path?

What if I need to upgrade the scheduler and other components? Will that break the correctness of jobs if I follow the DAG principles stated above? Many thanks.

zeddit avatar May 14 '24 05:05 zeddit

I have the same concern about this.

ahdeanlau avatar Aug 27 '24 04:08 ahdeanlau

This issue has been automatically marked as stale because it has not had activity in 60 days. It will be closed in 7 days if no further activity occurs.

Thank you for your contributions.


Issues never become stale if any of the following is true:

  1. they are added to a Project
  2. they are added to a Milestone
  3. they have the lifecycle/frozen label

stale[bot] avatar Jan 31 '25 21:01 stale[bot]