
Zero downtime upgrade

Open kamilzzz opened this issue 3 years ago • 32 comments

The docs (https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#upgrading-with-zero-downtime-in-production) mention that by default there is a service interruption during an upgrade, caused by pods being restarted, and link to another article explaining how to fix that.

But when looking at values.yaml I can see that both terminationGracePeriodSeconds and the preStop hook are already set correctly in order to drain connections.

https://github.com/kubernetes/ingress-nginx/blob/a7fb791132a0dee285b47d0041f4f9acf1e7cff8/charts/ingress-nginx/values.yaml#L237

https://github.com/kubernetes/ingress-nginx/blob/a7fb791132a0dee285b47d0041f4f9acf1e7cff8/charts/ingress-nginx/values.yaml#L583-L587
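
For reference, those defaults look roughly like this (paraphrased from the linked values.yaml; exact values can differ between chart versions):

```yaml
controller:
  # The pod is given up to this long to finish draining before it is force-killed.
  terminationGracePeriodSeconds: 300
  lifecycle:
    preStop:
      exec:
        # Asks the controller process to shut down gracefully and waits for it to exit.
        command:
          - /wait-shutdown
```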

Is the documentation a little bit outdated, or is there something else I should set to get near-zero-downtime upgrades?

kamilzzz avatar Mar 04 '21 10:03 kamilzzz

I was actually wondering about the same thing, but when looking at the command which gets executed ("wait-shutdown") and at the corresponding code, I noticed that a SIGTERM signal still gets sent. The linked blog post says that a SIGQUIT signal needs to be used for a graceful shutdown. I wonder if changing the signal sent by the wait-shutdown command would already lead to zero-downtime deployments?

thirdeyenick avatar May 20 '21 10:05 thirdeyenick

@thirdeyenick the SIGTERM is for the nginx controller process, not nginx itself. The nginx controller issues SIGQUIT to the nginx process: https://github.com/kubernetes/ingress-nginx/blob/59a7f51cd4f0c41f214c520ca8b2a502946ed08e/internal/ingress/controller/nginx.go#L397

gamunu avatar May 23 '21 07:05 gamunu

Thank you very much for the information. This means that zero downtime deployments are already implemented, right?

thirdeyenick avatar May 25 '21 07:05 thirdeyenick

This means that zero downtime deployments are already implemented, right?

I'd like to see confirmation from the team, but according to the linked sources it should already be implemented, I think...

Maybe a pull request to remove it from the readme doc is in order, if that's the case.

Baklap4 avatar Jun 11 '21 10:06 Baklap4

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 09 '21 11:09 k8s-triage-robot

/remove-lifecycle stale

jontro avatar Sep 21 '21 22:09 jontro

Does anyone know the answer to @Baklap4's question? It would be great to update the readme if this is the case, especially for everyone deciding whether to run this controller in production environments.

svonliebenstein avatar Oct 11 '21 09:10 svonliebenstein

https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required

longwuyuan avatar Oct 11 '21 10:10 longwuyuan

@longwuyuan I'm not sure this answers the question. According to the readme there is a possibility for disruption when doing an upgrade. However this might not be the case and the doc just needs to be updated.

jontro avatar Oct 11 '21 14:10 jontro

There may be other factors at play. Say, for example, you are exposing ingress-nginx outside the cluster via svc.type=LoadBalancer. If you are also using MetalLB, the migration of the VIP can cause connection issues.

kfox1111 avatar Oct 11 '21 16:10 kfox1111

There are fewer eyes here than on Slack, so I suggest closing this issue and discussing it at kubernetes.slack.com in the ingress-nginx-users channel. There are seasoned developers and users there. This is a vast, complicated topic that cannot be easily triaged on GitHub.

longwuyuan avatar Oct 12 '21 07:10 longwuyuan

Why would this be closed? The question is whether the current documentation is up to date, because it looks like the documentation on zero-downtime upgrades is outdated and the required changes have since been incorporated into the chart.

rouke-broersma avatar Oct 12 '21 10:10 rouke-broersma

I think I listed all the reasons I could think of in the earlier comment, but apologies if that does not seem like a great idea; tracking this in one place would help. This is not a trivial matter, and if one can fathom the intricacies, then an educated guess is also not out of reach. To begin with, a change in an Ingress object's configuration can result in a need to reload the controller pod's nginx.conf. Some more elaborate text is at the link I posted earlier.

Keep this issue open if that seems to work well. Just note that there are many more developers and engineers on Slack, so the likelihood of getting comments on this is higher there.

longwuyuan avatar Oct 12 '21 10:10 longwuyuan

If the current docs are not sufficient, then perhaps they should be amended with the suggestion to carefully consider other facets of upgrades and to consult with experienced engineers on Slack.

rouke-broersma avatar Oct 12 '21 12:10 rouke-broersma

@longwuyuan I have posted on Slack and am happy to discuss there. However, the readme refers to https://medium.com/codecademy-engineering/kubernetes-nginx-and-zero-downtime-in-production-2c910c6a5ed8, and all of its steps already seem to be incorporated into the main chart. There might be other factors in play, but at least that problem is solved.

jontro avatar Oct 12 '21 13:10 jontro

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 10 '22 13:01 k8s-triage-robot

/remove-lifecycle stale

jontro avatar Jan 10 '22 14:01 jontro

/assign @theunrealgeek

strongjz avatar Mar 29 '22 16:03 strongjz

@strongjz: GitHub didn't allow me to assign the following users: theunrealgeek.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

/assign @theunrealgeek

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 29 '22 16:03 k8s-ci-robot

/triage accepted
/priority important-soon

iamNoah1 avatar Apr 12 '22 08:04 iamNoah1

Some information sharing: by default, when the controller receives SIGTERM, it immediately issues a SIGQUIT to the nginx process without waiting (https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/nginx.go#L373). I set --shutdown-grace-period to 15 and the downtime seems to be reduced.
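
With the Helm chart, a minimal sketch of that setting looks like this (assuming the chart's controller.extraArgs pass-through; the value 15 is just what I used, not a recommendation):

```yaml
# Sketch: delay the SIGQUIT to nginx by 15s after the controller receives SIGTERM,
# so load balancers have time to stop sending new connections to this pod.
controller:
  extraArgs:
    shutdown-grace-period: 15
```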

heylongdacoder avatar May 30 '22 06:05 heylongdacoder

Yes @heylongdacoder, and I think this value should default to at least 5s to mitigate common cases. The documentation should indicate that if an external load balancer connects directly to the pods, the value should be increased to allow external draining before nginx is terminated.

mtparet avatar May 30 '22 16:05 mtparet

With an AWS load balancer, this must be 120s according to AWS support. We have had no issues since we set this value. https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2366#issuecomment-1118312709
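
A sketch of that configuration, under the same extraArgs assumption as above (exact numbers depend on your load balancer's deregistration/draining delay):

```yaml
# Sketch for AWS: wait 120s for the target to be deregistered and drained
# before nginx is told to quit. The pod-level grace period must stay well
# above the grace period plus nginx's own drain time, or the pod is killed mid-drain.
controller:
  extraArgs:
    shutdown-grace-period: 120
  terminationGracePeriodSeconds: 300  # chart default; keep it > 120 + drain time
```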

mtparet avatar Jun 01 '22 10:06 mtparet

/help

iamNoah1 avatar Jun 14 '22 16:06 iamNoah1

@iamNoah1: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jun 14 '22 16:06 k8s-ci-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 12 '22 17:09 k8s-triage-robot

/remove-lifecycle stale

jontro avatar Sep 13 '22 08:09 jontro

This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged. Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Deprioritize it with /priority important-longterm or /priority backlog
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Feb 08 '23 00:02 k8s-triage-robot

This issue is currently awaiting triage.

If Ingress contributors determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 08 '23 00:02 k8s-ci-robot

@thirdeyenick the SIGTERM is for the nginx controller process, not nginx itself. The nginx controller issues SIGQUIT to the nginx process (ingress-nginx/internal/ingress/controller/nginx.go, line 397 at 59a7f51: cmd := n.command.ExecCommand("-s", "quit")).

It doesn't look like this actually happens, though. With this wait-shutdown, the container dies immediately while still receiving traffic, which is not graceful at all. We had to change the preStop hook to sleep 30s to make all the upstream errors go away.
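
Roughly, that workaround looks like the sketch below (assuming the controller image ships /bin/sh and that the normal graceful shutdown is still triggered after the sleep; the exact command and duration may differ from what we run):

```yaml
# Sketch: keep serving for 30s after the endpoint is removed from the Service,
# then trigger the usual graceful shutdown via /wait-shutdown.
controller:
  lifecycle:
    preStop:
      exec:
        command:
          - /bin/sh
          - -c
          - sleep 30; /wait-shutdown
```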

Ghilteras avatar Jul 13 '23 17:07 Ghilteras