ingress-nginx
Zero downtime upgrade
The docs (https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#upgrading-with-zero-downtime-in-production) mention that, by default, an upgrade causes a service interruption because the pods are restarted, and they link to an article explaining how to fix that.
But when looking at values.yaml, I can see that both terminationGracePeriodSeconds and the preStop hook are already set correctly to drain connections:
https://github.com/kubernetes/ingress-nginx/blob/a7fb791132a0dee285b47d0041f4f9acf1e7cff8/charts/ingress-nginx/values.yaml#L237
https://github.com/kubernetes/ingress-nginx/blob/a7fb791132a0dee285b47d0041f4f9acf1e7cff8/charts/ingress-nginx/values.yaml#L583-L587
Is the documentation a bit outdated, or is there something else I should set to get near-zero-downtime upgrades?
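For reference, the relevant defaults in the linked values.yaml look roughly like the following; this is an abbreviated sketch, and the exact values may differ between chart versions.

```yaml
controller:
  # preStop hook runs /wait-shutdown so the controller starts draining
  # connections before the pod is terminated.
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  # Gives nginx time to finish in-flight requests before the kubelet
  # sends SIGKILL.
  terminationGracePeriodSeconds: 300
```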
I was actually wondering about the same thing, but when looking at the command which gets executed (wait-shutdown) and at the corresponding code, I noticed that a SIGTERM signal still gets sent. The linked blog post says that a SIGQUIT signal needs to be used for a graceful shutdown. I wonder if changing the signal sent by the wait-shutdown command would already lead to zero downtime deployments?
@thirdeyenick the SIGTERM is for the nginx controller process, not nginx itself. The controller issues SIGQUIT to the nginx process. https://github.com/kubernetes/ingress-nginx/blob/59a7f51cd4f0c41f214c520ca8b2a502946ed08e/internal/ingress/controller/nginx.go#L397
Thank you very much for the information. This means that zero downtime deployments are already implemented, right?
> This means that zero downtime deployments are already implemented, right?
I'd like to see confirmation from the team, but according to the linked sources it should already be implemented, I think.
Maybe a pull request to remove it from the README doc is in order, if that's the case.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Does anyone know the answers to @Baklap4's questions? It would be great to update the README if this is the case, especially for everyone deciding whether to run this controller in production environments.
https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required
@longwuyuan I'm not sure this answers the question. According to the README there is a possibility of disruption when doing an upgrade. However, this might not be the case, and the doc may just need to be updated.
There may be other factors at play. Say, for example, you are exposing ingress-nginx outside the cluster via a Service of type LoadBalancer. If you are also using MetalLB, the migration of the VIP can cause connection issues.
There are fewer eyes here than on Slack, so I suggest closing this issue and discussing it at kubernetes.slack.com in the ingress-nginx-users channel. There are seasoned developers and users there. This is a vast, complicated topic that cannot be easily triaged on GitHub.
Why would this be closed? The question is whether the current documentation is up to date, because it looks like the documentation on zero downtime upgrades is outdated and the required changes have since been incorporated into the chart.
I think I listed all the reasons I could think of in the earlier comment, but apologies if that does not seem like a great idea; tracking this in one place would help. This is not a trivial matter, and if one can fathom the intricacies, then an educated guess is also not out of reach. To begin with, a change in an Ingress object's configuration can require a reload of the controller pod's nginx.conf. More elaborate text is at the link I posted earlier.
Keep this issue open if that seems to work well. Just note that there are many more developers and engineers on Slack, so the likelihood of getting comments on this is higher there.
If the current docs are not sufficient, then perhaps they should be amended with the suggestion to carefully consider other facets of upgrades and to consult experienced engineers on Slack.
@longwuyuan I have posted on Slack and am happy to discuss there. However, the README refers to https://medium.com/codecademy-engineering/kubernetes-nginx-and-zero-downtime-in-production-2c910c6a5ed8, and all of its steps already seem to be incorporated into the main chart. There might be other factors in play, but at least that problem is solved.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign @theunrealgeek
@strongjz: GitHub didn't allow me to assign the following users: theunrealgeek.
Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @theunrealgeek
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage accepted
/priority important-soon
Some information sharing: by default, when the controller receives SIGTERM, it directly issues a SIGQUIT to the nginx process without waiting (https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/nginx.go#L373). I set --shutdown-grace-period to 15 and the downtime seems to be reduced.
Yes @heylongdacoder, and I think this value should default to at least 5s to mitigate common cases. The documentation should indicate that if an external load balancer is connected directly to the pods, the value should be increased to allow external draining before terminating nginx.
With an AWS load balancer, this must be 120s according to AWS support. We have had no issues since we set this value. https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2366#issuecomment-1118312709
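For anyone wanting to try this through the Helm chart, a minimal sketch using the chart's controller.extraArgs map; the 120s figure is the AWS guidance quoted above, and terminationGracePeriodSeconds needs to stay comfortably above the grace period.

```yaml
controller:
  extraArgs:
    # Delay between the controller receiving SIGTERM and it sending
    # SIGQUIT to nginx, giving an external load balancer time to
    # deregister the pod (AWS support reportedly recommends 120s).
    shutdown-grace-period: "120"
  # Must exceed shutdown-grace-period plus the time nginx needs to
  # drain its remaining connections.
  terminationGracePeriodSeconds: 300
```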
/help
@iamNoah1: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
> @thirdeyenick the SIGTERM is for the nginx controller process, not nginx itself. The controller issues SIGQUIT to the nginx process (ingress-nginx/internal/ingress/controller/nginx.go, line 397 in 59a7f51: cmd := n.command.ExecCommand("-s", "quit")).
It doesn't look like this happens, though. With this wait-shutdown hook the container dies immediately while still receiving traffic, which is not graceful at all; we had to change the preStop hook to sleep 30s to make all the upstream errors go away.
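For reference, a sketch of that kind of preStop override via the chart values; this assumes the controller image ships a shell at /bin/sh, and it chains the sleep with the default /wait-shutdown hook rather than replacing it outright.

```yaml
controller:
  lifecycle:
    preStop:
      exec:
        command:
          - /bin/sh
          - -c
          # Keep accepting traffic for 30s so upstream load balancers
          # stop routing to this pod, then trigger the controller's
          # graceful shutdown.
          - sleep 30; /wait-shutdown
```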