Ignore endpoints for ingresses that don't have status.LoadBalancer.Ingress defined
Description
If an ingress has a hostname defined but no target to point it to, external-dns should skip it instead of deleting the record. We had a failing ingress controller that removed the status from the ingress object when it got OOMKilled. This resulted in external-dns deleting DNS entries even though the load balancer address had not changed.
Fixes #2677
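For reviewers skimming the change, here is a minimal standalone sketch of the intended guard (not the actual external-dns source layout; `targetsFromIngress` is a hypothetical helper introduced only for illustration):

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
)

// targetsFromIngress is a hypothetical helper sketching the guard this PR
// proposes: if an Ingress publishes no status.loadBalancer.ingress entries,
// return no targets so the caller skips the ingress, rather than emitting an
// empty endpoint that would delete the existing DNS record.
func targetsFromIngress(ing *networkingv1.Ingress) []string {
	if len(ing.Status.LoadBalancer.Ingress) == 0 {
		// No load balancer target published yet, or the controller wiped the
		// status (e.g. after an OOMKill): leave existing records untouched.
		fmt.Printf("ingress %s/%s has no status targets; skipping\n", ing.Namespace, ing.Name)
		return nil
	}
	targets := make([]string, 0, len(ing.Status.LoadBalancer.Ingress))
	for _, lb := range ing.Status.LoadBalancer.Ingress {
		if lb.IP != "" {
			targets = append(targets, lb.IP)
		}
		if lb.Hostname != "" {
			targets = append(targets, lb.Hostname)
		}
	}
	return targets
}

func main() {
	// An ingress whose controller has not (or no longer) set a status target.
	ing := &networkingv1.Ingress{}
	ing.Namespace, ing.Name = "default", "web"
	fmt.Println(targetsFromIngress(ing)) // prints the skip message, then []
}
```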
Checklist
- [x] Unit tests updated
- [ ] End user documentation updated
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: jadolg / name: Jorge Alberto Díaz Orozco (Akiel) (2e5a46c9cd5d4fd197f349dedacec6428daba393, 63e9034b2488e135fe6b0b1487c62fa06ecc62f5, dbb416eb492705c43fbbb8c672e685ed83ee8ed3)
Welcome @jadolg!
It looks like this is your first PR to kubernetes-sigs/external-dns 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/external-dns has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
/assign @njuettner
Happy someone finally found the bug which has affected us for quite a long time!
It took us some days to figure out what was going on, but given the impact it was well worth the effort. For us it resulted in 15-minute outages for all services on the cluster, because our DNS TTL was 15 minutes: all DNS entries got deleted and recreated when this happened.
Luckily the cluster is not fully in production yet.
My guess is that the upsert-only policy (--policy=upsert-only) was added as a mitigation for this bug?
Could we get another review, please?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jadolg, kdomanski. To complete the pull request process, please ask for approval from njuettner after the PR has been reviewed.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/approve cancel
I would say it works as intended. @jadolg why do you think it's external-dns that should work around an ingress controller bug?
This makes it especially resilient. It would mean that if the ingress controller misbehaves for whatever reason (which already happened to us), external-dns will not go on deleting records. We have already patched the ingress controller for the specific error we found, but that doesn't guarantee the same situation won't happen again with another root cause, and this change would help us be prepared for it. It basically prepares the system for a failure scenario so it "acts cool" instead of compounding the error.
Your change breaks migration cases, and we can't save every bug caused by other controllers; IMO we should not do this. For me it's a clear architectural no-go.
Let me give an example. If you want to migrate traffic from cluster A to cluster B (say from EKS to GKE), you would create cluster B and populate the records from B; maybe you want to keep DNS ownership in C. You want the DNS record pointing to A to be deleted, but the data plane in A should not just drop traffic. This is a "we want to have" case and reflects exactly the state of the objects you show. This is why I argue it's designed like this by intention.
I agree with @szuecs here: we shouldn't be supporting new flags and features for something that results from issues with a controller. We already support upsert-only mode, which is what I would recommend using if you don't trust components of your infrastructure enough to do deletions.
Upsert-only mode would create a mess in the DNS: constant deployments would end up producing an almost unmaintainable list of records.
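For context on the trade-off discussed here, a sketch of the existing mitigation as it might appear in a deployment's container args (provider, registry settings, and owner id below are placeholders, not part of this PR):

```yaml
# Sketch: --policy=upsert-only makes external-dns create and update records
# but never delete them, so records for removed ingresses linger behind
# (the "unmaintainable list" concern above).
args:
  - --source=ingress
  - --provider=aws          # placeholder provider
  - --registry=txt
  - --txt-owner-id=my-cluster   # placeholder owner id
  - --policy=upsert-only
```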
In this migration example, I think the source of truth should be the ingress configuration and not its status. The strategy for moving traffic from A to B should not be deleting the status of the ingress.
I think there's a clear statement by maintainers who think differently. It's a bug in the ingress controllers that orchestrate us incorrectly, and you can use upsert-only to circumvent your stated problem. The contract in external-dns is spec to status.
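To illustrate the spec-to-status contract mentioned above, a sketch of an ingress object (all names and addresses are placeholders): the record name comes from the rule host in the spec, while the record target comes from the status block the controller publishes.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                     # placeholder name
spec:
  rules:
    - host: app.example.com     # record name comes from the spec
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10        # record target comes from the status
```

If the controller wipes status.loadBalancer.ingress, the target disappears while the hostname in the spec remains, which is exactly the scenario this PR is about.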
@jadolg: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.