external-dns
fix: `TXT` record when killed at the wrong time
Why?
TXT entries are special because external-dns uses them to "lock" record ownership.
Currently, a DNS change looks like this:
time="2023-12-16T16:50:40Z" level=info msg="Changing record." action=CREATE record=oops-test... ttl=1 type=CNAME zone=ff...
time="2023-12-16T16:50:41Z" level=info msg="Changing record." action=CREATE record=edns-cf-oops-test... ttl=1 type=TXT zone=ff..
time="2023-12-16T16:50:41Z" level=info msg="Changing record." action=CREATE record=edns-cf-cname-oops-test... ttl=1 type=TXT zone=ff...
Most of the time this is fine, but in some cases things go wrong and the external-dns pod can be killed at the wrong moment.
That is annoying, because if only the CNAME record was created, external-dns can no longer update or remove it: without the TXT entry it does not own the record.
How?
We actually have to handle two cases: creation and deletion.
If we create the TXT records as the first step during creation, and remove the TXT records as the last step during deletion, we can avoid this problem safely.
Maybe this could also be done in the other providers.
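The idea above can be sketched as a reordering of the planned changes before they are applied. This is only an illustration, not the actual external-dns code: the `Change` type and `sortChanges` function below are hypothetical, simplified stand-ins.

```go
package main

import (
	"fmt"
	"sort"
)

// Change is a hypothetical, simplified stand-in for a planned DNS change.
type Change struct {
	Action string // "CREATE" or "DELETE"
	Type   string // "TXT", "CNAME", "A", ...
	Name   string
}

// sortChanges orders changes so TXT ownership records are created first
// and deleted last. If the process is killed mid-apply, we are left
// either with an extra TXT record (harmless, cleaned up on the next run)
// or with a TXT record still guarding an existing data record, but never
// with an unowned data record that external-dns refuses to touch.
func sortChanges(changes []Change) {
	rank := func(c Change) int {
		switch {
		case c.Action == "CREATE" && c.Type == "TXT":
			return 0 // TXT creates go first
		case c.Action == "DELETE" && c.Type == "TXT":
			return 2 // TXT deletes go last
		default:
			return 1 // everything else in between
		}
	}
	// Stable sort keeps the original relative order within each rank.
	sort.SliceStable(changes, func(i, j int) bool {
		return rank(changes[i]) < rank(changes[j])
	})
}

func main() {
	changes := []Change{
		{"CREATE", "CNAME", "oops-test"},
		{"DELETE", "TXT", "edns-cf-old"},
		{"CREATE", "TXT", "edns-cf-oops-test"},
		{"DELETE", "CNAME", "old"},
	}
	sortChanges(changes)
	for _, c := range changes {
		fmt.Println(c.Action, c.Type, c.Name)
	}
}
```

With this ordering, a crash after any prefix of the change list leaves the zone in a state external-dns can still recover from on the next reconcile.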
Description
Fixes #ISSUE
Checklist
- [ ] Unit tests updated
- [ ] End user documentation updated
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign sheerun for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @PascalBourdier. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello @PascalBourdier,
Thanks for this interesting PR! In order to keep this issue really fixed, we'll need a test case that fails without this change on the provider.
I don't know how to add this test case. Could you help me?
@PascalBourdier I think the bigger problem is that you fix it only in one provider; maybe we should fix it in all of them at once. Did you ever observe such an issue, or is it actually a guess?
In fact, it is a first version, and you're right, we should do it for all providers. I observed it on my side (on Cloudflare, but not on AWS).
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/retitle fix: TXT record when killed at the wrong time