TXT records with multiple targets are not handled properly
What happened:
I have multiple TXT records for one of my domains, and some of them are not managed by the external-dns TXT registry. This results in TXT endpoints with several targets coming from the DNS provider, but the TXT registry just takes the first target:
https://github.com/kubernetes-sigs/external-dns/blob/master/registry/txt.go#L107
This results in either:
- an A record not updated by external-dns, because it fails to "see" the TXT record it looks for when the managed target is not the first one in the list, or
- a TXT record unrelated to external-dns being removed by external-dns, when the external-dns-managed TXT target is the first one in the list.
What you expected to happen: The TXT registry should iterate over all TXT targets instead of just picking the first one, as sketched below.
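A minimal sketch of that behavior in Go (the project's language); the heritage constant and the string matching below are simplified stand-ins for the real label parsing in registry/txt.go, not the project's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// heritage is the marker external-dns embeds in the TXT records it owns.
const heritage = "heritage=external-dns"

// findManagedTarget scans every TXT target instead of assuming the
// external-dns-owned value is the first one (the behavior reported above).
func findManagedTarget(targets []string) (string, bool) {
	for _, t := range targets {
		if strings.Contains(strings.Trim(t, `"`), heritage) {
			return t, true
		}
	}
	return "", false
}

func main() {
	// A record set where the external-dns target is deliberately NOT first:
	targets := []string{
		`"v=spf1 a mx ~all"`,
		`"google-site-verification=pKt0..."`,
		`"heritage=external-dns,external-dns/owner=default"`,
	}
	if t, ok := findManagedTarget(targets); ok {
		fmt.Println("managed target:", t)
	} else {
		fmt.Println("no external-dns-managed target found")
	}
}
```

With a scan like this, the position of the managed value among the record's targets no longer matters.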
How to reproduce it (as minimally and precisely as possible):
Create a TXT record in addition to the one created by external-dns.
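To confirm the zone is in this state, a quick lookup should return several TXT strings for the same name; example.com below is a placeholder for the affected domain:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// example.com is a placeholder for the affected domain; in the buggy
	// scenario the answer contains both the external-dns heritage value
	// and the unrelated, manually created values.
	txts, err := net.LookupTXT("example.com")
	if err != nil {
		panic(err)
	}
	for i, t := range txts {
		fmt.Printf("target %d: %q\n", i, t)
	}
}
```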
Environment:
- External-DNS version (use external-dns --version): 0.11.1
- DNS provider: DigitalOcean
- Others:
I think I see this as well in AWS.
I have a zone with multiple, existing TXT entries on the root domain:

```json
{
    "Name": "example.com.",
    "Type": "TXT",
    "TTL": 86400,
    "ResourceRecords": [
        { "Value": "\"v=spf1 a mx a:completeupdates.com ~all\"" },
        { "Value": "\"google-site-verification=pKt0HRP...aomM\"" },
        { "Value": "\"google-site-verification=wsObL_...3gWME-Bj5E\"" }
    ]
}
```
and in the updates it's trying to make, I get this in the log:

```
time="2022-07-30T00:59:57Z" level=info msg="Desired change: CREATE example.com A [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=info msg="Desired change: CREATE example.com TXT [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=error msg="Failure in zone example.com. [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=error msg="InvalidChangeBatch: [Tried to create resource record set [name='example.com.', type='TXT'] but it already exists]\n\tstatus code: 400, request id: f31254aa-a72f-4e80-8008-c6fdda55774e"
```
I believe I am also seeing this issue, or one quite similar. If we have a TXT record created ahead of time (for example, as required by Mailgun when our application needs to send mail: TXT myapp v=spf1 include:mailgun.org ~all), external-dns reports:

```
level=warning msg="Preexisting records exist which should not exist for creation actions." dnsName=myapp.domain.com domain=domain.com recordType=TXT
```

and will continue to create new TXT records containing "heritage" information indefinitely, at every polling loop.
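For reference, the ownership value that external-dns writes into its heritage TXT records looks roughly like this; the owner and resource parts are placeholders that depend on the --txt-owner-id flag and the source object:

```
"heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/default/myapp"
```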
I'm seeing this issue as well.
Environment:
- External-DNS version (use external-dns --version): 0.13.1
- DNS provider: DigitalOcean
I only have one nginx ingress.
```yaml
spec:
  ingressClassName: nginx
  rules:
    - host: api.mydns.com
      http:
        paths:
          - backend:
              service:
                name: myservice
                port:
                  number: 8000
            path: /
            pathType: Prefix
```
The DNS record is created correctly, but on every loop external-dns creates another TXT record and complains that one already exists.
I found this workaround:
https://github.com/kubernetes-sigs/external-dns/issues/449#issuecomment-1211191200
Using a --txt-prefix helps disambiguate and separate these records out; it creates predictability here (at the cost of a few more records).
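For example, setting the flag on the external-dns container might look like the snippet below; the edns- prefix value is arbitrary, and the other args are placeholders for whatever the deployment already uses:

```yaml
args:
  - --source=ingress
  - --provider=digitalocean
  - --txt-prefix=edns-
```

With a prefix, the registry looks for its TXT record under the prefixed name, so unrelated TXT values on the bare name no longer collide with it.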
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.