external-dns
AWS Route 53 entries with wildcard "*" are created by external-dns the first time, but every time after that it says "Failed submitting change"
What happened:
Using external-dns chart version 6.28.4, app version 0.14.0.
We are creating AWS Route 53 entries that begin with a wildcard like *.example.whatever.com
external-dns succeeds the first time. It successfully creates an A record, a TXT record, and another TXT record like cname-*.example.whatever.com.
However the next time external-dns runs, a minute later, it prints an error message about that same record:
level=error msg="Failed submitting change (error: InvalidChangeBatch: [Tried to create resource record set [name='\\052.example.whatever.com.', type='A', set-identifier='example'] but it already exists, Tried to create resource record set [name='\\052.example.whatever.com.', type='TXT', set-identifier='example'] but it already exists, Tried to create resource record set [name='cname-\\052.example.whatever.com.', type='TXT', set-identifier='example'] but it already exists]\n\tstatus code: 400, request id: 03672723-3c4c-4b4e-acbc-c4074f620dc4), it will be retried in a separate change batch in the next iteration"
So external-dns successfully makes the records once, but then complains about those same records every run after that.
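The `\052` in the error message is a clue: Route 53 returns record names with non-alphanumeric characters octal-escaped, so `*` comes back as `\052`. A plausible explanation (a sketch of the failure mode, not external-dns's actual code; the helper name is hypothetical) is that a controller comparing its desired name against the escaped name from the API never sees a match, concludes the record is missing, and retries the CREATE every cycle:

```python
import re

def unescape_route53_name(name: str) -> str:
    """Decode Route 53 octal escapes like \\052 back to their characters.

    Route 53's API returns "*" in record names as the escape "\\052"
    (octal for ASCII 42). Hypothetical helper for illustration.
    """
    return re.sub(r"\\(\d{3})", lambda m: chr(int(m.group(1), 8)), name)

desired = "*.example.whatever.com."
returned = r"\052.example.whatever.com."

# A naive string comparison misses the existing record ...
assert desired != returned
# ... but matches once the escaped form is normalized.
assert unescape_route53_name(returned) == desired
```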
What you expected to happen:
No error messages.
How to reproduce it (as minimally and precisely as possible):
Helm chart ingress:
external-dns.alpha.kubernetes.io/aws-weight=100
external-dns.alpha.kubernetes.io/hostname=\*.example.whatever.com, \*.example2.whatever.com
external-dns.alpha.kubernetes.io/set-identifier=example
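For context, the annotations above would sit on an Ingress roughly like the following. This is a minimal sketch assembled from the reported annotations; the resource name, host rule, and backend are placeholders, not taken from the report:

```yaml
# Hypothetical minimal Ingress carrying the reported annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-example        # placeholder
  annotations:
    external-dns.alpha.kubernetes.io/aws-weight: "100"
    external-dns.alpha.kubernetes.io/hostname: "*.example.whatever.com, *.example2.whatever.com"
    external-dns.alpha.kubernetes.io/set-identifier: example
spec:
  rules:
    - host: "*.example.whatever.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # placeholder
                port:
                  number: 80
```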
Anything else we need to know?:
Environment:
- External-DNS version (use `external-dns --version`): 0.14.0
- DNS provider: AWS Route 53
- Others:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I'm experiencing this issue even with the escaped wildcard format. I tried both of these annotation sets:

external-dns.alpha.kubernetes.io/hostname: \*.foo.com
external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only

external-dns.alpha.kubernetes.io/hostname: \052.foo.com
external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only
Both give the same error:
time="2024-04-19T02:46:32Z" level=info msg="Desired change: CREATE cname-\\052.foo.com TXT [Id: /hostedzone/Z118**********]"
time="2024-04-19T02:46:32Z" level=error msg="Failure in zone foo.com. [Id: /hostedzone/Z118**********] when submitting change batch: InvalidChangeBatch: [Tried to create resource record set [name='\\052.foo.com.', type='A'] but it already exists, Tried to create resource record set [name='\\052.foo.com.', type='TXT'] but it already exists, Tried to create resource record set [name='cname-\\052.foo.com.', type='TXT'] but it already exists]\n\tstatus code: 400, request id: 00d6796d-659b-4dbc-b38d-1bb7316efebd"
time="2024-04-19T02:46:33Z" level=error msg="Failed to do run once: soft error\nfailed to submit all changes for the following zones: [/hostedzone/Z118**********]"
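That both spellings fail the same way is consistent with Route 53's documented canonicalization: wildcard names are stored in octal-escaped form (`*` becomes `\052`), and the API also accepts the pre-escaped spelling as input, so both annotations target the same stored record set. The following is an illustrative sketch of that canonicalization (hypothetical helper name, not Route 53's or external-dns's code):

```python
def escape_route53_name(name: str) -> str:
    """Escape a DNS name the way Route 53 canonicalizes record names:
    characters outside [A-Za-z0-9-._] are replaced with three-digit
    octal escapes, e.g. "*" (ASCII 42) becomes "\\052".
    Hypothetical helper for illustration.
    """
    out = []
    for ch in name:
        if ch.isalnum() or ch in "-._":
            out.append(ch)
        else:
            out.append("\\%03o" % ord(ch))
    return "".join(out)

# The wildcard name is stored exactly as it appears in the logs above:
assert escape_route53_name("*.foo.com.") == r"\052.foo.com."
assert escape_route53_name("cname-*.foo.com.") == r"cname-\052.foo.com."
```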
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.