external-dns
v0.13.5 fails to ignore unowned wildcard record
v0.13.4 works as expected; v0.13.5 fails. It's difficult to tell from the logs whether the '*' in the domain is the culprit or whether it's a change in the label processing.
What happened: the process tried to recreate records that already existed
What you expected to happen: the process should not recreate records that it does not own
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
v0.13.4 logs:
time="2023-06-13T23:54:46Z" level=debug msg="Refreshing zones list cache"
time="2023-06-13T23:54:46Z" level=debug msg="Considering zone: /hostedzone/Z0553059GUYPUCCIA2TW (domain: mis.lab.example.com.)"
time="2023-06-13T23:54:46Z" level=info msg="Applying provider record filter for domains: [mis.lab.example.com. .mis.lab.example.com.]"
time="2023-06-13T23:54:46Z" level=debug msg="Skipping endpoint *.mis.lab.example.com 0 IN CNAME internal-k8s-internalunifiedmi-8433005b08-1538250024.us-east-1.elb.amazonaws.com [{alias true} {aws/evaluate-target-health true}] because owner id does not match, found: \"terraform\", required: \"use-feature\""
time="2023-06-13T23:54:46Z" level=debug msg="Skipping endpoint *.mis.lab.example.com 30 IN A 127.0.53.53 [] because owner id does not match, found: \"terraform\", required: \"use-feature\""
time="2023-06-13T23:54:46Z" level=debug msg="Refreshing zones list cache"
time="2023-06-13T23:54:46Z" level=debug msg="Considering zone: /hostedzone/Z0553059GUYPUCCIA2TW (domain: mis.lab.example.com.)"
time="2023-06-13T23:54:46Z" level=info msg="All records are already up to date"
v0.13.5 logs:
time="2023-06-13T23:56:05Z" level=debug msg="Refreshing zones list cache"
time="2023-06-13T23:56:05Z" level=debug msg="Considering zone: /hostedzone/Z0553059GUYPUCCIA2TW (domain: mis.lab.example.com.)"
time="2023-06-13T23:56:05Z" level=info msg="Applying provider record filter for domains: [mis.lab.example.com. .mis.lab.example.com.]"
time="2023-06-13T23:56:05Z" level=debug msg="Refreshing zones list cache"
time="2023-06-13T23:56:05Z" level=debug msg="Considering zone: /hostedzone/Z0553059GUYPUCCIA2TW (domain: mis.lab.example.com.)"
time="2023-06-13T23:56:05Z" level=debug msg="Adding *.mis.lab.example.com. to zone mis.lab.example.com. [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=debug msg="Adding *.mis.lab.example.com. to zone mis.lab.example.com. [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=debug msg="Adding cname-*.mis.lab.example.com. to zone mis.lab.example.com. [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=info msg="Desired change: CREATE *.mis.lab.example.com A [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=info msg="Desired change: CREATE *.mis.lab.example.com TXT [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=info msg="Desired change: CREATE cname-*.mis.lab.example.com TXT [Id: /hostedzone/Z0553059GUYPUCCIA2TW]"
time="2023-06-13T23:56:05Z" level=error msg="Failure in zone mis.lab.example.com. [Id: /hostedzone/Z0553059GUYPUCCIA2TW] when submitting change batch: InvalidChangeBatch: [Tried to create resource record set [name='\\052.mis.lab.example.com.', type='A'] but it already exists, Tried to create resource record set [name='\\052.mis.lab.example.com.', type='TXT'] but it already exists, Tried to create resource record set [name='cname-\\052.mis.lab.example.com.', type='TXT'] but it already exists]\n\tstatus code: 400, request id: ec19f6e5-0663-450b-a7a6-530a5de63c1c"
time="2023-06-13T23:56:06Z" level=fatal msg="failed to submit all changes for the following zones: [/hostedzone/Z0553059GUYPUCCIA2TW]"
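In case it helps to compare ownership on the pre-existing records: the TXT registry entries in that zone can be listed directly with the AWS CLI. This is just a sketch for inspection; the zone ID is taken from the logs above, and the exact ownership value format may vary with the registry configuration, but per the v0.13.4 log it should name "terraform" as the owner.
# List the TXT registry records in the affected zone (zone ID from the logs above).
# The ownership value is expected to look roughly like
# "heritage=external-dns,external-dns/owner=terraform" (assumption, not confirmed here).
aws route53 list-resource-record-sets \
  --hosted-zone-id Z0553059GUYPUCCIA2TW \
  --query "ResourceRecordSets[?Type=='TXT']"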
Environment:
- External-DNS version (use external-dns --version): v0.13.5
- DNS provider: aws
- Others:
You need to use the --txt-wildcard-replacement flag with that provider.
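For reference, a rough sketch of how that could look on the external-dns command line. The flag names are the documented external-dns options; the replacement string "any" and the --source values are placeholder assumptions, and --txt-owner-id is set to the owner id required in the logs above.
# Sketch only; --txt-wildcard-replacement is the relevant addition,
# the other flags just mirror a typical AWS setup (assumed, not from the report).
external-dns \
  --provider=aws \
  --registry=txt \
  --txt-owner-id=use-feature \
  --txt-wildcard-replacement=any \
  --source=service \
  --source=ingress
As I understand it, the flag substitutes the given string for the '*' label when naming the ownership TXT records, so wildcard endpoints no longer produce wildcard-named TXT records.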
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.