Non-wildcard DNS record is unmanaged when combined with a wildcard subdomain
What happened: When defining a non-wildcard record together with a wildcard one, both DNS records are created, but only the wildcard record continues to be maintained by external-dns.
e.g. Teleport requires both sub.domain.tld and *.sub.domain.tld to be defined, so both are set on the LoadBalancer Kubernetes service:
external-dns.alpha.kubernetes.io/hostname: "teleport.domain.tld,*.teleport.domain.tld"
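For illustration, a minimal Service manifest carrying this annotation might look roughly like the following (name, namespace, selector, and ports are made-up placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: teleport-cluster      # placeholder name
  namespace: teleport         # placeholder namespace
  annotations:
    # Both the plain and the wildcard hostname, comma-separated
    external-dns.alpha.kubernetes.io/hostname: "teleport.domain.tld,*.teleport.domain.tld"
spec:
  type: LoadBalancer
  selector:
    app: teleport-cluster
  ports:
    - name: web
      port: 443
      targetPort: 3080        # placeholder backend port
```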
external-dns then creates both A records that point to the correct service IP:
*.teleport.domain.tld. 1.1.1.1
teleport.domain.tld. 1.1.1.1
It also creates the following TXT ownership records (note that both are for the wildcard name only; no corresponding TXT records appear to be created for the plain teleport.domain.tld):
txt-*.teleport.domain.tld. "heritage=external-dns,external-dns/owner=project-id,external-dns/resource=service/teleport/teleport-cluster"
txt-a-*.teleport.domain.tld. "heritage=external-dns,external-dns/owner=project-id,external-dns/resource=service/teleport/teleport-cluster"
Now if the LB service IP changes, only the wildcard A record will be updated:
*.teleport.domain.tld. 1.1.1.2
teleport.domain.tld. 1.1.1.1
What you expected to happen: Both records to be kept in sync
How to reproduce it (as minimally and precisely as possible):
- Have a Kubernetes cluster with proper DNS management permissions for external-dns to work
- Create a DNS zone teleport.domain.tld in Google Cloud DNS
- Deploy a LoadBalancer service with the annotation external-dns.alpha.kubernetes.io/hostname: "teleport.domain.tld,*.teleport.domain.tld" (a sketch of these steps follows the list)
- Wait until the DNS records are updated
- Recreate the LB service so the IP changes
- Witness the issue in the DNS records
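As a rough sketch of those steps against Google Cloud DNS (zone name, manifest file name, and project are placeholders, not from the original report):

```sh
# Create the managed zone for the subdomain
gcloud dns managed-zones create teleport-zone \
  --dns-name="teleport.domain.tld." \
  --description="Zone managed by external-dns"

# Deploy the LoadBalancer service carrying the hostname annotation
kubectl apply -f teleport-service.yaml

# Inspect the A and TXT records that external-dns created
gcloud dns record-sets list --zone=teleport-zone

# Force a new load balancer IP by recreating the service
kubectl delete -f teleport-service.yaml
kubectl apply -f teleport-service.yaml

# Re-check: only the wildcard A record follows the new IP
gcloud dns record-sets list --zone=teleport-zone
```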
Anything else we need to know?: Logs (note: the IPs have been changed to match the example). The found: "" in these lines suggests external-dns does not find an ownership TXT record for the plain name:
time="x" level=debug msg="Skipping endpoint teleport.domain.tld 0 IN A 1.1.1.2 [] because owner id does not match, found: \"\", required: \"project-id\""
time="x" level=debug msg="Skipping endpoint teleport.domain.tld 300 IN A 1.1.1.1 [] because owner id does not match, found: \"\", required: \"project-id\""
Environment:
- External-DNS version (use external-dns --version): 0.13.4
- DNS provider: Google DNS
- Others:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I have a similar issue. Is there any solution for this?
I removed the txt-prefix, which helps; now it only complains about the a-foo.bar.com zone not existing.
Now it is looping between deleting and adding records.
Resolved for me by keeping the txt-prefix and creating 2 new zones.
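For context, the txt-prefix mentioned in these comments is external-dns's TXT registry prefix, typically set via container args roughly as below (project, owner ID, and domain filter are illustrative values matching the example above, not a confirmed configuration):

```yaml
# Excerpt of an external-dns Deployment pod spec
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.13.4  # version reported above
    args:
      - --source=service
      - --provider=google
      - --google-project=project-id       # illustrative project
      - --domain-filter=teleport.domain.tld
      - --registry=txt
      - --txt-owner-id=project-id         # must match the owner in the TXT records
      - --txt-prefix=txt-                 # the prefix discussed in the comments above
```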