external-dns
AAAA records are being deleted and added on every run
What happened:
external-dns constantly deletes and re-adds the AAAA records of Ingress objects even though nothing has changed. Only one instance of external-dns is running on GKE and managing the DNS records. The services run behind the ingress-nginx LoadBalancer Service, which is in dual-stack (IPv4 and IPv6) mode.
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:33Z" level=debug msg="Endpoints generated from ingress: default/svcname: [example.org 0 IN A 1.1.1.1 [] example.org 0 IN AAAA 2600:1900::0 [] example.org 0 IN A 1.1.1.1 [] example.org 0 IN AAAA 2600:1900::0 []]"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:33Z" level=debug msg="Removing duplicate endpoint example.org 0 IN A 1.1.1.1 []"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:33Z" level=debug msg="Removing duplicate endpoint example.org 0 IN AAAA 2600:1900::0 []"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:34Z" level=info msg="Del records: aaaa-example.org. TXT [\"heritage=external-dns,external-dns/owner=gke-production,external-dns/resource=ingress/default/svcname\"] 300"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:34Z" level=info msg="Del records: example.org. AAAA [2600:1900::0] 300"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:34Z" level=info msg="Add records: aaaa-example.org. TXT [\"heritage=external-dns,external-dns/owner=gke-production,external-dns/resource=ingress/default/svcname\"] 300"
external-dns-cd777d94d-p6v66 external-dns time="2023-09-22T11:38:34Z" level=info msg="Add records: example.org. AAAA [2600:1900::0] 300"
What you expected to happen:
No changes are made to the DNS records.
How to reproduce it (as minimally and precisely as possible):
- Run ingress-nginx with a dual-stack LoadBalancer Service.
- Run external-dns against Google Cloud DNS.
- Add an Ingress object (a minimal sketch of this setup follows the list).
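For reference, a minimal sketch of the setup described above. All names, namespaces, and hostnames are placeholders, not taken from the actual cluster:

```yaml
# Hypothetical dual-stack ingress-nginx controller Service (names are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack   # exposes both an IPv4 and an IPv6 address
  ipFamilies: [IPv4, IPv6]
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
---
# Hypothetical Ingress whose hostname external-dns should publish to Cloud DNS.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svcname
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svcname
                port:
                  number: 80
```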
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): registry.k8s.io/external-dns/external-dns:v0.13.6, Helm chart 1.13.1
- DNS provider: Google Cloud DNS
- Others: debug logging shows nothing useful beyond the duplicate-endpoint messages above. The issue also exists in older versions (tested up to 1.12.0).
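For completeness, a hedged sketch of Helm values for a setup like the one reported (external-dns chart 1.13.x). The domain filter and source list are assumptions for illustration, not copied from the report; only the image tag, provider, and owner ID come from the issue text and logs above:

```yaml
# Assumed values.yaml for the external-dns Helm chart (illustrative only).
image:
  tag: v0.13.6
provider: google             # Google Cloud DNS
sources:
  - ingress
domainFilters:
  - example.org              # placeholder zone
txtOwnerId: gke-production   # matches the owner ID visible in the TXT registry records above
logLevel: debug
```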
Same behavior here.
I had this issue with a LoadBalancer Service, but discovered it was because I actually had two Services that both had the external-dns.alpha.kubernetes.io/hostname annotation set to the same value and shared the same external IP (a carry-over from pre-1.26, when you couldn't mix UDP and TCP ports on the same Service).
I consolidated onto a single Service and the problem went away; a sketch of that consolidated setup follows below.
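As an illustration of that workaround, a hedged sketch of a single Service carrying both TCP and UDP ports under one external-dns hostname annotation (all names, ports, and hostnames are placeholders). On Kubernetes 1.26+ mixed-protocol LoadBalancer Services are supported, so the two annotated Services can be collapsed into one:

```yaml
# Hypothetical consolidated Service: one LoadBalancer carrying both TCP and UDP,
# so only one object carries the external-dns hostname annotation.
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  namespace: default
  annotations:
    external-dns.alpha.kubernetes.io/hostname: gw.example.org
spec:
  type: LoadBalancer
  selector:
    app: my-gateway
  ports:
    - name: tcp-app
      protocol: TCP
      port: 443
      targetPort: 443
    - name: udp-app
      protocol: UDP
      port: 443
      targetPort: 443
```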
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.