External DNS Continuously Removes and Adds DNS Entries at Each Interval
What happened: At each sync interval, External DNS removes and re-adds the same DNS entries, even though nothing has changed:
time="2024-07-05T07:41:58Z" level=info msg="Removing RR: my-app.srv01.foobar.example.com 300 CNAME prod.srv01.foobar.example.com"
time="2024-07-05T07:41:58Z" level=info msg="Adding RR: my-app.srv01.foobar.example.com 300 CNAME prod.srv01.foobar.example.com"
time="2024-07-05T07:41:58Z" level=info msg="Removing RR: my-app.srv01.foobar.example.com 0 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:41:58Z" level=info msg="Adding RR: my-app.srv01.foobar.example.com 300 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:41:58Z" level=info msg="Removing RR: cname-my-app.srv01.foobar.example.com 0 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:41:58Z" level=info msg="Adding RR: cname-my-app.srv01.foobar.example.com 300 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:42:59Z" level=info msg="Removing RR: my-app.srv01.foobar.example.com 300 CNAME prod.srv01.foobar.example.com"
time="2024-07-05T07:42:59Z" level=info msg="Adding RR: my-app.srv01.foobar.example.com 300 CNAME prod.srv01.foobar.example.com"
time="2024-07-05T07:42:59Z" level=info msg="Removing RR: my-app.srv01.foobar.example.com 0 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:42:59Z" level=info msg="Adding RR: my-app.srv01.foobar.example.com 300 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:42:59Z" level=info msg="Removing RR: cname-my-app.srv01.foobar.example.com 0 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
time="2024-07-05T07:42:59Z" level=info msg="Adding RR: cname-my-app.srv01.foobar.example.com 300 TXT \"heritage=external-dns,external-dns/owner=eda-services01,external-dns/resource=ingress/my-app/my-app-ingress\""
What you expected to happen: External DNS should report that everything is up-to-date and not make any changes.
How to reproduce it (as minimally and precisely as possible):
- Create a LoadBalancer Service with the annotation external-dns.alpha.kubernetes.io/hostname: prod.srv01.foobar.example.com
- Create an Ingress with the annotation external-dns.alpha.kubernetes.io/target: prod.srv01.foobar.example.com (a sketch of both objects follows this list)
- Observe the External DNS logs to see the reported behavior.
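For reference, a minimal sketch of the two objects described in the steps above. The annotation values and the ingress name/namespace are taken from the report and its logs (resource=ingress/my-app/my-app-ingress); all other names and ports are illustrative assumptions:

```yaml
# Minimal reproduction sketch; names and ports other than the
# annotations and the ingress name/namespace are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: prod
  namespace: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: prod.srv01.foobar.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app
  annotations:
    external-dns.alpha.kubernetes.io/target: prod.srv01.foobar.example.com
spec:
  rules:
    - host: my-app.srv01.foobar.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```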
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): v0.14.2
- DNS provider: RFC-2136
- Others: Installed on Kubernetes v1.27.13+rke2r1 via Bitnami Helm chart v8.0.1, using the following values.yaml:
domainFilters: [ "srv01.foobar.example.com" ]
policy: sync
provider: rfc2136
txtOwnerId: CLUSTER_NAME
rfc2136:
  host: dns.example.com
  port: 53
  zone: foobar.example.com
  minTTL: 300s
  tsig: # TSIG values omitted for security reasons
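A hedged sketch of how such a values file is typically applied with the Bitnami chart (release name and namespace are assumptions, not from the report):

```shell
# Assumes the Bitnami chart repository; release name and namespace are illustrative.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install external-dns bitnami/external-dns \
  --version 8.0.1 \
  --namespace external-dns --create-namespace \
  -f values.yaml
```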
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.