Cloudflare: external-dns takes over existing DNS record
What happened:
external-dns takes over an existing DNS record by creating a related TXT ownership record, and then updates the record even though it already existed and is managed outside of external-dns.
What you expected to happen:
external-dns ignores the existing DNS record.
How to reproduce it (as minimally and precisely as possible):
I can't reproduce it by creating a new record, but as you can see from the Pod logs, it happened. The related Ingress object and DNS record were created on Oct 20, and external-dns didn't apply any changes to the DNS record until Nov 1.
Anything else we need to know?:
time="2021-11-01T00:02:26Z" level=info msg="Changing record." action=CREATE record=domain.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-01T00:02:27Z" level=error msg="failed to create record: error from makeRequest: HTTP status 400: content \"{\\\"result\\\":null,\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":81053,\\\"message\\\":\\\"An A, AAAA, or CNAME record with that host already exists.\\\"}],\\\"messages\\\":[]}\"" action=CREATE record=domain.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-01T00:02:27Z" level=info msg="Changing record." action=CREATE record=domain.example.com ttl=1 type=TXT zone=reducted_zone
time="2021-11-01T00:05:45Z" level=info msg="Changing record." action=UPDATE record=domain.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-01T00:05:47Z" level=info msg="Changing record." action=UPDATE record=domain.example.com ttl=1 type=TXT zone=reducted_zone
Environment:
- External-DNS version (use external-dns --version): 0.7.6-debian-10-r25
- DNS provider: Cloudflare
- Others: Bitnami Helm Chart 4.8.4
Our values.yaml used for the Helm Chart:
sources:
  - service
  - ingress
provider: cloudflare
cloudflare:
  secretName: reducted
  email: "reducted"
  proxied: false
domainFilters:
  - reducted.
  - dev.reducted.
interval: "5m"
logLevel: info
policy: upsert-only
registry: "txt"
txtOwnerId: "reducted"
replicas: 1
resources:
  limits:
    cpu: 150m
    memory: 150Mi
  requests:
    memory: 50Mi
    cpu: 10m
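For context on the values above: with registry: "txt" and policy: upsert-only, the expectation is that external-dns never deletes records and only updates records whose ownership TXT record names this instance's txtOwnerId. A rough sketch of that expectation follows; it is illustrative only and not the actual registry code, and the TXT value format is the documented external-dns ownership format.

package main

import (
	"fmt"
	"strings"
)

// ownedBy reports whether a TXT ownership value claims the given owner ID.
func ownedBy(txtValue, ownerID string) bool {
	return strings.Contains(txtValue, "external-dns/owner="+ownerID)
}

func main() {
	// A record owned by a different instance (or by nobody) should be
	// left untouched under policy: upsert-only with the TXT registry.
	existing := "heritage=external-dns,external-dns/owner=some-other-owner"
	fmt.Println(ownedBy(existing, "reducted")) // false: leave the record alone
}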
It happened again with another DNS record.
time="2021-11-02T20:02:39Z" level=info msg="Changing record." action=CREATE record=domain2.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-02T20:02:40Z" level=error msg="failed to create record: error from makeRequest: HTTP status 400: content \"{\\\"result\\\":null,\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":81053,\\\"message\\\":\\\"An A, AAAA, or CNAME record with that host already exists.\\\"}],\\\"messages\\\":[]}\"" action=CREATE record=domain2.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-02T20:02:40Z" level=info msg="Changing record." action=CREATE record=domain2.example.com ttl=1 type=TXT zone=reducted_zone
time="2021-11-02T20:05:11Z" level=info msg="Changing record." action=UPDATE record=domain2.example.com ttl=1 type=CNAME zone=reducted_zone
time="2021-11-02T20:05:12Z" level=info msg="Changing record." action=UPDATE record=domain2.example.com ttl=1 type=TXT zone=reducted_zone
Maybe external-dns should not create the TXT ownership record if the preceding A/AAAA/CNAME record creation fails?
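In other words, the create of the endpoint record and the create of its ownership TXT record could be ordered so that the TXT record is only written when the endpoint create succeeded. A rough sketch of that idea follows; the names and types are illustrative and do not come from the external-dns code base.

package main

import (
	"errors"
	"fmt"
)

// change pairs an endpoint record (A/AAAA/CNAME) with its TXT ownership record.
type change struct {
	Name       string
	RecordType string // "A", "AAAA" or "CNAME"
	Target     string
	OwnerTXT   string // content of the TXT ownership record
}

// createRecord stands in for the provider API call; here it always reports
// that the record already exists, mimicking Cloudflare error 81053.
func createRecord(name, recordType, content string) error {
	return errors.New("81053: An A, AAAA, or CNAME record with that host already exists")
}

// applyCreate creates the endpoint record first and only creates the TXT
// ownership record if that succeeded, so ownership is never claimed for a
// record that external-dns failed to create.
func applyCreate(c change) error {
	if err := createRecord(c.Name, c.RecordType, c.Target); err != nil {
		return fmt.Errorf("skipping TXT ownership record for %s: %w", c.Name, err)
	}
	return createRecord(c.Name, "TXT", c.OwnerTXT)
}

func main() {
	c := change{
		Name:       "domain.example.com",
		RecordType: "CNAME",
		Target:     "lb.example.net",
		OwnerTXT:   "heritage=external-dns,external-dns/owner=reducted",
	}
	if err := applyCreate(c); err != nil {
		fmt.Println(err)
	}
}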
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Any updates on this fatal bug with Cloudflare? It is still present in version 0.10.2.
Same here, though the logs are a bit different.
{"action":"CREATE","level":"error","msg":"failed to create record: error from makeRequest: HTTP status 400: content \"{\\\"result\\\":null,\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":81053,\\\"message\\\":\\\"An A, AAAA, or CNAME record with that host already exists.\\\"}],\\\"messages\\\":[]}\"","record":
{"action":"CREATE","level":"info","msg":"Changing record.","record":"
{"action":"DELETE","level":"info","msg":"Changing record.","record":"
{"action":"CREATE","level":"error","msg":"failed to create record: error from makeRequest: HTTP status 400: content \"{\\\"result\\\":null,\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":81053,\\\"message\\\":\\\"An A, AAAA, or CNAME record with that host already exists.\\\"}],\\\"messages\\\":[]}\"","record":"
{"action":"CREATE","level":"info","msg":"Changing record.","record":"
{"action":"CREATE","level":"info","msg":"Changing record.","record":"
So, in my case, it also shows a DELETE, and the record it changed was an A record that was changed to a CNAME.
:+1:
Any update?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have opened a new issue, as it seems this is still the case in external-dns version v0.13.5: https://github.com/kubernetes-sigs/external-dns/issues/3706