External-DNS reads records in Cloudflare that it does not manage and did not create
What happened:
Updated from version 0.16.1 to 0.18.0.
There are CNAME records in Cloudflare that contain underscores, so we are seeing the warning. We are aware of the reported issue https://github.com/kubernetes-sigs/external-dns/issues/5581 and that a fix will land in an upcoming release. The real issue, however, is that version 0.18.0 reports this warning for CNAME records that it does not manage and that have no TXT record. These CNAME records were manually created in Cloudflare.
time="2025-08-06T20:10:27Z" level=warning msg="Got error while parsing domain s2._domainkey.XXXXX: idna: disallowed rune U+005F"
What you expected to happen:
No log warnings for these CNAME records, which are NOT managed by External-DNS.
How to reproduce it (as minimally and precisely as possible):
Manually create a CNAME record with an underscore in its name in a Cloudflare zone. This record is not managed by external-dns, i.e. it was not created by External-DNS.
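For anyone decoding the log message: the "disallowed rune U+005F" is simply the underscore character. A minimal sketch to confirm that (the record name below is a placeholder, not the real one):

```python
# U+005F is the code point of "_", the character the IDNA parser rejects.
name = "s2._domainkey.example.com"  # placeholder for the actual record name
disallowed = sorted({f"U+{ord(c):04X}" for c in name if c == "_"})
print(disallowed)  # ['U+005F']
```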
CLI arguments:
--source=ingress --domain-filter=something.com --provider=cloudflare --policy=sync --registry=txt --txt-owner-id=external-dns
Anything else we need to know?:
Environment:
External-DNS version (use external-dns --version): 0.18.0
DNS provider: Cloudflare
Others: Kubernetes version 1.31.10
ExternalDNS needs to read and filter all records in the zone to know which ones it manages; there is no other way. The warning you're seeing is a known issue that will be fixed in the next release.
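As a rough sketch of that flow (the record names and the ownership set below are hypothetical; the real logic lives in external-dns's TXT registry code):

```python
# Hypothetical sketch of registry-based filtering: external-dns first lists
# every record in the zone, then keeps only those whose name has a matching
# TXT ownership record for its owner-id. Names are made up for illustration.
zone_records = [
    {"name": "app.something.com", "type": "CNAME"},
    {"name": "s2._domainkey.something.com", "type": "CNAME"},  # created manually
]
# Names covered by TXT registry records with owner-id "external-dns" (assumed).
owned_names = {"app.something.com"}

managed = [r for r in zone_records if r["name"] in owned_names]
print([r["name"] for r in managed])  # ['app.something.com']
```

The ownership filter runs only after the provider has parsed every record name it fetched, which is why the IDNA warning can fire even for records external-dns will never touch.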
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten