external-dns
Multiple issues with TXT registry (duplicate records, no template expansion)
What happened:
I am trying to update a single Route 53 hosted zone using the TXT registry; the version is 0.13.2, installed via Helm. The intent is to use annotations only and to update a single hosted zone from several external-dns deployments. CNAME records are also enforced. Here are the flags:
Args:
--log-level=info
--log-format=json
--interval=5m
--events
--source=ingress
--policy=sync
--registry=txt
--txt-owner-id=this-works-just-fine
--txt-suffix=-%{record_type}
--provider=aws
--aws-zones-cache-duration=1h
--zone-id-filter=<redacted>
--aws-zone-type=public
--aws-prefer-cname
--ignore-ingress-rules-spec
--ignore-ingress-tls-spec
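For completeness, this is roughly how those flags are wired through the Helm chart. A minimal values sketch, assuming the chart accepts raw flags via an extraArgs-style list (key names vary between charts, so adjust to your chart's schema):
provider: aws                       # assumed top-level key; some charts nest this differently
extraArgs:                          # assumed raw-flag pass-through
  - --registry=txt
  - --txt-owner-id=this-works-just-fine
  - --txt-suffix=-%{record_type}    # the template that is not being expanded
  - --aws-prefer-cname
  # ...remaining flags exactly as listed above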
- I am getting 2x ownership TXT records created for each CNAME record
- %{record_type} is not expanded to cname
{"level":"info","msg":"Desired change: CREATE cname-pluto-.subdomain.test.com TXT [Id: /hostedzone/<redacted>]","time":"2023-03-28T14:44:34Z"}
{"level":"info","msg":"Desired change: CREATE pluto-.subdomain.test.com TXT [Id: /hostedzone/<redacted>]","time":"2023-03-28T14:44:34Z"}
{"level":"info","msg":"Desired change: CREATE pluto.subdomain.test.com CNAME [Id: /hostedzone/<redacted>]","time":"2023-03-28T14:44:34Z"}
When I do not use the template but a simple string, it still creates 3x records, with the correct suffix.
The same behavior is observed for the --txt-prefix flag.
What you expected to happen:
For each CNAME record I expect to get a single ownership TXT record, suffixed with -cname.
How to reproduce it (as minimally and precisely as possible):
AWS provider with the flags above, plus an ingress with the external-dns.alpha.kubernetes.io/hostname: pluto.subdomain.test.com annotation (I have used my real domain).
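For example, a minimal ingress sketch; only the annotation is significant here, and the name, class, and backend are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pluto                        # placeholder
  annotations:
    external-dns.alpha.kubernetes.io/hostname: pluto.subdomain.test.com
spec:
  ingressClassName: nginx            # placeholder; rules/TLS are ignored via the flags above anyway
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pluto          # placeholder backend service
                port:
                  number: 80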
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version):
Yes, another bug:
~ $ external-dns --version
~ $
- DNS provider: Route 53
- Others:
@ilja-ag, I think two TXT records are expected, as per this documentation.
It seems external-dns is going to switch to the <record_type>-<endpoint_name> format for TXT records.
Hence it is creating two records of TXT type: one with the given --txt-suffix and another with <record_type>-<endpoint_name>.
The documentation also claims that the controller will auto-clean the old TXT record.
OK, I see the intent here. However, when I use CNAME records, a CNAME and a TXT record cannot share the same name, and the default behavior is not to create either cname-NAME or NAME-cname (which is what I would ideally like, to avoid double TXT records) but instead requires one to use either a prefix or a suffix. Combined with the new registry implementation AND CNAME records, one ends up with too many, too long records, i.e.:
Would like to have (for sorting reasons):
- mars.external-dns.com (CNAME)
- mars-cname.external-dns.com (TXT)
Currently I am forced to specify either a prefix (say, cname):
- mars.external-dns.com (CNAME)
- cname-mars.external-dns.com (TXT)
- cname-cname-mars.external-dns.com (TXT)
Or a suffix equivalent:
- mars.external-dns.com (CNAME)
- mars-cname.external-dns.com (TXT)
- cname-mars-cname.external-dns.com (TXT)
It is certainly not critical, and decent results can currently be achieved by configuring a short suffix, but for this particular use case it:
- is not flexible (forced on the end user)
- produces sub-optimal names (pretty ugly generated names)
- unnecessarily increases the record count (wasteful)
Would it be possible to enable compatibility of the new registry format with the CNAME record type in the absence of any prefix/suffix specification? Ideally, let the end user choose either prefix or suffix placement for the new registry (those pesky sorting reasons, again), say useNewRegistryOnly=true and newRegistryPosition=prefix/suffix, or similar.
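Purely as an illustration of the proposal (none of these options exist today; the names are only the ones suggested above):
txt:                                 # hypothetical values block, not part of any current chart
  useNewRegistryOnly: true           # create only the <record_type>-<endpoint_name> ownership TXT record
  newRegistryPosition: suffix        # or prefix: where the record_type token is placed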
It would also help a lot if the binary's --help output were corrected, both for --version and for the TXT-registry-related flags, and if all references to the %{record_type} template were removed.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.