external-dns
Traefik IngressRoutes do not create DNS records without annotation
What happened:
After setting up external-dns with the `traefik-proxy` source, I found that it did not create DNS records for `IngressRoute` resources. It would only create a record when the `external-dns.alpha.kubernetes.io/target` annotation was present, and even that failed if the target was another domain.
```yaml
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: hops
  annotations:
    external-dns.alpha.kubernetes.io/target: traefik.example.com # nothing happens if this is missing
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  entryPoints:
    - foo
  routes:
    - kind: Rule
      match: Host(`app.example.com`)
      services:
        - kind: Service
          passHostHeader: true
          scheme: https
          name: hops
          port: 9000
  tls:
    domains:
      - main: app.example.com
    secretName: app-tls
```
When `external-dns.alpha.kubernetes.io/target: traefik.example.com` is set, the following error appears in external-dns's log:
{"level":"info","msg":"Add records: cname-app.example.com. TXT [\"heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingressroute/default/app\"] 300","time":"2023-09-30T16:51:33Z"}
{"level":"info","msg":"Add records: app.example.com. CNAME [traefik.example.com.] 300","time":"2023-09-30T16:51:33Z"}
{"level":"info","msg":"Add records: app.example.com. TXT [\"heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingressroute/default/app\"] 300","time":"2023-09-30T16:51:33Z"}
{"level":"fatal","msg":"googleapi: Error 400: The resource record set 'entity.change.additions[app.example.com.][CNAME]' is invalid because the DNS name 'app.example.com.' has a resource record set of the type 'TXT'. A DNS name may have either one CNAME resource record set or resource record sets of other types, but not both.\nMore details:\nReason: cnameResourceRecordSetConflict, Message: The resource record set 'entity.change.additions[app.example.com.][CNAME]' is invalid because the DNS name 'app.example.com.' has a resource record set of the type 'TXT'. A DNS name may have either one CNAME resource record set or resource record sets of other types, but not both.\nReason: cnameResourceRecordSetConflict, Message: The resource record set 'entity.change.additions[app.example.com.][TXT]' is invalid because the DNS name 'app.example.com.' has a resource record set of the type 'TXT'. A DNS name may have either one CNAME resource record set or resource record sets of other types, but not both.\n","time":"2023-09-30T16:51:33Z"}
What you expected to happen:
The record for `app.example.com` would have been created and would resolve correctly.
How to reproduce it (as minimally and precisely as possible):
Other than Traefik and external-dns being set up, the IngressRoute above is all you need; a sketch of the matching external-dns flags follows.
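For reference, a minimal sketch of an external-dns container matching this setup. The image tag and owner ID are illustrative, and `--provider=google` is an assumption based on the `googleapi` errors above; `--source=traefik-proxy` is the flag that enables the IngressRoute source:

```yaml
# Sketch of an external-dns container for this report; values are illustrative.
- name: external-dns
  image: registry.k8s.io/external-dns/external-dns:v0.13.6
  args:
    - --source=traefik-proxy      # watch Traefik IngressRoute resources
    - --provider=google           # assumed from the googleapi errors above
    - --registry=txt
    - --txt-owner-id=external-dns # hypothetical owner ID
```

Note that the external-dns ServiceAccount also needs RBAC permission to read the `traefik.io` (or legacy `traefik.containo.us`) CRDs for this source to work.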
Anything else we need to know?:
Environment:
- External-DNS version (use `external-dns --version`): v0.13.6
- DNS provider: Google Cloud DNS (per the `googleapi` errors above)
- Others: Traefik v2.10.4
The main issue here is not that the DNS record is not created without the annotation; it is that even with all the annotations described in the docs guide, external-dns does not create the DNS records themselves (in my case a CNAME in Cloudflare), only the TXT records, so I reverted to the DNSEndpoint object (sketched below).
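For context, the DNSEndpoint workaround mentioned above looks roughly like this; the name, target, and TTL are illustrative, and it requires external-dns to run with `--source=crd` and the DNSEndpoint CRD installed:

```yaml
# Hypothetical DNSEndpoint equivalent of the record the IngressRoute should produce.
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: app
spec:
  endpoints:
    - dnsName: app.example.com
      recordType: CNAME
      recordTTL: 300
      targets:
        - traefik.example.com
```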
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
These issues may be related: https://github.com/kubernetes-sigs/external-dns/issues/740 and https://github.com/kubernetes-sigs/external-dns/issues/1416.
It looks like external-dns should be configured with `--txt-prefix`, similar to this:
```yaml
- name: external-dns
  image: "asia.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.7.3"
  args:
    - --log-level=info
    - --log-format=text
    - --policy=upsert-only
    - --provider=$(PROVIDER)
    - --txt-owner-id=$(TXT_OWNER_ID)
    - --txt-prefix=external-dns.
    - --registry=txt
    - --interval=1m
    - --source=service
    - --source=ingress
    - --aws-batch-change-size=1000
```
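With `--txt-prefix=external-dns.`, the ownership TXT record is written at `external-dns.app.example.com` rather than at `app.example.com` itself, so it no longer collides with the CNAME created for the host.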
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@james-callahan: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.