external-dns crashes when presented with a wildcard host using pihole
What happened: The external-dns pod has reached CrashLoopBackOff.
The error logs indicate a recently deployed Ingress is the source:
time="2024-07-18T09:52:33Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2024-07-18T09:52:33Z" level=info msg="add *.minio.xxx.yyy IN CNAME -> k8s.xxx.yyy"
time="2024-07-18T09:52:33Z" level=fatal msg="Failed to do run once: Domain '*.minio.xxx.yyy' is not valid"
Using external-dns:v0.14.2
This cluster has two external-dns deployments, one targeting cloudflare and one targeting pihole. Records are persisted in cloudflare without issue, so this appears to be a pihole-specific problem.
What you expected to happen: external-dns should not attempt actions the provider does not support, and it should not reach CrashLoopBackOff in this case.
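In other words, an unsupported record should be skipped with a warning rather than turned into a fatal error. Below is a hypothetical, standalone sketch of that skip-and-warn behavior; the endpoint type and filter function are illustrative only, not external-dns's API:

// Hypothetical sketch: drop records a provider cannot handle and warn,
// instead of aborting the whole reconcile loop.
package main

import (
	"log"
	"strings"
)

// endpoint is a stand-in for a desired DNS record (illustrative only).
type endpoint struct {
	DNSName    string
	RecordType string
	Target     string
}

// filterSupported keeps only the endpoints this hypothetical provider can
// create, logging a warning for anything it has to skip.
func filterSupported(eps []endpoint) []endpoint {
	supported := make([]endpoint, 0, len(eps))
	for _, ep := range eps {
		if strings.HasPrefix(ep.DNSName, "*.") {
			log.Printf("skipping %s %s: wildcard records are not supported by this provider", ep.RecordType, ep.DNSName)
			continue
		}
		supported = append(supported, ep)
	}
	return supported
}

func main() {
	eps := []endpoint{
		{DNSName: "*.minio.xxx.yyy", RecordType: "CNAME", Target: "k8s.xxx.yyy"},
		{DNSName: "app.xxx.yyy", RecordType: "CNAME", Target: "k8s.xxx.yyy"},
	}
	log.Printf("keeping %d of %d endpoints", len(filterSupported(eps)), len(eps))
}

Running this keeps one of the two endpoints and logs a warning for the wildcard, which is roughly the behavior expected here instead of a crash loop.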
How to reproduce it (as minimally and precisely as possible): Run external-dns with a pihole configuration, e.g.:
Args:
--source=ingress
--provider=pihole
--registry=noop
--pihole-server=http://pihole.server
--policy=upsert-only
Environment:
EXTERNAL_DNS_PIHOLE_PASSWORD: password
Then apply an Ingress with a wildcard host, e.g.:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard
  annotations:
    external-dns.alpha.kubernetes.io/target: "k8s.xxx.yyy"
spec:
  ingressClassName: nginx
  rules:
  - host: "*.minio.xxx.yyy"
    http:
      paths:
      - backend:
          service:
            name: minio
            port:
              number: 9090
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - "*.minio.xxx.yyy"
    secretName: minio.xxx.yyy-tls
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): v0.14.2
- DNS provider: pihole
- Others:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Fixed with #4904
/close
@mloiseleur: Closing this issue.
In response to this:
Fixed with #4904
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.