Node source ignores multiple FQDN templates
What happened:
Added multiple FQDN templates while using the node source. A record/endpoint is generated only for the first template.
What you expected to happen: Each template is evaluated independently and multiple records are created (each having all IPs of matching nodes).
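To make the expectation concrete, here is a minimal, self-contained Go sketch (illustration only, not external-dns code; the node name `worker-1` is made up) of how the comma-separated `--fqdn-template` value used below should expand into one DNS name per template for a single node:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// node is a stand-in for the object the FQDN template is rendered against.
type node struct {
	Name string
}

func main() {
	// Same value as --fqdn-template in the manifest below.
	fqdnTemplate := "{{.Name}}.z1.k8s.example.org,{{.Name}}.z2.k8s.example.org"
	n := node{Name: "worker-1"} // hypothetical node name

	// Each comma-separated entry is an independent template; every one of
	// them should yield its own DNS name for the node. (external-dns's own
	// template handling may differ in detail; this only shows the expected
	// result.)
	for _, raw := range strings.Split(fqdnTemplate, ",") {
		tmpl := template.Must(template.New("fqdn").Parse(raw))
		var buf bytes.Buffer
		if err := tmpl.Execute(&buf, n); err != nil {
			panic(err)
		}
		fmt.Println(buf.String())
	}
	// Expected output:
	//   worker-1.z1.k8s.example.org
	//   worker-1.z2.k8s.example.org
}
```

Each of these names should then receive an A record containing the node's addresses; currently only the first one does.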
How to reproduce it (as minimally and precisely as possible):
Since this issue only affects the node source, the sink used does not matter. For demonstration purposes, I have used rfc2136 with a simple BIND deployment.
- Apply the manifest below (collapsible section) to any Kubernetes cluster.
- Wait for `external-dns` and `bind` to start.
- Run `kubectl logs <external-dns-pod> | grep "IN A"`. => Only records with `z1.k8s.example.org` are listed => the second zone from the FQDN template (see manifest) is ignored.
Kubernetes Manifests
# --------- external-dns related resources ---------
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: default
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.14.0
        args:
        # Source specific flags below (related to bug)
        - --source=node
        - --fqdn-template={{.Name}}.z1.k8s.example.org,{{.Name}}.z2.k8s.example.org
        - --log-level=debug
        # Sink specific flags below (not related to bug)
        - --registry=txt
        - --txt-prefix=external-dns-
        - --txt-owner-id=k8s
        - --provider=rfc2136
        - --rfc2136-host=bind
        - --rfc2136-port=53
        - --rfc2136-zone=k8s.example.org
        - --rfc2136-tsig-secret=kkrD9vIUaPXg7Av9IwXoSXQgR1kItuJFKjaCkh5imf4=
        - --rfc2136-tsig-secret-alg=hmac-sha256
        - --rfc2136-tsig-keyname=externaldns-key
        - --rfc2136-tsig-axfr
# --------- BIND DNS server as a "dummy"/testing target ---------
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: bind
data:
  named.conf: |
    options {
        directory "/var/cache/bind";
        dnssec-validation auto;
        listen-on-v6 { any; };
    };
    key "externaldns-key" {
        algorithm hmac-sha256;
        secret "kkrD9vIUaPXg7Av9IwXoSXQgR1kItuJFKjaCkh5imf4=";
    };
    zone "k8s.example.org" {
        type master;
        file "/etc/bind/zone/k8s.zone";
        allow-transfer {
            key "externaldns-key";
        };
        update-policy {
            grant externaldns-key zonesub ANY;
        };
    };
  k8s.zone: |
    $TTL 60
    $ORIGIN k8s.example.org.
    @ IN SOA ns1.k8s.example.org. hostmaster.k8s.example.org. (
        2003080800 ; serial number
        60 ; refresh
        60 ; update retry
        60 ; expiry
        60 ; minimum
    )
      IN NS ns1.k8s.example.org.
    ns1 IN A 192.168.254.2
---
apiVersion: v1
kind: Service
metadata:
  name: bind
  namespace: default
  labels:
    app: bind
spec:
  ports:
  - protocol: TCP
    port: 53
    targetPort: 53
    name: dns-tcp
  - protocol: UDP
    port: 53
    targetPort: 53
    name: dns
  selector:
    app: bind
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bind
  name: bind
  namespace: default
spec:
  replicas: 1
  serviceName: "bind"
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      volumes:
      - name: config
        configMap:
          name: bind
      - name: zones
        emptyDir: {}
      initContainers:
      - name: init-zones
        image: busybox:1.28
        command: ['sh', '-c', "false | cp -i /etc/bind/*.zone /etc/bind/zone/"]
        volumeMounts:
        - name: config
          mountPath: "/etc/bind"
          readOnly: true
        - name: zones
          mountPath: "/etc/bind/zone"
      containers:
      - image: ubuntu/bind9:9.18-22.04_beta
        name: bind
        resources:
          limits:
            memory: "100Mi"
            cpu: "0.25"
        ports:
        - containerPort: 53
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        env:
        - name: TZ
          value: "UTC"
        volumeMounts:
        - name: config
          mountPath: "/etc/bind"
          readOnly: true
        - name: zones
          mountPath: "/etc/bind/zone"
      terminationGracePeriodSeconds: 30
Anything else we need to know?:
The root cause is the Endpoints function implementation of the node source, which loops over the nodes and their addresses but not over the FQDN templates. Instead, it always takes the first element of the template array and ignores any additional entries.
I would be happy to submit a Pull Request that addresses this issue by looping over the FQDN template array. Let me know if this is something you would be open to merging upstream!
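Not the actual external-dns source, but a minimal sketch of the fix idea using simplified stand-in types (the real implementation works with the endpoint package and the node informer): loop over every rendered FQDN instead of only the first one, emitting one endpoint per name that carries all node addresses.

```go
package main

import "fmt"

// endpoint is a simplified stand-in for external-dns's endpoint type.
type endpoint struct {
	DNSName string
	Targets []string
}

// buildEndpoints illustrates the intended behavior: one endpoint per
// rendered FQDN template, each holding all addresses of the node.
func buildEndpoints(fqdns []string, nodeIPs []string) []endpoint {
	var endpoints []endpoint
	for _, fqdn := range fqdns { // previously only fqdns[0] was considered
		endpoints = append(endpoints, endpoint{DNSName: fqdn, Targets: nodeIPs})
	}
	return endpoints
}

func main() {
	// Hypothetical node "worker-1" with one internal IP, rendered against
	// both templates from the reproduction manifest.
	fqdns := []string{"worker-1.z1.k8s.example.org", "worker-1.z2.k8s.example.org"}
	ips := []string{"10.0.0.5"}
	for _, ep := range buildEndpoints(fqdns, ips) {
		fmt.Printf("%s IN A %v\n", ep.DNSName, ep.Targets)
	}
}
```

With a change along these lines, the reproduction above should list records for both `z1.k8s.example.org` and `z2.k8s.example.org`.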
Environment:
- External-DNS version: v0.14.0
- DNS provider: RFC2136 (the bug relates only to the source, but this provider was used while discovering it and in the reproduction example)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.