external-dns
External DNS does not update NodePort records in Cloud Map
Hello,
I am using External DNS together with Cloud Map. During configuration I used a Service of type NodePort instead of the LoadBalancer type that the instructions describe, and I followed [this tutorial](https://kubernetes-sigs.github.io/external-dns/v0.14.0/tutorials/aws-sd/) to install External DNS. On the first install, External DNS created the correct service and service instance in Cloud Map. While testing, however, I repeatedly deleted and re-applied nginx, so the service endpoint changed, but External DNS did not update the IP address in Cloud Map. I then created a new Service, and External DNS created a new Cloud Map service but registered the same service instance, so I now have two different services with the same IP address. That IP address belongs to the Service I installed first, so it is wrong for both of the current services.
I found that deleting the worker node resolved the above issue, while reinstalling external-dns did not.
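For reference, the stale registration can be inspected directly in Cloud Map with the AWS CLI. This is only a diagnostic sketch; the service and instance IDs below are placeholders:

```sh
# List the Cloud Map services external-dns created in the namespace's region
aws servicediscovery list-services --region ap-southeast-1

# Show the instances registered under one service; the attributes contain
# the IP and port that external-dns last registered
aws servicediscovery list-instances --service-id srv-xxxxxxxxxxxxxxxx --region ap-southeast-1

# While debugging, a stale instance can also be removed by hand
aws servicediscovery deregister-instance \
  --service-id srv-xxxxxxxxxxxxxxxx \
  --instance-id <instance-id> \
  --region ap-southeast-1
```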
I configured external-dns as follows.
External DNS
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxx:role/xxx-xxx-external-dns-role
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0
          env:
            - name: AWS_REGION
              value: ap-southeast-1 # put your Cloud Map namespace region here
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=xxxx.local # makes ExternalDNS see only the namespaces matching this domain; omit the filter to process all available namespaces
            - --provider=aws-sd
            - --aws-zone-type=private # only look at private namespaces; valid values are public, private, or empty for both
            - --txt-owner-id=my-identifier
            - --interval=20s
            - --log-level=debug
            - --registry=aws-sd
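For completeness, this is how the deployment and its sync loop can be checked with kubectl (assuming external-dns runs in kube-system, which is where the ClusterRoleBinding above points):

```sh
# Confirm the deployment and the IRSA-annotated service account it uses
kubectl -n kube-system get deployment external-dns -o wide
kubectl -n kube-system describe serviceaccount external-dns

# With --log-level=debug and --interval=20s the generated endpoints are
# printed on every sync cycle
kubectl -n kube-system logs deployment/external-dns -f
```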
Nginx
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.xxxx.local
spec:
  type: NodePort
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              name: http
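One thing that may matter here: for a Service of type NodePort, external-dns registers node addresses rather than pod or cluster IPs as the record targets (the 10.0.220.121 in the logs below looks like a node internal IP and is shared by both services), so the record tracks nodes rather than pod endpoints. The placement can be cross-checked with:

```sh
# Node internal IPs that a NodePort record can point at
kubectl get nodes -o wide

# Which node the nginx pod is actually running on after a re-apply
kubectl get pods -l app=nginx -o wide

# The NodePort assigned to the Service (this is the port in the SRV record)
kubectl get service nginx -o wide
```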
Logs
time="2024-01-08T14:49:47Z" level=debug msg="Endpoints generated from service: xxxx-system/kong-test: [_kong-test._tcp.kong-test.xxxx.local 20 IN SRV 0 50 30813 kong-test.xxxx.local [] kong-test.xxxx.local 20 IN A 10.0.220.121 []]"
time="2024-01-08T14:49:47Z" level=debug msg="Endpoints generated from service: xxxx-system/nginx: [_nginx._tcp.nginx.xxxx.local 0 IN SRV 0 50 32286 nginx.xxxx.local [] nginx.xxxx.local 0 IN A 10.0.220.121 []]"
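Both A records above end up on 10.0.220.121. To check what the private namespace actually resolves to from inside the VPC, a throwaway pod can query the names directly; the debug image below is only an example, any image that ships dig works:

```sh
# Start a temporary pod with DNS tools; the image is only an example
kubectl run -it --rm dns-debug --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- sh

# Inside the pod: compare the A record and the SRV record registered by external-dns
dig +short nginx.xxxx.local
dig +short SRV _nginx._tcp.nginx.xxxx.local
```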
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Hello,
I also observed the same behavior. Can someone please look into it?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.