
Issue with adding multiple A records for the same DNS and multiple clusters

rnkhouse opened this issue 2 years ago • 10 comments

What happened: I have 2 AKS clusters with external-dns installed in both, using Istio + Azure Private DNS. The arguments I am passing are below:

    - --source=service
    - --source=ingress
    - --source=istio-gateway
    - --source=istio-virtualservice
    - --provider=azure-private-dns
    - --azure-resource-group=DOMAIN_RG_VALUE
    - --azure-subscription-id=AZURE_SUBSCRIPTION_ID
    - --log-level=debug
    - --registry=txt
    - --txt-owner-id=cluster-1

The second cluster is set up with --txt-owner-id=cluster-2.

But the issue is that it does not add 2 A records to DNS.

I checked the logs:

time="2022-05-10T21:22:14Z" level=debug msg="Skipping endpoint grafana.abc.com 0 IN A 10.2.10.2 [] because owner id does not match, found: "cluster-2", required: "cluster-1"" time="2022-05-10T21:22:14Z" level=debug msg="Skipping endpoint grafana.abc.com 300 IN A 10.1.10.2 [] because owner id does not match, found: "cluster-2", required: "cluster-1""

In the Azure portal, I see only one IP address being added (10.1.10.2). There should be a second record with 10.2.10.2.

[Screenshot: Azure portal showing a single A record, taken 2022-05-10]

What you expected to happen: It should add 2 A records, since the two clusters expose different IP addresses.

How to reproduce it (as minimally and precisely as possible): Install 2 AKS clusters with different private IP addresses, set both up with Istio and Azure Private DNS, and have them publish the same hostname. Only one record will be added.

Environment:

  • External-DNS version: v0.10.0
  • DNS provider: azure-private-dns
  • Others: istio

rnkhouse avatar May 10 '22 21:05 rnkhouse

Which resource is the DNS name resolving to? Please print the YAML for it:

# cluster 1
kubectl get svc,ingress -o wide
kubectl get svc xxxx -o yaml    # or: kubectl get ingress xxxx -o yaml

# cluster 2
kubectl get svc,ingress -o wide
kubectl get svc xxxx -o yaml    # or: kubectl get ingress xxxx -o yaml

lou-lan avatar May 11 '22 14:05 lou-lan

@lou-lan It's a service located in the istio-system namespace.

cluster 1:

NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                      AGE   SELECTOR
istio-internal-ingressgateway   LoadBalancer   10.6.38.143   10.2.10.2     15021:32225/TCP,80:32098/TCP,443:30775/TCP   54d   app=istio-internal-ingressgateway,istio=internal-ingressgateway

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-internal-ingressgateway","install.operator.istio.io/owning-resource":"unknown","istio":"internal-ingressgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","release":"istio"},"name":"istio-internal-ingressgateway","namespace":"istio-system"},"spec":{"loadBalancerIP":"10.2.10.2","ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"app":"istio-internal-ingressgateway","istio":"internal-ingressgateway"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2022-03-17T22:55:14Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: istio-internal-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    istio: internal-ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    release: istio
  name: istio-internal-ingressgateway
  namespace: istio-system
  resourceVersion: "7556007"
  uid: 1340e1d4-cf47-446e-a257-6f8f929bf456
spec:
  clusterIP: 10.6.38.143
  clusterIPs:
  - 10.6.38.143
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 10.2.10.2
  ports:
  - name: status-port
    nodePort: 32225
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 32098
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30775
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-internal-ingressgateway
    istio: internal-ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.2.10.2

cluster 2:

NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                      AGE   SELECTOR
istio-internal-ingressgateway   LoadBalancer   10.8.12.130   10.1.10.2     15021:32225/TCP,80:32098/TCP,443:30775/TCP   54d   app=istio-internal-ingressgateway,istio=internal-ingressgateway

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-internal-ingressgateway","install.operator.istio.io/owning-resource":"unknown","istio":"internal-ingressgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","release":"istio"},"name":"istio-internal-ingressgateway","namespace":"istio-system"},"spec":{"loadBalancerIP":"10.1.10.2","ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"app":"istio-internal-ingressgateway","istio":"internal-ingressgateway"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2022-03-17T22:55:14Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: istio-internal-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    istio: internal-ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    release: istio
  name: istio-internal-ingressgateway
  namespace: istio-system
  resourceVersion: "7556007"
  uid: 1340e1d4-cf47-446e-a257-6f8f929bf456
spec:
  clusterIP: 10.8.12.130
  clusterIPs:
  - 10.8.12.130
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 10.1.10.2
  ports:
  - name: status-port
    nodePort: 32225
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 32098
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30775
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-internal-ingressgateway
    istio: internal-ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.1.10.2

rnkhouse avatar May 11 '22 17:05 rnkhouse

It looks like you are using the same domain name. Can you use a different domain name?

lou-lan avatar May 11 '22 17:05 lou-lan

@lou-lan Actually, we need to point one domain at the services in both clusters, so in Azure DNS there should be 2 A records for the same domain. I think my issue is similar to this: https://github.com/kubernetes-sigs/external-dns/issues/685

rnkhouse avatar May 11 '22 17:05 rnkhouse

Is RBAC used for these clusters? How was external-dns deployed, and what is the RBAC configuration?

darkn3rd avatar May 11 '22 18:05 darkn3rd

@darkn3rd Yes, RBAC is enabled. Below is the config:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"] 
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
- apiGroups: ["networking.istio.io"]
  resources: ["gateways","virtualservices"]
  verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
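
For completeness, a minimal sketch of the ServiceAccount the ClusterRoleBinding above points at (assumed to live in the default namespace, as the subject declares):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: default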

Deployment file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.10.0
        args:
        - --source=service
        - --source=ingress
        - --source=istio-gateway
        - --source=istio-virtualservice
        - --provider=azure-private-dns
        - --azure-resource-group=DOMAIN_RG_VALUE
        - --azure-subscription-id=AZURE_SUBSCRIPTION_ID
        - --log-level=debug
        - --registry=txt
        - --txt-owner-id=K8S_NAME
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file
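
The azure-config-file secret mounted above is typically created from an azure.json credentials file, along the lines of the external-dns Azure provider tutorial (all field values below are placeholders, not taken from this issue):

# sketch: azure.json with service-principal credentials
# {
#   "tenantId": "...",
#   "subscriptionId": "AZURE_SUBSCRIPTION_ID",
#   "resourceGroup": "DOMAIN_RG_VALUE",
#   "aadClientId": "...",
#   "aadClientSecret": "..."
# }
kubectl create secret generic azure-config-file --namespace default --from-file=azure.json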

rnkhouse avatar May 11 '22 18:05 rnkhouse

@lou-lan @darkn3rd Can you please advise on what can be done to use the same DNS name from multiple clusters?

rnkhouse avatar May 17 '22 14:05 rnkhouse

@lou-lan @darkn3rd Can you please advise on what can be done to use the same DNS name from multiple clusters?

We are not currently resolving DNS to multiple clusters (we use CoreDNS).
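
One possible workaround, since the TXT registry allows only one owner per record name: give each cluster its own hostname via annotation and aggregate the two names above DNS (for example with a load balancer or a health-checked traffic manager). A sketch, where the grafana-cluster1/grafana-cluster2 names are hypothetical:

# cluster 1 (cluster 2 would use grafana-cluster2.abc.com)
apiVersion: v1
kind: Service
metadata:
  name: istio-internal-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    external-dns.alpha.kubernetes.io/hostname: grafana-cluster1.abc.com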

lou-lan avatar May 17 '22 15:05 lou-lan

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 15 '22 15:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 14 '22 16:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 14 '22 17:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 14 '22 17:10 k8s-ci-robot