
external-dns does not update DNS entries for multiple domains/hosts using a single load balancer

Open · keyur-saloodo opened this issue 1 year ago

What happened: When creating an Ingress with multiple domains, external-dns does not update anything, nor does it generate any logs.

What you expected to happen: external-dns should update the alias records in Route 53 for multiple domains under a single load balancer.

How to reproduce it (as minimally and precisely as possible):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:123456789:certificate/ID
    alb.ingress.kubernetes.io/group.name: ingress1
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/subnets: private-subnets
    alb.ingress.kubernetes.io/target-type: ip
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/hostname: private.domain.com
    kubernetes.io/ingress.class: alb
spec:
  ingressClassName: alb
  rules:
    - host: private.domain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: private-app
                port:
                  number: 80

The only logs I can see in the external-dns pod are:

time="2024-04-22T17:42:26Z" level=info msg="Instantiating new Kubernetes client"
time="2024-04-22T17:42:26Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-04-22T17:42:26Z" level=info msg="Created Kubernetes client https://172.20.0.1:443"
time="2024-04-22T17:42:27Z" level=info msg="Applying provider record filter for domains: [private.domain.com. .private.domain.com.]"
time="2024-04-22T17:42:27Z" level=info msg="All records are already up to date"

Anything else we need to know?: Previously I tried a single Ingress file with multiple hosts, like the one below, but had no luck, so I created a separate file for each Ingress and tried that, but no luck either.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: first-private.domain.com,second-private.domain.com
    external-dns.alpha.kubernetes.io/alias: "true"
spec:
  ingressClassName: alb
  rules:
    - host: first-private.domain.com
    - host: second-private.domain.com
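
For reference, a fully specified version of that multi-host Ingress would also need a metadata.name and an http section per rule; a minimal sketch, where multi-host-ingress, first-app, and second-app are hypothetical names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress     # hypothetical name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: first-private.domain.com,second-private.domain.com
    external-dns.alpha.kubernetes.io/alias: "true"
spec:
  ingressClassName: alb
  rules:
    - host: first-private.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-app      # hypothetical Service
                port:
                  number: 80
    - host: second-private.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: second-app     # hypothetical Service
                port:
                  number: 80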

Environment:

  • External-DNS version (use external-dns --version): v0.14.1
  • DNS provider: route53
  • Others: Single load balancer with multiple hostnames/rules/domains

keyur-saloodo · Apr 22 '24

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 21 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Aug 20 '24

Any update on this issue? I'm also facing the same issue with my deployment.

singhkrshivam · Sep 13 '24

@singhkrshivam As a workaround, I created separate values files for each domain and added the alb.ingress.kubernetes.io/group.name annotation to each Ingress.

keyur-saloodo · Sep 13 '24

@keyur-saloodo thanks for the quick response. So you created multiple Ingress resources, each with a hostname annotation, and added alb.ingress.kubernetes.io/group.name: same-group-name to all of them. Please correct me if I'm wrong.

external-dns.alpha.kubernetes.io/ingress-hostname-source: defined-hosts-only
external-dns-controller: external-dns-public

singhkrshivam · Sep 13 '24

Yes, correct. The defined-hosts-only value can be found in values.yaml.

keyur-saloodo · Sep 13 '24
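
To summarize the workaround in manifest form: each domain gets its own Ingress, and all of them share the same alb.ingress.kubernetes.io/group.name so the AWS Load Balancer Controller merges them onto one ALB. With external-dns.alpha.kubernetes.io/ingress-hostname-source: defined-hosts-only, external-dns takes hostnames from spec.rules, so a separate hostname annotation is not needed. A minimal sketch for one of the Ingresses (shared-alb, first-private, and first-app are hypothetical names; the second Ingress differs only in name, host, and backend):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-private                                  # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb   # same value on every Ingress in the group
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/ingress-hostname-source: defined-hosts-only  # use hosts from spec.rules
spec:
  ingressClassName: alb
  rules:
    - host: first-private.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-app                        # hypothetical Service
                port:
                  number: 80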