
Support for adding multiple IPs to a single A Record

Open Thakurvaibhav opened this issue 6 months ago • 8 comments

What happened: We are migrating from nginx-ingress to envoy-gateway. We use Google Cloud DNS for DNS. external-dns creates A records for ingress hosts, pointing at the IP of the nginx-ingress load balancer.

When we create an HTTPRoute for the same host as the Ingress, external-dns will not add the Gateway's IP address to the existing record unless the Ingress resource is deleted.

What you expected to happen: external-dns should add the IPs for both the Ingress and the HTTPRoute to the same A record if one already exists for that host.

How to reproduce it (as minimally and precisely as possible):

  • Create a microservice and expose it via an Ingress
  • Observe external-dns creating the DNS entry
  • Install envoy-gateway and create an HTTPRoute for the same service and host (see the manifests sketched after this list)
  • Observe external-dns doing nothing
  • Delete the Ingress resource; external-dns now deletes the previous DNS entry and creates a new one
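For completeness, a minimal pair of manifests that reproduces this; the resource names, Gateway name, and host are made up for illustration:

```yaml
# Ingress managed by nginx-ingress; external-dns creates the A record
# for app.example.com from this resource's load-balancer status IP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
---
# HTTPRoute for the same host, attached to an envoy-gateway Gateway
# (Gateway name is hypothetical). external-dns, running with
# --source=gateway-httproute, sees the same hostname but does not
# append the Gateway's IP to the existing A record.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo
spec:
  parentRefs:
    - name: envoy-gateway
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: demo
          port: 80
```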

Anything else we need to know?: The ability to have two IPs in the same A record is very beneficial when slowly moving traffic to a different load balancer.

Relevant Slack Thread: https://kubernetes.slack.com/archives/C771MKDKQ/p1747285954976279

Environment: GKE 1.31

  • External-DNS version (use external-dns --version): v20241219-v0.15.1
  • DNS provider: gcloud
  • Others:
    • envoy-gateway version 1.3.1
    • external-dns pod has the following arguments (sketched as a container spec after this list)
      • --policy=upsert-only
      • --source=service
      • --source=ingress
      • --source=gateway-httproute
      • --source=gateway-grpcroute
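For reference, the relevant part of the external-dns Deployment looks roughly like this. Only the flags listed above come from this report; the image tag and the provider flag are illustrative assumptions:

```yaml
# Sketch of the external-dns container spec described above.
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.15.1  # assumed tag
    args:
      - --provider=google          # assumption: Google Cloud DNS provider
      - --policy=upsert-only
      - --source=service
      - --source=ingress
      - --source=gateway-httproute
      - --source=gateway-grpcroute
```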

cc: @szuecs

Thakurvaibhav avatar May 18 '25 16:05 Thakurvaibhav

@Thakurvaibhav just realized that you are using GCP. This feature seems to be under discussion in the PR. I'm not sure whether someone is actively working on that though; I will check with @szuecs and try to help.

jhonis avatar May 19 '25 13:05 jhonis

Thanks for linking that, @jhonis! 🙌

GCP routing policies are definitely powerful, and in some cases, they might help address this issue. However, what I’m really looking for is a way for ExternalDNS to append to an existing A record—specifically, to allow multiple IPs in the same A record for a given domain.

From what I can tell, ExternalDNS currently doesn’t support this behavior, which makes gradual migration between load balancers (like from nginx to envoy-gateway) a bit tricky.
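Concretely, the desired end state is a single record set carrying both load-balancer IPs. In Cloud DNS export notation it would look something like this (host, TTL, and IPs are made up for illustration):

```yaml
# Desired A record during migration: both the nginx-ingress LB IP
# and the envoy-gateway LB IP answer for the same host.
kind: dns#resourceRecordSet
name: app.example.com.
type: A
ttl: 300
rrdatas:
  - 203.0.113.10   # nginx-ingress load balancer (illustrative)
  - 203.0.113.20   # envoy-gateway load balancer (illustrative)
```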

Thakurvaibhav avatar May 19 '25 16:05 Thakurvaibhav

Got your point!

However, even though the PR is about weighted or geolocation routes, you could use it to achieve the same behavior you would get with multiple IPs in the same record. AWS even has a multi-value option which, in my opinion, was conceived to do exactly this without weighting.
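For illustration, this is roughly how the weighted variant already works on AWS via external-dns annotations on the two sources; the set identifiers and weights below are made up, and the Google routing-policy equivalent is what the linked PR discusses:

```yaml
# Sketch: weighted split across two load balancers on Route53.
# Old path: the Ingress keeps most of the traffic.
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/set-identifier: nginx
    external-dns.alpha.kubernetes.io/aws-weight: "90"
---
# New path: the HTTPRoute takes a small share during the migration.
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/set-identifier: envoy
    external-dns.alpha.kubernetes.io/aws-weight: "10"
```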

The way you want to do it is unlikely to work, and it's kind of against the external-dns rules, because each record has its owner, and if you have two owners for the same...

  • dog: it will die of hunger :D
  • domain: the two owners will compete for ownership of the record
jhonis avatar May 19 '25 17:05 jhonis

I see; I was also worried about record ownership. So, is the only way forward (assuming the linked PR is implemented and released) to first migrate to a routing-policy-based record and then add another IP to it?

Thakurvaibhav avatar May 19 '25 21:05 Thakurvaibhav

I would say yes, unless @szuecs has some other idea.

As you mentioned, this is a migration and therefore a temporary state, so the other way is to delete the TXT records to take the ownership away from external-dns and add the new IPs manually. When you finish the migration, you delete the record and let external-dns recreate it. Not the "perfect" way, but it should work.

jhonis avatar May 20 '25 13:05 jhonis

DNS allows setting multiple records with the same name and type (CNAME excluded) pointing to different targets. So for A, AAAA, TXT, and so on (everything except CNAME), I see no reason why external-dns, if it owns the name and type of the record, should not set more than one record per type. So yes, I would say it's perfectly fine to have:

  • 1 ingress with a status of multiple destinations
  • 1 ingress with one or more destinations in its status
  • 1 source (e.g. ingress, service, ...) that has different destinations

As long as the external-dns instance owns the (name, type) tuple it is fine, but if it does not own it, it should not touch it.

I hope it's clear what I mean.
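To illustrate the first bullet: if a controller publishes more than one load-balancer IP in the Ingress status (a sketch with illustrative IPs; the status block is written by the controller, not by the user), external-dns would emit one A record set with both targets:

```yaml
# Ingress status with two load-balancer IPs; for app.example.com this
# would become a single A record set with both IPs as targets.
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10
      - ip: 203.0.113.20
```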

szuecs avatar May 20 '25 14:05 szuecs

> the other way is to delete the TXT records to take the ownership from external-dns and add the new IPs manually

Yes, but this won't work for 300+ GKE and AKS clusters, each having multiple endpoints.

Thakurvaibhav avatar May 20 '25 15:05 Thakurvaibhav

> As long as the external-dns instance owns the (name, type) tuple it is fine, but if it does not own it, it should not touch it.

Thank you for chiming in, @szuecs! In my case, the external-dns instance owns the record, but it is still not working. Could this be because the ownership TXT data includes the ingress name in it?
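For reference, the ownership TXT record looks roughly like this in Cloud DNS export notation (host, owner id, and resource values are illustrative); note the external-dns/resource field naming the specific Ingress, which may be what matters here:

```yaml
# Ownership TXT record maintained by the external-dns TXT registry
# (illustrative values). The external-dns/resource field records which
# source object (here an Ingress) produced the A record.
kind: dns#resourceRecordSet
name: a-app.example.com.
type: TXT
ttl: 300
rrdatas:
  - '"heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/default/demo"'
```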

Thakurvaibhav avatar May 20 '25 15:05 Thakurvaibhav

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 18 '25 16:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 17 '25 16:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 17 '25 16:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Oct 17 '25 16:10 k8s-ci-robot