external-dns
Support additional routing alias Route53 records
What would you like to be added:
An annotation called external-dns.alpha.kubernetes.io/routing-alias (or something like that) that lets you set an additional alias record pointing to the record created from external-dns.alpha.kubernetes.io/hostname.
An example of a service with a latency-based rrset, where the "main" URL is myservice.mydomain.com but you would be able to reach a particular deployment of it at myservice-<region>.mydomain.com:
external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com
external-dns.alpha.kubernetes.io/routing-alias: myservice-<region>.mydomain.com
external-dns.alpha.kubernetes.io/set-identifier: myservice-<region>
external-dns.alpha.kubernetes.io/aws-region: <region>
Why is this needed: For many multi-region services at my company, we like to have region-specific records to use for testing specific regions' deployments, but our clients talk to a "main" URL that uses latency-based routing to redirect to the closest region. Of course, we could just use the auto-generated DNS names for the ELBs, but that requires looking up the names with kubectl. So right now we have external-dns create a simple alias record for the ELB and then we create the rrset out of band. Adding this feature would mean we would not have to create the rrset out of band.
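For concreteness, the out-of-band latency record described above corresponds to a Route 53 change roughly like the following (expressed here in YAML for readability; the hosted-zone ID, domain, and region are illustrative placeholders, and the alias target points at the per-region record external-dns already manages):

```yaml
# Hypothetical change batch for one region's latency record,
# normally submitted via `aws route53 change-resource-record-sets`.
# Z0000EXAMPLE and the domain names are placeholders.
Comment: Out-of-band latency-based alias for us-east-1
Changes:
  - Action: UPSERT
    ResourceRecordSet:
      Name: myservice.mydomain.com
      Type: A
      SetIdentifier: myservice-us-east-1   # one entry per region
      Region: us-east-1                    # latency routing policy
      AliasTarget:
        HostedZoneId: Z0000EXAMPLE         # zone of the alias target
        DNSName: myservice-us-east-1.mydomain.com
        EvaluateTargetHealth: true
```

The proposed annotation would let external-dns own records like this instead of leaving them to be maintained out of band.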
PowerDNS has ALIAS records as well. Would love support for this feature; critical for anyone hosting mail.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
this bot is so annoying
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@2rs2ts: I am trying to test a similar setup for my apps in AWS east and west. Through our current EKS module (ingress and external-dns) setup, we create region-specific Route 53 records pointing to the east and west ALBs. When I try to set up latency-based routing (for testing now, but we want other routing policies working as well), I pass the annotations below in the ingress.
external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com
external-dns.alpha.kubernetes.io/routing-alias: myservice-<region>.mydomain.com
external-dns.alpha.kubernetes.io/set-identifier: myservice-<region>
external-dns.alpha.kubernetes.io/aws-region: <region>
It keeps failing with the error message below.
{"level":"error","msg":"InvalidChangeBatch: [RRSet with DNS name myservice-use1.aws.dshrp.com., type A, SetIdentifier myservice-use1, and Region Name=us-east-1 cannot be created because a non-latency RRSet with the same name and type already exists., RRSet with DNS name myservice-use1.aws.dshrp.com.
Just to make it work, I manually tried deleting the Route 53 record set myservice-use1.aws.dshrp.com (the A and TXT records), but external-dns does not recreate the deleted Route 53 records. When I remove all the above annotations everything is back up and running, but I am not able to make it work with any kind of routing policy.
@sushantkumar12 I'm confused by your comment. To be clear, this GitHub issue is a feature request; the thing you tried doesn't exist yet. Please upvote the issue, and hopefully it will get prioritized sooner!
@2rs2ts Yes, I realized that later; looking through multiple posts/issues, I somehow thought it had worked for you. From AWS documentation it seems like this feature is available. https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#routing-policies
I'm still not following you there. The docs you linked are external-dns' docs, and they still do not include my feature request here. Is it possible you are just talking about the set-identifier? That feature works well, but for me, it is not enough for a complete solution.
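For anyone following along, the set-identifier mechanism mentioned here (documented in the routing-policies section of the AWS tutorial linked above) is applied per regional Service. A rough sketch with illustrative values, which yields the latency-routed main record but not the extra per-region alias this issue requests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com
    # Must be unique per region so Route 53 treats each record as one
    # member of the latency-based record set.
    external-dns.alpha.kubernetes.io/set-identifier: myservice-us-east-1
    external-dns.alpha.kubernetes.io/aws-region: us-east-1
spec:
  type: LoadBalancer
  selector:
    app: myservice
  ports:
    - port: 80
      targetPort: 8080
```

Each region's deployment carries its own set-identifier and aws-region values; the missing piece is the additional myservice-&lt;region&gt; alias, which still has to be created out of band.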
No, I am not talking about set-identifier; I am talking about routing policies, the same as you. I have a similar use case to yours, where I want to route top-level DNS records between the east and west DNS records based on the different routing policies mentioned in the above doc.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/triage priority/backlog
@szuecs: The label(s) triage/priority/backlog cannot be applied, because the repository doesn't have them.
In response to this:
/triage priority/backlog
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten