external-dns
Option to ignore geolocation annotations
What would you like to be added:
We need a way to make an external-dns instance ignore some or all annotations on Ingresses/Services. That way, multiple external-dns instances can be configured to behave differently based on the same set of annotations.
For our use case, a flag on the external-dns instance that ignores the following annotations and always creates simple records would suffice (a rough sketch follows the list):
- external-dns.alpha.kubernetes.io/aws-geolocation-continent-code
- external-dns.alpha.kubernetes.io/aws-geolocation-country-code
- external-dns.alpha.kubernetes.io/set-identifier
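As a minimal sketch, the instance that should create only plain records could be started with something like the args below. The `--ignore-annotations` flag is the feature requested here and does not exist in external-dns today; the other flags are existing ones, and the owner ID is just illustrative.

```yaml
# args fragment for a hypothetical "regional" external-dns instance
args:
  - --source=ingress
  - --provider=aws
  - --txt-owner-id=regional   # existing flag; keeps the instances' records apart
  # proposed flag, not implemented yet:
  - --ignore-annotations=external-dns.alpha.kubernetes.io/aws-geolocation-continent-code,external-dns.alpha.kubernetes.io/aws-geolocation-country-code,external-dns.alpha.kubernetes.io/set-identifier
```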
However, a more structural approach would be to add an external-dns instance ID to the annotations themselves and have each instance read only its own annotations, e.g. (sketched on an Ingress below):
- external-dns.global-ID.alpha.kubernetes.io/aws-geolocation-continent-code
- external-dns.regional-ID.alpha.kubernetes.io/aws-geolocation-continent-code
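On an Ingress that could look roughly like the fragment below. The instance-scoped annotation keys are the proposal, not something external-dns understands today, and the instance IDs and values are only illustrative.

```yaml
metadata:
  annotations:
    # proposed syntax: only an instance configured with ID "global-ID" would honour these
    external-dns.global-ID.alpha.kubernetes.io/aws-geolocation-continent-code: "EU"
    external-dns.global-ID.alpha.kubernetes.io/set-identifier: "frankfurt"
    # an instance configured with ID "regional-ID" finds none of its own annotations
    # and would create a plain record for the same host
```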
This will work in conjunction with domain filters.
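For completeness, the split between the two instances would presumably rely on the existing --domain-filter and --exclude-domains flags, along these lines (domains taken from the example below):

```
# global instance: manages records under the apex domain, but not the regional subdomain
--domain-filter=eks-example.com
--exclude-domains=eu-central-1.eks-example.com

# regional instance: manages only the regional subdomain
--domain-filter=eu-central-1.eks-example.com
```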
Why is this needed:
In our setup, every Ingress has both a global and a regional record. The global records should be geolocated and point to the nearest cluster, while the regional ones should always point to a specific cluster, e.g. (a sketch of such an Ingress follows the list):
- awx.eks-example.com → should point to the primary cluster in Frankfurt if the user is in Europe and to the secondary one in Singapore if the user is in Asia
- awx.eu-central-1.eks-example.com → should always point to the cluster in Frankfurt, independently of the user's current position
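Such an Ingress presumably carries the existing external-dns geolocation annotations, roughly like this (the annotation keys are the ones listed above; the values are only illustrative):

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/aws-geolocation-continent-code: "EU"
    external-dns.alpha.kubernetes.io/set-identifier: "frankfurt"
    # both the "global" and the "regional" external-dns instance read these annotations,
    # so the regional record also becomes a geolocation record unless one instance can ignore them
```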
If the proposed solutions look viable, we could also work on a PR. Thanks!
For our setup we would like to have 4 different URLs available:
- public global -> www.product.com -> goes to the public ingress of the closest region's EKS cluster
- public regional -> www.eu-west-2.product.com -> goes to the public ingress of that region's EKS cluster
- private global -> www.product.lan -> goes to the private ingress of the closest region's EKS cluster
- private regional -> www.eu-west-2.product.lan -> goes to the private ingress of that region's EKS cluster
The headache is that, at the moment, I would need to create 4 separate Ingresses for each app we expose in order to set up the DNS with external-dns. We have loads of apps. I would then create 2 external-dns instances to create the required DNS entries.
What I would like instead is to create 4 external-dns instances and have only 2 Ingresses per application, a private and a public one. The problem is that I need to add a geolocation annotation for the public global and private global addresses, which means the external-dns instances for public regional and private regional would turn the regional DNS entries into geo entries.
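To illustrate the clash, a single public Ingress might look roughly like this (hostnames and values are only illustrative; the annotation keys are existing external-dns ones):

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: www.product.com,www.eu-west-2.product.com
    external-dns.alpha.kubernetes.io/aws-geolocation-country-code: "GB"
    external-dns.alpha.kubernetes.io/set-identifier: "eu-west-2"
    # the geolocation annotation is meant only for the "public global" instance creating
    # www.product.com, but the "public regional" instance creating www.eu-west-2.product.com
    # reads it as well and turns that record into a geo record
```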
It would be nice to have some kind of namespacing mechanism on the annotations that can be scoped to an external-dns instance, or to be able to filter annotations per external-dns instance. An annotation namespace would allow for all kinds of advanced setups.
Annotation namespace

ingress:

```yaml
annotations:
  external-dns.alpha.kubernetes.io/aws-geolocation-country-code: "<namespace>|GB"
```

command line arg:

```
--namespace='<namespace>'
```
Filter annotations

command line arg:

```
--ignore-annotations='external-dns.alpha.kubernetes.io/aws-geolocation-continent-code,external-dns.alpha.kubernetes.io/aws-geolocation-country-code'
```
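Under the filter-annotations proposal, each instance would pair this with the existing --domain-filter flag, e.g. for the two public instances (--domain-filter and --exclude-domains are existing flags, --ignore-annotations is the proposed one; domains are the ones from the example above):

```
# public global instance: honours the geolocation annotations
--domain-filter=product.com
--exclude-domains=eu-west-2.product.com

# public regional instance: proposed flag makes it create plain records only
--domain-filter=eu-west-2.product.com
--ignore-annotations='external-dns.alpha.kubernetes.io/aws-geolocation-continent-code,external-dns.alpha.kubernetes.io/aws-geolocation-country-code'
```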
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.