Consider adding per source configurations
What would you like to be added:
Currently external-dns shares CLI configuration between all sources. For instance, the node, service, and ingress sources all use the --label-filter CLI flag. This makes it quite unergonomic to run, because these Kubernetes objects are different in nature and agreeing on a common set of labels across them is not trivial.
Would you be open to a PR which introduces source-specific versions of existing flags?
For a start I propose the following:
- --$source-label-filter
- --$source-namespace
- --$source-fqdn-template
- --$source-annotation-filter
To maintain backward compatibility, all sources continue to use the existing "global" CLI flags unless a source-specific one is given. That is, with these flags:
--source=service --source=ingress --label-filter=inglabel=true --service-label-filter=svclabel=true
the ingress source will run with the inglabel=true label filter and the service source will use svclabel=true.
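To make the intended fallback rule concrete, here is a minimal Go sketch. It assumes a hypothetical Config struct with illustrative field names; it is not part of the current external-dns code, only a sketch of how a per-source flag could fall back to the global one.

```go
package main

import "fmt"

// Config holds the global flag value plus optional per-source overrides.
// Field and flag names are hypothetical; they only illustrate the fallback rule.
type Config struct {
	LabelFilter        string            // value of --label-filter (global)
	SourceLabelFilters map[string]string // values of --$source-label-filter, keyed by source name
}

// labelFilterFor returns the per-source label filter if one was given,
// otherwise it falls back to the global --label-filter value.
func (c *Config) labelFilterFor(source string) string {
	if v, ok := c.SourceLabelFilters[source]; ok && v != "" {
		return v
	}
	return c.LabelFilter
}

func main() {
	cfg := &Config{
		LabelFilter:        "inglabel=true",
		SourceLabelFilters: map[string]string{"service": "svclabel=true"},
	}
	fmt.Println(cfg.labelFilterFor("ingress")) // "inglabel=true" via the global fallback
	fmt.Println(cfg.labelFilterFor("service")) // "svclabel=true" via the per-source override
}
```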
Why is this needed:
The current workaround is to run multiple external-dns instances with different sources and ensure there is no overlap between the DNS entries they manage. It is a well-supported mode, but it increases deployment complexity, requires more resources, and is more prone to API rate limiting on both the Kubernetes and DNS provider sides.
@redbaron I think we already have (a lot of) flags.
Without thinking too much about it, as a user, I'd probably prefer to use a config file or a CR.
> I'd probably prefer to use a config file or a CR.
Me too, but designing a config format is a massive undertaking given the versatility of external-dns, so I have no hope it can land soon-ish. On the other hand, the CLI pattern is already established, so extending it with the schema above seems within reach.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Guys, do you conceptually agree or disagree with the proposed change?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.