external-dns
Domain filter on ingress rule, but FQDN template for name generation
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
> It is the only place I have.
If you're looking for help, check our [docs](https://github.com/kubernetes-sigs/external-dns/tree/HEAD/docs).
> The docs do not explain a lot of the concepts, which I have already tried to piece together from the code.
You can also post your question on the [Kubernetes Slack #external-dns](https://kubernetes.slack.com/archives/C771MKDKQ).
> I do not have access to it
Current setup
I am trying a unique configuration due to my split DNS resolver setup. Here is the external-dns config:
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
args:
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=kratos-cluster
- --provider=rfc2136
- --rfc2136-host=10.10.10.3
- --rfc2136-port=53535
- --rfc2136-zone=rpz
- --rfc2136-tsig-secret-alg=hmac-sha256
- --rfc2136-tsig-keyname=externaldns
- --rfc2136-tsig-secret=***REDACTED***
- --rfc2136-tsig-axfr
- --source=ingress
- --ignore-ingress-rules-spec
- --combine-fqdn-annotation
- --fqdn-template={{.Name}}.adyanth.site.rpz
- --domain-filter=adyanth.site.rpz
- --log-level=debug
The target is a bind9 RPZ zone.
What is working
After a day in the source code, I understood enough of the logic to use combine-fqdn-annotation, fqdn-template, and domain-filter (which feels redundant right now?). Currently the above config generates FQDNs of the form <ingress-name>.adyanth.site.rpz.
What I would like to achieve
I only want external-dns to act on ingresses whose rule host is of the form *.adyanth.site (I do not want the host to be used for the DNS name, just for validation). So, if the ingress host is test.adyanth.site, it should work as it does right now; but if the ingress host is test.adyanth.lan, it should not. Currently both scenarios get a DNS entry.
What I think might work?
Can I use the template with conditions to check the rule? From what I understood, the fqdn-template is a standard Go template and is passed the Ingress spec for templating. Can I use more complex logic to determine whether the hosts in the Ingress match *.adyanth.site, and only then append the .rpz suffix so that the domain-filter lets the entry through?
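For what it is worth, a conditional template is syntactically possible using only the comparison actions built into Go's text/template, assuming the template really is executed against the full Ingress object so that .Spec.Rules is reachable (an assumption, not something confirmed here). The host below is just a placeholder, and plain templates have no wildcard or suffix matching, so only an exact host check can be expressed this way:

args:
# hypothetical sketch: emit a name only when the first rule's host matches exactly;
# otherwise the template renders an empty string (and errors out if the ingress has no rules)
- --fqdn-template={{ if eq (index .Spec.Rules 0).Host "test.adyanth.site" }}{{ .Name }}.adyanth.site.rpz{{ end }}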
Is there a better way to achieve this?
My rpz zone is override.
With domain-filter=override I was able to create the fqdn-template {{.Annotations.hostname}}.override,
so I just create an annotation like "hostname: servicename.mydomain.tld" and everything works as expected. The only problem is the log messages for ingresses without the annotation, which end up with just the bare ".override" name.
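A minimal Ingress sketch of that setup (all names and the backend are hypothetical); the only relevant part is the plain hostname annotation that {{.Annotations.hostname}} reads:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicename                        # hypothetical
  annotations:
    hostname: servicename.mydomain.tld     # consumed by the {{.Annotations.hostname}}.override template
spec:
  rules:
  - host: servicename.mydomain.tld
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: servicename              # hypothetical backend service
            port:
              number: 80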
Oh, using the annotations for the fqdn-template is a brilliant idea! Yeah, I would have to live with an empty .override entry, but I do not think it should cause any issue.
Yes, there has to be a better way to do this. What I propose is that the fqdn-template be applied at the end, after external-dns has done its processing and right before the changes are pushed to the DNS server. I am not sure whether that would miss some use case, though.
In my opinion, the best thing would be to have flags like "rpz-enabled" and "rpz-domain": if set, external-dns would automatically append the rpz-domain to the FQDN taken from the service/ingress/CRD, without having to set the combine flag and the template.
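To make that concrete, the configuration could then hypothetically shrink to something like the following (the rpz-* flags are only the proposal above and do not exist in external-dns today):

args:
- --provider=rfc2136
- --source=ingress
# proposed flags, not real external-dns options:
- --rpz-enabled
- --rpz-domain=rpz          # suffix appended to the host taken from the ingress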
Thanks again for that idea, @spagno! I found a way to get around the empty entry by using the following:
args:
- --registry=txt
- --txt-prefix=external-dns-
- --txt-owner-id=amdpc-cluster
- --provider=rfc2136
- --rfc2136-host=10.10.10.3
- --rfc2136-port=53545
- --rfc2136-zone=rpz
- --rfc2136-tsig-secret-alg=hmac-sha256
- --rfc2136-tsig-keyname=externaldns
- --rfc2136-tsig-secret=***REDACTED***
- --rfc2136-tsig-axfr
- --source=ingress
- --combine-fqdn-annotation
- --fqdn-template={{or .Annotations.dns "invalid"}}.adyanth.site.rpz
- --domain-filter=adyanth.site.rpz
- --exclude-domains=invalid.adyanth.site.rpz
- --log-level=debug
The or template function sets the name to invalid.adyanth.site.rpz whenever the dns annotation is missing. Combining that with --exclude-domains to exclude that invalid subdomain, it works as expected: setting an annotation like dns: testing generates testing.adyanth.site.rpz.
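For completeness, a minimal Ingress sketch that exercises this workaround (names and backend are hypothetical); ingresses that omit the dns annotation fall back to the excluded invalid.adyanth.site.rpz name and are therefore skipped:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testing                            # hypothetical
  annotations:
    dns: testing                           # picked up by {{or .Annotations.dns "invalid"}}
spec:
  rules:
  - host: testing.adyanth.site             # routing host; the DNS name comes from the template
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testing                  # hypothetical backend service
            port:
              number: 80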
It still is a workaround and not a fix.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.