external-dns
feat: resolve dns targets in ingress based records to create A/AAAA instead of CNAME
Description
Similar feature to the one added in https://github.com/kubernetes-sigs/external-dns/pull/3554, but for ingress-based records. Turned off by default; if the flag is set, it will resolve the target DNS record and create A/AAAA records instead of a CNAME.
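To make the behaviour concrete, here is a minimal sketch (not the actual external-dns source; the hostname is made up) of what "resolve the target and create A/AAAA instead of CNAME" means: the hostname reported in the ingress status is looked up, and the resulting IPs become the record targets instead of the hostname itself.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical hostname; in external-dns it would come from the Ingress
	// object's status.loadBalancer.ingress[].hostname field.
	target := "lb.example.com"

	ips, err := net.LookupIP(target)
	if err != nil {
		// Without resolution we could only point a CNAME at the hostname.
		fmt.Println("resolution failed, would fall back to CNAME:", err)
		return
	}
	for _, ip := range ips {
		if ip.To4() != nil {
			fmt.Printf("A record target: %s\n", ip)
		} else {
			fmt.Printf("AAAA record target: %s\n", ip)
		}
	}
}
```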
Checklist
- [X] Unit tests updated
- [X] End user documentation updated
Welcome @n-marton!
It looks like this is your first PR to kubernetes-sigs/external-dns 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/external-dns has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @n-marton. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign raffo for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@n-marton: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.
In response to this:
/ok-to-test
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for this PR @n-marton! Would you please detail your use case? Why do you need this feature? You'll need to add some documentation on this feature, :thinking: maybe on this page. /ok-to-test
Sure thing: in cases where you have an ingress controller listening on proxy protocol, but you also have to reach said ingress controller from inside the cluster, you have to work around the fact that if the LoadBalancer-type service has an IP, the traffic will never leave the cluster. This is easily fixable if your load balancer solution supports FQDNs as the LoadBalancer status, say IP octets separated by dashes under some domain. These FQDNs don't necessarily have to exist; you can easily resolve them via a Lua script, but in that case authority can be a problem, so it is much easier to create A/AAAA records than CNAMEs.
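As a hedged illustration of the "IP octets separated by dashes under some domain" scheme mentioned above (the domain and helper function are invented for this example and are not part of external-dns), such an FQDN can be mapped back to its IP without any real DNS zone behind it:

```go
package main

import (
	"fmt"
	"strings"
)

// octetsFromFQDN turns e.g. "10-10-10-100.some.domain" into "10.10.10.100"
// by taking the first label and swapping dashes for dots.
func octetsFromFQDN(fqdn string) string {
	label := strings.SplitN(fqdn, ".", 2)[0]
	return strings.ReplaceAll(label, "-", ".")
}

func main() {
	fmt.Println(octetsFromFQDN("10-10-10-100.some.domain")) // 10.10.10.100
}
```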
Added a line into the docs.
/ok-to-test
@n-marton why do you not use a similar setup to the one described in https://opensource.zalando.com/skipper/kubernetes/ingress-controller/#run-as-api-gateway-with-east-west-setup for cluster-local traffic to the ingress controller? It's simple and you do not rely on public DNS, which is more prone to error than the internal setup explained there.
maybe I am missing something, but I don't get how this would solve the proxy protocol listening problem
It was also mentioned in the LoadBalancer thread that the same would need to be done for ingresses.
@szuecs @mloiseleur anything else required on this?
I'm not sure I totally follow your use case :thinking:. Anyway, the code looks good and the feature is documented. /lgtm
/assign @Raffo
any update here, anything we can help with to move this forward?
/unassign @Raffo
As https://github.com/kubernetes-sigs/external-dns/pull/4255#issuecomment-1983403843 said, the use case is unclear, so I would say we do not accept this PR right now.
For me it's more a feature request to your ingress controller, which can orchestrate external-dns as it wishes.
Even in the original PR https://github.com/kubernetes-sigs/external-dns/pull/3554 that added the same functionality for Service-type records (and was also lgtm-ed by you), there was a discussion about doing the same for Ingresses in a second PR, which nobody followed up on. If you are against this, could you please also leave a comment on that original PR saying that this feature is no longer encouraged for ingresses, so no one else will waste time on it?
One more use case:
Let's say you have an ingress object like this:
NAME             CLASS    HOSTS                ADDRESS        PORTS     AGE
hubble-ingress   <none>   hubble.example.com   10.10.10.100   80, 443   30d
If the load balancer backing the ingress controller later gets an FQDN instead of an IP, it will produce the following ingress:
NAME             CLASS    HOSTS                ADDRESS                    PORTS     AGE
hubble-ingress   <none>   hubble.example.com   10-10-10-100.some.domain   80, 443   30d
In this case, external-dns will give you an error, because hubble.example.com already exists as an A record and would conflict with the CNAME it now wants to create.
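This is not external-dns code, just a toy illustration of the record-type clash described above: the zone already holds an A record for the name, and a CNAME cannot coexist with any other record type at the same name, so the desired change is rejected.

```go
package main

import "fmt"

type record struct {
	Name string
	Type string // "A", "AAAA", "CNAME", ...
}

// conflicts reports whether the desired record clashes with an existing one:
// a CNAME may not share a name with a record of any other type.
func conflicts(existing []record, desired record) bool {
	for _, r := range existing {
		if r.Name == desired.Name && r.Type != desired.Type &&
			(r.Type == "CNAME" || desired.Type == "CNAME") {
			return true
		}
	}
	return false
}

func main() {
	zone := []record{{Name: "hubble.example.com", Type: "A"}}
	want := record{Name: "hubble.example.com", Type: "CNAME"}
	fmt.Println(conflicts(zone, want)) // true: the CNAME clashes with the existing A record
}
```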
That's what I said: it's a bug in the ingress controller.
One could argue that it is easier to solve this at the external-dns level than to get it into each ingress controller; just checking a few of the more popular ones:
- https://github.com/haproxytech/kubernetes-ingress/blob/master/pkg/ingress/status.go#L14
- https://github.com/traefik/traefik/blob/master/pkg/provider/kubernetes/ingress/kubernetes.go#L375
- https://github.com/kubernetes/ingress-nginx/blob/main/cmd/plugin/commands/ingresses/ingresses.go#L135
However, I get the feeling that this won't ever be merged as is, no matter the argument.
Just to stop running in circles: I would also be fine with using CNAMEs if external-dns were able to handle the transition between A and CNAME records. If I made a PR for that feature, would it be supported?
/hold
@szuecs could you please also reply to Marton so we can move this forward, one way or another? Thanks!
@szuecs Could you help us out here and/or elaborate whether allowing transition between A and CNAME is ok here?
This is blocking a few things for us. If we cannot proceed, please close this and let us know.