external-dns
Feature to resolve Ingress source hostnames to IP
What would you like to be added: A feature to resolve Ingress status.loadBalancer.ingress[0].hostname to an IP, analogous to the existing flag --resolve-service-load-balancer-hostname.
Why is this needed: To create A records instead of CNAME records for hostnames like 10.200.1.1.nip.io.
The Ingress status value status.loadBalancer.ingress[0].hostname can contain a nip.io FQDN like 10.200.1.1.nip.io in some ingress-nginx configurations, which leads to nip.io CNAMEs being used instead of A records. This works correctly, as external-dns creates a CNAME record. However, having external-dns resolve this hostname to an IP and create an A record would simplify the setup and remove the need to resolve .nip.io records.
Example Ingress status:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          service:
            name: example-service
            port:
              name: http
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - hostname: 10.200.130.84.nip.io
In this case external-dns will create a CNAME record from example.org to 10.200.130.84.nip.io. This feature request is to resolve the hostname 10.200.130.84.nip.io to 10.200.130.84 and thus create an A record for example.org pointing to 10.200.130.84.
This does seem like a valid scenario to implement. @mhanc is it just nip.io that you are thinking of, or is there any other use case you can think of for the flag you are requesting? /assign
@ivankatliarchuk I'd like your take on this: do you think this is within the scope of external-dns?
@hjoshi123 As far as I know, only .nip.io addresses are used, but my view is limited to our Kubernetes clusters. I also don't have information on how this works with IPv6 or similar. However, I think that simply doing DNS resolution would cover any other hostname values.
nip.io relates to PowerDNS here. Could you share your command-line arguments as well, please (not the Helm values)?
We need someone who is familiar with PowerDNS, most likely has access or an account there, and is able to reproduce the problem. I have no clue how the pdns provider should behave.
@ivankatliarchuk this nip.io hostname is not related to the pdns provider. It comes from the openstack-cloud-controller-manager load balancer with enable-ingress-hostname, which is enabled by the option loadbalancer.openstack.org/proxy-protocol: "true" (docs). Proxy protocol needs to be enabled on load balancers to see correct IPs in logs (logging external IPs instead of cluster-internal Pod IPs as the origin of connections).
There was already a proposal to resolve this in https://github.com/kubernetes-sigs/external-dns/pull/2049.
This is also not exclusive to nip.io, as any DNS suffix can be used via the option ingress-hostname-suffix (default: nip.io). If external-dns resolves this record, it would work with all options. However, just trimming the nip.io suffix might also be a good option, to avoid the need for DNS resolution 🤔
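A minimal sketch of the suffix-trimming idea, in Go since external-dns is a Go project. The function name and the configurable suffix parameter are assumptions for illustration, not existing external-dns code:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipFromSuffixedHostname trims a known DNS suffix (e.g. "nip.io") from a
// load balancer hostname and parses the remainder as an IP address.
// It returns nil when the hostname does not match the <ip>.<suffix> pattern,
// in which case the caller would fall back to creating a CNAME record.
func ipFromSuffixedHostname(hostname, suffix string) net.IP {
	trimmed := strings.TrimSuffix(hostname, "."+suffix)
	if trimmed == hostname {
		return nil // suffix not present
	}
	return net.ParseIP(trimmed) // nil if the remainder is not a valid IP
}

func main() {
	fmt.Println(ipFromSuffixedHostname("10.200.130.84.nip.io", "nip.io")) // 10.200.130.84
	fmt.Println(ipFromSuffixedHostname("example.org", "nip.io"))          // <nil>
}
```

This avoids a live DNS lookup entirely, at the cost of only handling hostnames that embed the IP directly before the configured suffix.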
@ivankatliarchuk For completeness, I'm also sharing the requested arguments, but as noted, this applies to any provider.
external-dns \
  --log-level=debug \
  --log-format=text \
  --interval=1m \
  --source=ingress \
  --policy=sync \
  --registry=txt \
  --txt-owner-id=example-org \
  --namespace=example-org \
  --domain-filter=example.org \
  --provider=pdns \
  --pdns-server=pdns.example.org \
  --pdns-api-key=secret123
So I spent some time trying to understand how to resolve this.
At the moment there is no way to do it, or I'm just not aware of how. I was playing with different annotations; one possibility available today is to use
"external-dns.alpha.kubernetes.io/target": "10.200.130.84"
or, instead of a hostname, to configure
loadBalancer:
  ingress:
  - ip: 10.200.130.84
I'm not sure any of these options will work, as they might require infrastructure changes or may not even be possible.
I was thinking about annotation support, something like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    external-dns.alpha.kubernetes.io/fqdn-target: '{{ range .Status.LoadBalancer.Ingress }}{{ if contains .Hostname "nip.io" }}{{ extract-ip-from-host }}{{break}}{{end}}{{end}}'
spec:
  ingressClassName: nginx
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          service:
            name: example-service
            port:
              name: http
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - hostname: 10.200.130.84.nip.io
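The hypothetical extract-ip-from-host helper in that annotation could be sketched as a Go text/template function. Everything here (the name extractIPFromHost, the label-stripping heuristic, the inline contains helper) is an illustration under assumed names, not an existing external-dns template function:

```go
package main

import (
	"bytes"
	"fmt"
	"net"
	"strings"
	"text/template"
)

// extractIPFromHost strips the last two DNS labels (e.g. "nip.io") from a
// hostname and parses the remainder as an IP; it returns "" when the
// hostname does not embed a valid IP.
func extractIPFromHost(host string) string {
	labels := strings.Split(host, ".")
	if len(labels) < 3 {
		return ""
	}
	ip := net.ParseIP(strings.Join(labels[:len(labels)-2], "."))
	if ip == nil {
		return ""
	}
	return ip.String()
}

func main() {
	// Register the helper alongside a contains function, mirroring the
	// shape of the annotation value sketched above.
	funcs := template.FuncMap{
		"extractIPFromHost": extractIPFromHost,
		"contains":          strings.Contains,
	}
	tmpl := template.Must(template.New("target").Funcs(funcs).Parse(
		`{{ if contains .Hostname "nip.io" }}{{ extractIPFromHost .Hostname }}{{ end }}`))

	var out bytes.Buffer
	if err := tmpl.Execute(&out, struct{ Hostname string }{"10.200.130.84.nip.io"}); err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // 10.200.130.84
}
```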
What else I'm considering:
- --resolve-ingress-load-balancer-hostname
- --resolve-target-fqdn-template
- an fqdn-template function such as host

Why am I thinking about FQDN templates? It is simply easier to support/tweak/document a single feature instead of providing dozens of annotations or flags. Still considering the options; I may provide pros/cons when ready.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.