I want to override the published IP for internal Services.
My setup:
I use traefik as an IngressController.
This gives me a LoadBalancer-type Service as the primary ingress into my cluster, where 1.2.3.4 is the correct external IP address:
kubectl get service
traefik LoadBalancer 10.7.241.80 1.2.3.4
As the provider in Traefik I use kubernetescrd and its IngressRoute CRD.
This way all my services end up as type ClusterIP and therefore won't be exposed by external-dns.
I set publishInternalServices: true, and this made external-dns create DNS records for my services.
But it uses their cluster-internal IPs, which are of little use from outside the cluster.
I would like to be able to set the external IP address of my Traefik Service as the IP address for those DNS records.
I tried setting these annotations on my Service:
external-dns.alpha.kubernetes.io/hostname: foo.bar
external-dns.alpha.kubernetes.io/target: 1.2.3.4
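For reference, roughly what I tried looks like this (the Service name whoami, selector, and port are placeholders, not my actual manifest):
apiVersion: v1
kind: Service
metadata:
  name: whoami                  # placeholder name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foo.bar
    external-dns.alpha.kubernetes.io/target: 1.2.3.4
spec:
  type: ClusterIP
  selector:
    app: whoami                 # placeholder selector
  ports:
    - port: 80
      targetPort: 80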
I think external-dns.alpha.kubernetes.io/target only works when using ingress as the source, but I use service 🤔
Could be related to: https://github.com/kubernetes-sigs/external-dns/issues/1394
+1, s/Traefik/Gloo/. It's exactly the same issue: the DNS name is associated with a Service, but the ingress IP is not directly associated with that Service.
I don't know if it's the same problem, but somehow external-dns publishes TXT records (pointing to the Ingress) AND A records (pointing to the internal IP).
As the A record somehow takes precedence, my DNS publishes the internal IP, which is of no use from outside. I don't want the private IPs published to my DNS zone.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: dev
spec:
  rules:
    - host: api.foo.de
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 80
            path: /
    - host: foo.de
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 80
            path: /
    - host: cms.foo.de
      http:
        paths:
          - backend:
              serviceName: cms
              servicePort: 80
            path: /
  tls:
    - hosts:
        - foo.de
        - cms.foo.de
        - api.foo.de
      secretName: dev-certs
I'm using Designate as the external DNS provider.
~ master * ❯ openstack recordset list foo.de.
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+
| id | name | type | records | status | action |
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+
| xxx | foo.de. | TXT | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE |
| xxx | cms.foo.de. | TXT | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE |
| xxx | api.foo.de. | TXT | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE |
| xxx | cms.foo.de. | A | 192.168.1.16 | ACTIVE | NONE |
| xxx | api.foo.de. | A | 192.168.1.16 | ACTIVE | NONE |
| xxx | foo.de. | A | 192.168.1.16 | ACTIVE | NONE |
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+
What's wrong here? How do I tell external-dns not to publish those A records? It worked when I was using a Service with external-dns, but somehow not with an Ingress.
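For what it's worth, the target annotation is reported to be honored for the ingress source, so a sketch roughly like the following could be one way to override the 192.168.1.16 status address in the generated A records (203.0.113.10 stands in for the real public IP, which isn't shown above):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # Placeholder public IP: with the ingress source, external-dns uses this
    # instead of the address reported in the Ingress status.
    external-dns.alpha.kubernetes.io/target: 203.0.113.10
spec:
  rules:
    - host: api.foo.de
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 80
            path: /
  # ... other rules and tls as in the manifest above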
+1. There are enough use cases for a non-LoadBalancer Service (ClusterIP or NodePort), which leads to DNS records with a private IP, to justify being able to override that IP and have control over it. In my case I use nginx-ingress as a NodePort because I need control over my load balancer outside of EKS (Terraform). Everything works, but all DNS records point to an internal IP. So on my nginx-ingress Service I would like an annotation to control this, e.g. external-dns.alpha.kubernetes.io/PublishedDNS: ${CustomLoadbalancerDNS} and/or PublishedIP, roughly as sketched below.
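A rough sketch of what that proposed annotation could look like on the controller Service (PublishedDNS is hypothetical and not something external-dns currently supports; the hostname and NLB name are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
    # Hypothetical annotation as proposed above -- not implemented in external-dns.
    external-dns.alpha.kubernetes.io/PublishedDNS: my-nlb-1234567890.elb.eu-central-1.amazonaws.com
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80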
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
I would also very much like to see NodePort supported. I recently switched from MetalLB to Calico now that it supports advertising ExternalIP service addresses to BGP peers (roughly as sketched below): https://docs.projectcalico.org/networking/advertise-service-ips#advertise-service-external-IP-addresses
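Roughly what that Calico side looks like per the linked docs (the CIDR below is a placeholder for whatever external IP range is being advertised):
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # Placeholder range: Service external IPs in this CIDR are advertised to BGP peers.
  serviceExternalIPs:
    - cidr: 10.45.100.0/24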
Any updates on this?
Well, I found an ugly workaround: create a 'dummy' ExternalName Service that points to the IP address of the service you'd like to expose, along with the annotation that points to your domain name:
apiVersion: v1
kind: Service
metadata:
  name: plex-dns
  annotations:
    external-dns.alpha.kubernetes.io/hostname: plex.DOMAIN.
spec:
  type: ExternalName
  externalName: 10.45.100.100
/remove-lifecycle stale
Any way to help with this issue? @unixfox
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Same issue here.
Internal services are seen by external-dns thanks to the --publish-internal-services parameter, but the external-dns.alpha.kubernetes.io/target annotation is not taken into account; instead, the internal IP is recorded in the target DNS.
Taking the annotation into account would unblock many of the issues here.
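For context, a rough sketch of the setup being described, i.e. the flag on the external-dns Deployment (the provider, zone, and image tag are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5  # placeholder tag
          args:
            - --source=service
            - --provider=aws                 # placeholder provider
            - --domain-filter=example.com    # placeholder zone
            - --publish-internal-services    # exposes ClusterIP services, but with their internal IPs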
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
JFYI, it'd be really neat indeed. In our environment we have a non-public LB and the cluster's ingress controller is merely NodePort-based, yet I'd like to have:
- {{ cluster-internal-fqdn }} pointing to the cluster worker LB IP(s)
- {{ some-ingress/service }} CNAME {{ cluster-internal-fqdn }} for my regular ingresses; while I could set those to use the LB's IPs directly, that's quite noisy compared to a CNAME to a single IP-based record for the cluster's ingress (roughly as sketched below)
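A rough sketch of how that layout could be wired up, assuming the crd source (DNSEndpoint) is enabled in external-dns; all names and IPs below are placeholders:
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: cluster-ingress
spec:
  endpoints:
    # A record for the cluster-internal ingress FQDN -> worker LB IPs (placeholders)
    - dnsName: ingress.cluster.example.internal
      recordType: A
      recordTTL: 300
      targets:
        - 10.0.0.10
        - 10.0.0.11
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    # With the ingress source, a hostname target results in a CNAME to the record above.
    external-dns.alpha.kubernetes.io/target: ingress.cluster.example.internal
spec:
  rules:
    - host: app.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80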
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue is still present. Enabling external-dns.alpha.kubernetes.io/target for services would solve the issue...
/reopen
@QcFe: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.