
I want to overwrite the published IP for InternalServices

soupdiver opened this issue on Feb 29, 2020 · 24 comments

My setup: I use traefik as an IngressController.

This gives me a Service of type LoadBalancer as the primary ingress for my cluster, where 1.2.3.4 is the correct external IP address:

kubectl get service
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP
traefik   LoadBalancer   10.7.241.80   1.2.3.4

As a provider in Traefik I use kubernetescrd and its IngressRoute CRD. This way all my services end up as type ClusterIP and therefore won't be exposed by external-dns. I set publishInternalServices: true, which made external-dns create DNS records for my services.
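
For illustration, one of my IngressRoutes looks roughly like this (a minimal sketch; the name and hostname are made up):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: foo
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`foo.bar`)
      kind: Rule
      services:
        - name: foo   # a plain ClusterIP Service
          port: 80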

But it uses their cluster-internal IPs, which are of little use from outside the cluster.

I would like to be able to set the external IP address of my traefik service as IP address for the DNS records.

I tried setting these annotations on my Service:

external-dns.alpha.kubernetes.io/hostname: foo.bar
external-dns.alpha.kubernetes.io/target: 1.2.3.4
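
In context, the annotated Service looks something like this (a sketch; name, selector and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foo.bar
    external-dns.alpha.kubernetes.io/target: 1.2.3.4
spec:
  type: ClusterIP
  selector:
    app: foo
  ports:
    - port: 80
      targetPort: 80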

I think external-dns.alpha.kubernetes.io/target only works when using ingress as the source, but I use service 🤔

Could be related to: https://github.com/kubernetes-sigs/external-dns/issues/1394

soupdiver avatar Feb 29 '20 23:02 soupdiver

+1, s/Traefik/Gloo/. It's exactly the same issue - the name is associated with a service, but the service's ingress IP is not directly associated with that service.

JorjBauer avatar Mar 24 '20 14:03 JorjBauer

I don't know if it's the same problem, but somehow external-dns published TXT records (pointing to the ingress) AND A records (pointing to the internal IP).

As the A record somehow takes precedence, my DNS published the internal IP, which is of no use from outside. I don't want private IPs published to my DNS zone.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: dev

spec:
  rules:
    - host: api.foo.de
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 80
            path: /
    - host: foo.de
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 80
            path: /
    - host: cms.foo.de
      http:
        paths:
          - backend:
              serviceName: cms
              servicePort: 80
            path: /
  tls:
    - hosts:
        - foo.de
        - cms.foo.de
        - api.foo.de
      secretName: dev-certs

I'm using Designate as the external DNS provider.

$ openstack recordset list foo.de.
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+
| id                                   | name                   | type | records                                                                                       | status | action |
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+
| xxx | foo.de.     | TXT  | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE   |
| xxx | cms.foo.de. | TXT  | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE   |
| xxx | api.foo.de. | TXT  | "heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=ingress/dev/dev" | ACTIVE | NONE   |
| xxx | cms.foo.de. | A    | 192.168.1.16                                                                                  | ACTIVE | NONE   |
| xxx | api.foo.de. | A    | 192.168.1.16                                                                                  | ACTIVE | NONE   |
| xxx | foo.de.     | A    | 192.168.1.16                                                                                  | ACTIVE | NONE   |
+--------------------------------------+------------------------+------+-----------------------------------------------------------------------------------------------+--------+--------+

What's wrong here? How do I tell external-dns not to publish those A records? It worked when I was using a Service with external-dns, but somehow not with an Ingress.

digitalkaoz avatar Mar 26 '20 21:03 digitalkaoz

+1. There are enough use cases where a non-LoadBalancer Service (ClusterIP or NodePort) leads to DNS records with a private IP to justify being able to overwrite that IP and take control. In my case I use nginx-ingress as a NodePort Service because I need control over my load balancer outside of EKS (via Terraform). Everything works, but all DNS records point to an internal IP. So on my nginx-ingress Service I would like an annotation for this, e.g. external-dns.alpha.kubernetes.io/PublishedDNS: ${CustomLoadbalancerDNS} and/or PublishedIP; see the sketch below.
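
Sketched out on the nginx-ingress Service, it could look like this (the PublishedDNS annotation is hypothetical and does not exist in external-dns; other fields are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
    # hypothetical annotation proposed above -- not implemented
    external-dns.alpha.kubernetes.io/PublishedDNS: my-lb-1234.elb.amazonaws.com
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
    - port: 80
      nodePort: 30080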

atamgp avatar Apr 08 '20 20:04 atamgp

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jul 07 '20 20:07 fejta-bot

I would also very much like to see NodePort supported. I recently switched from MetalLB to Calico now that it supports advertising service ExternalIP addresses to BGP peers: https://docs.projectcalico.org/networking/advertise-service-ips#advertise-service-external-IP-addresses

Any updates on this?

carpenike avatar Jul 29 '20 17:07 carpenike

Well, I found an ugly workaround. Create a 'dummy' ExternalName Service that points to the IP address of the service you'd like to expose, along with the annotation for the desired domain name:

apiVersion: v1
kind: Service
metadata:
  name: plex-dns
  annotations:
    # the hostname external-dns should create the record for
    external-dns.alpha.kubernetes.io/hostname: plex.DOMAIN.
spec:
  # an IP-valued externalName makes external-dns publish an A record
  type: ExternalName
  externalName: 10.45.100.100
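
Since the externalName value is an IP address rather than a hostname, external-dns should publish an A record for plex.DOMAIN. pointing at 10.45.100.100 instead of a CNAME. Once it has synced, you can check with something like dig plex.DOMAIN.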

carpenike avatar Jul 29 '20 17:07 carpenike

/remove-lifecycle stale

seanmalloy avatar Aug 17 '20 14:08 seanmalloy

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Nov 15 '20 14:11 fejta-bot

/remove-lifecycle stale

unixfox avatar Nov 15 '20 16:11 unixfox

Any way to help with this issue? @unixfox

soupdiver avatar Nov 17 '20 08:11 soupdiver

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Feb 15 '21 08:02 fejta-bot

/remove-lifecycle stale

fiskhest avatar Feb 15 '21 09:02 fiskhest

Same issue here

Internal services are seen by external-dns thanks to the --publish-internal-services flag, but the external-dns.alpha.kubernetes.io/target annotation is not taken into account; instead the internal IP is recorded in the target DNS zone.

Honoring the annotation would unblock many of the issues here.
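
For reference, a minimal sketch of the relevant container args in the external-dns Deployment (the image tag, provider and remaining flags are illustrative):

containers:
  - name: external-dns
    image: k8s.gcr.io/external-dns/external-dns:v0.7.6
    args:
      - --source=service
      # also consider ClusterIP services as candidates for records
      - --publish-internal-services
      - --provider=aws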

kheraud avatar Apr 01 '21 09:04 kheraud

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jun 30 '21 09:06 fejta-bot

/remove-lifecycle stale

unixfox avatar Jun 30 '21 09:06 unixfox

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 28 '21 09:09 k8s-triage-robot

/remove-lifecycle stale

unixfox avatar Sep 28 '21 10:09 unixfox

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 27 '21 10:12 k8s-triage-robot

/remove-lifecycle stale

Tristan971 avatar Jan 08 '22 12:01 Tristan971

FYI, it'd be really neat indeed. In our environment we have a non-public LB and the cluster's ingress controller is merely NodePort-based, yet I'd like to have:

  • {{ cluster-internal-fqdn }} pointing to the cluster worker LB IP(s)
  • {{ some-ingress/service }} CNAME {{ cluster-internal-fqdn }} for my regular ingresses; while I could set those to use the LB's IPs directly, that's quite noisy compared to a CNAME to a single IP-based record for the cluster's ingress (see the sketch below)
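
Concretely, the desired outcome would be records along these lines (names and IP are illustrative):

; one A record for the cluster ingress, everything else CNAMEs to it
ingress.cluster.example.com.  IN  A      203.0.113.10
app.example.com.              IN  CNAME  ingress.cluster.example.com.
cms.example.com.              IN  CNAME  ingress.cluster.example.com.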

Tristan971 avatar Jan 08 '22 12:01 Tristan971

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 08 '22 12:04 k8s-triage-robot

/remove-lifecycle stale

Tristan971 avatar Apr 08 '22 12:04 Tristan971

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 07 '22 13:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 06 '22 13:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 05 '22 13:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 05 '22 13:09 k8s-ci-robot

This issue is still present. Honoring external-dns.alpha.kubernetes.io/target for Services would solve it...

QcFe avatar Sep 13 '22 10:09 QcFe

/reopen

QcFe avatar Sep 13 '22 10:09 QcFe

@QcFe: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 13 '22 10:09 k8s-ci-robot