
No endpoints could be generated from ingress (when backing service is invalid)

Open ghostsquad opened this issue 5 years ago • 18 comments

What happened:

I noticed that external-dns was not creating a Route 53 entry in AWS for an Ingress that I created. I later found out that part of the Ingress was malformed: it pointed to a Service that didn't exist. The Ingress itself is an ALB Ingress, and the ALB was created successfully.

The following log statement is what pointed me to the cause:

time="2020-06-30T22:52:12Z" level=debug msg="No endpoints could be generated from ingress my-namespace/my-app"

What you expected to happen:

Despite the invalid backing Service, external-dns should still have created an entry pointing to the ALB.

How to reproduce it (as minimally and precisely as possible):

I believe (untested) that creating an Ingress that points to a non-existent Service reproduces this.
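A minimal manifest sketch of that scenario (all names here are hypothetical, and the Service referenced by the backend is deliberately absent):

```yaml
# Hypothetical repro: no Service named "does-not-exist" is ever created.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: alb
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: does-not-exist  # backing Service missing on purpose
                port:
                  number: 80
```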

Anything else we need to know?:

Environment:

  • External-DNS version (use external-dns --version): 0.7.1
  • DNS provider: AWS Route53
  • Others: Kubernetes 1.14

ghostsquad avatar Jun 30 '20 23:06 ghostsquad

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Sep 29 '20 00:09 fejta-bot

Not stale

ghostsquad avatar Sep 29 '20 01:09 ghostsquad

/remove-lifecycle stale

seanmalloy avatar Sep 29 '20 03:09 seanmalloy

Have the same issue:

time="2020-10-28T01:57:52Z" level=debug msg="No endpoints could be generated from service ns1/go-app"
time="2020-10-28T01:57:52Z" level=debug msg="No endpoints could be generated from ingress ns1/go-app"

No idea how it comes to that conclusion when the endpoints clearly exist:

$ kc describe svc -n ns1 go-app | grep Endpoints
Endpoints:         100.117.196.198:8081,100.117.234.8:8081

$ kc get endpoints -n ns1 --show-labels
NAME     ENDPOINTS                                 AGE   LABELS
go-app   100.117.196.198:8081,100.117.234.8:8081   17h   app=go-app,dns=route53,name=go-app

This is k8s v1.18.10 and image k8s.gcr.io/external-dns/external-dns:v0.7.4

igoratencompass avatar Oct 28 '20 02:10 igoratencompass

Ditto. Anyone got any idea how to get external-dns working? I've followed the instructions here: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/azure.md

NBroomfield avatar Jan 25 '21 12:01 NBroomfield

same issue

time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress harbor/harbor-harbor-ingress"
time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress harbor/harbor-harbor-ingress-notary"
time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress jaeger/jaeger-ingress"

YuhuaDeng avatar Mar 03 '21 08:03 YuhuaDeng

The generation of endpoints seems to be tied directly to the Ingress having an "Address" in its status. In my case, I was able to get this working by configuring my ingress controller to publish its address to the Ingress resources.

For traefik: kubernetesIngress.publishedService.enabled=true

For nginx: controller.publishService.enabled=true
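To illustrate the behavior described above, here is a rough Python sketch (not external-dns's actual code, which is Go) of the idea that DNS targets come from the Ingress status, not from the backing Service or its Endpoints:

```python
def targets_from_ingress(ingress: dict) -> list[str]:
    """Sketch: derive DNS targets from status.loadBalancer.ingress.
    If the controller never published an address there, no endpoints
    can be generated, even if the Service's Endpoints exist."""
    targets = []
    for lb in ingress.get("status", {}).get("loadBalancer", {}).get("ingress", []):
        addr = lb.get("hostname") or lb.get("ip")
        if addr:
            targets.append(addr)
    return targets

# Controller never published an address -> no endpoints:
unpublished = {"status": {"loadBalancer": {}}}
print(targets_from_ingress(unpublished))  # → []

# Once the controller publishes its address, a target appears:
published = {"status": {"loadBalancer": {
    "ingress": [{"hostname": "abc.elb.amazonaws.com"}]}}}
print(targets_from_ingress(published))  # → ['abc.elb.amazonaws.com']
```

This is why enabling the publish-service option on the controller fixes the symptom: it fills in the status field the source reads from.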

adamday2 avatar Apr 05 '21 22:04 adamday2

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jul 17 '21 01:07 fejta-bot

/remove-lifecycle stale

ghostsquad avatar Jul 17 '21 03:07 ghostsquad

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 15 '21 04:10 k8s-triage-robot

/remove-lifecycle stale

ghostsquad avatar Nov 03 '21 02:11 ghostsquad


/remove-lifecycle stale

ghostsquad avatar Feb 01 '22 05:02 ghostsquad

/lifecycle frozen

MadhavJivrajani avatar Feb 01 '22 05:02 MadhavJivrajani

Perhaps you don't have the ingress controller installed. To install it you need to run the following command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

tadrian88 avatar Aug 04 '22 11:08 tadrian88

For me the problem was with the controller too: basically it wasn't publishing the routing info (address) that my host record should point to. Thanks @tadrian88

farazoman avatar Jul 21 '23 18:07 farazoman

I was running into a similar problem. In my case, I'm using aws-load-balancer-controller, and the controller was failing to create my load balancer because of an invalid certificate. It would be nice if external-dns gave a slightly nicer error in the logs here, especially with debug logging turned on, but at least in my case it wasn't external-dns's fault.

jwalton avatar Sep 20 '23 19:09 jwalton