
Pointing to an ExternalName service without a DNS record can overload the DNS service

lucianjon opened this issue 4 years ago • 22 comments

NGINX Ingress controller version: v0.41.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.10", GitCommit:"f3add640dbcd4f3c33a7749f38baaac0b3fe810d", GitTreeState:"clean", BuildDate:"2020-05-20T14:00:52Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: kops managed cluster on AWS
  • OS (e.g. from /etc/os-release):
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal
  • Kernel (e.g. uname -a): Linux ip-10-60-10-234 5.4.0-1024-aws #24-Ubuntu SMP Sat Sep 5 06:19:55 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

What happened:

If an ingress definition is created that points to an ExternalName service whose external name fails DNS resolution, an endless loop of DNS requests is created that can bring the system down.

We noticed this when migrating from v0.19.0 -> v0.41.2; we have both controllers running in parallel. One of our teams was prepping for this and creating routes that pointed to yet-to-be-created DNS records. It appears the old controllers were unaffected, but the routes on the new controller generated a huge number of DNS lookups. It doesn't require actual requests to the routes; just creating the ingress and service definitions is enough.

Eventually this overwhelmed dnsmasq and brought down our cluster's DNS. The concurrent requests were limited by dnsmasq, but we were still looking at thousands of requests per second. Was there a behaviour change between the two versions that could introduce this, and is it expected? My naive guess is that there would typically be some kind of exponential backoff on a DNS lookup error.

This is the error produced by the controller:

2020/11/25 20:18:52 [error] 1707#1707: *51723 [lua] dns.lua:152: dns_lookup(): failed to query the DNS server for foo.unknown.com:
server returned error code: 3: name error
server returned error code: 3: name error, context: ngx.timer

What you expected to happen:

DNS lookup failures to be handled with some form of backoff.
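
For illustration only, this is roughly the behaviour I would expect, sketched in Go (this is not the controller's actual resolution code; the function and values here are made up):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupWithBackoff retries a host lookup, doubling the wait after each
// failure (capped at maxDelay), so a name that never resolves produces a
// bounded trickle of queries instead of a tight loop against the DNS server.
func lookupWithBackoff(ctx context.Context, host string, maxDelay time.Duration) ([]net.IP, error) {
	delay := time.Second
	for {
		ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
		if err == nil {
			return ips, nil
		}
		select {
		case <-ctx.Done():
			return nil, fmt.Errorf("giving up on %s: %w", host, err)
		case <-time.After(delay):
		}
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println(lookupWithBackoff(ctx, "foo.unknown.com", 16*time.Second))
}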

How to reproduce it:

These two definitions should be enough to reproduce the issue, assuming a proper class and namespace:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dns-issue-repro
  namespace: default
  annotations:
    kubernetes.io/ingress.provider: "nginx"
    kubernetes.io/ingress.class: "external"
spec:
  rules:
    - host: foo.unknown.com
      http:
        paths:
          - path: /
            backend:
              serviceName: bad-svc
              servicePort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: bad-svc
  namespace: default
spec:
  type: ExternalName
  externalName: foo.unknown.com

/kind bug

lucianjon avatar Nov 25 '20 20:11 lucianjon

The behavior changed here https://github.com/kubernetes/ingress-nginx/pull/4671

aledbf avatar Nov 25 '20 20:11 aledbf

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Feb 23 '21 21:02 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot avatar Mar 25 '21 21:03 fejta-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

fejta-bot avatar Apr 24 '21 22:04 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 24 '21 22:04 k8s-ci-robot

Yep, this just got me too while working on a new cluster. Nginx Ingress essentially DoSed CoreDNS, which caused all kinds of weirdness in the cluster.

Edit: Running k8s.gcr.io/ingress-nginx/controller:v1.1.1

adamcharnock avatar Feb 15 '22 17:02 adamcharnock

I am getting this issue too.

Running k8s.gcr.io/ingress-nginx/controller:v1.1.0

unnikm8 avatar Feb 18 '22 14:02 unnikm8

I'm also affected by this issue. Hoping for some activity on it. /reopen

VsevolodSauta avatar Apr 06 '22 09:04 VsevolodSauta

@VsevolodSauta: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I'm also affected by this issue. Hoping for some activity on it. /reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 06 '22 09:04 k8s-ci-robot

I'm also getting this issue: k8s.gcr.io/ingress-nginx/controller:v1.2.0

javimosch avatar Sep 22 '22 15:09 javimosch

Same issue.

karlhaworth avatar Sep 23 '22 12:09 karlhaworth

Why is this closed?

I am also seeing the same issue - has anyone here resolved it or has a workaround?

dexterlakin-bdm avatar Oct 03 '22 11:10 dexterlakin-bdm

Even without Kubernetes, if a process makes calls to an unresolvable hostname in an infinite loop, there will be impact.

Thanks, Long

longwuyuan avatar Oct 03 '22 11:10 longwuyuan

Same issue: dns.lua:152: dns_lookup(): failed to query the DNS server for ...

sravanakinapally avatar Oct 07 '22 19:10 sravanakinapally

I am also having the same issue with v1.3.1 in some clusters

ahmad-sharif avatar Oct 11 '22 05:10 ahmad-sharif

Same problem, keep watching

qixiaobo avatar Feb 07 '23 12:02 qixiaobo

+1

alv91 avatar Mar 10 '23 12:03 alv91

We are experiencing identical issues on both GKE and AKS clusters while using ingress-nginx versions 1.9.1 and 1.9.3.

Occasionally, we encounter situations where the backend resides outside the cluster. The "ExternalName" record is dynamically resolved using endpoints controlled by Consul. However, if it happens to be a single backend service or the last one, and it deregisters due to reasons such as a reboot, the "ExternalName" ends up pointing at a non-existent CNAME record. This, in turn, causes ingress-nginx to go completely crazy with errors like these:

2023/10/26 18:16:18 [error] 432#432: *18134 [lua] dns.lua:152: dns_lookup(): failed to query the DNS server for my-not-existing-record.example.com:
server returned error code: 3: name error
server returned error code: 3: name error, context: ngx.timer

In situations where there are only a few occurrences, this behavior can sometimes be obscured by the sheer volume of logs. However, when a substantial number of endpoints become unreachable all at once, compounded by the current scale of Ingress-NGINX pods (which, in our scenario, includes both internal and external-facing ingress classes), the problem escalates significantly and places a severe burden on our CoreDNS servers, potentially overwhelming them.

What I would like to see is a restriction on the number of resolve attempts / a limit on the resolve-retry rate or, even better, the implementation of a back-off mechanism.
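
To make the ask concrete, here is a minimal Go sketch of the kind of retry limiting I mean: a per-host negative cache that suppresses repeat lookups for a short TTL after a failure. This is not ingress-nginx code and the names are hypothetical; it is only an illustration of the behaviour.

package main

import (
	"context"
	"fmt"
	"net"
	"sync"
	"time"
)

// negativeCache remembers recent lookup failures per host and refuses to
// re-query the DNS server for that host until the TTL has passed.
type negativeCache struct {
	mu    sync.Mutex
	until map[string]time.Time // host -> earliest time we may retry
	ttl   time.Duration
}

func newNegativeCache(ttl time.Duration) *negativeCache {
	return &negativeCache{until: make(map[string]time.Time), ttl: ttl}
}

// Lookup resolves host unless a recent failure is still cached, in which
// case it returns an error immediately without touching the DNS server.
func (c *negativeCache) Lookup(ctx context.Context, host string) ([]net.IP, error) {
	c.mu.Lock()
	if t, ok := c.until[host]; ok && time.Now().Before(t) {
		c.mu.Unlock()
		return nil, &net.DNSError{Err: "suppressed by negative cache", Name: host, IsNotFound: true}
	}
	c.mu.Unlock()

	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
	if err != nil {
		c.mu.Lock()
		c.until[host] = time.Now().Add(c.ttl) // back off further retries for ttl
		c.mu.Unlock()
	}
	return ips, err
}

func main() {
	c := newNegativeCache(30 * time.Second)
	fmt.Println(c.Lookup(context.Background(), "my-not-existing-record.example.com"))
	// A second call within 30s returns the cached error and generates no DNS traffic.
	fmt.Println(c.Lookup(context.Background(), "my-not-existing-record.example.com"))
}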

fuog avatar Oct 26 '23 18:10 fuog

We're experiencing the same behavior. With a few 'invalid' or 'temporarily invalid' svc ExternalName backend configurations, we noticed tons of messages like this and a huge number of DNS calls.

We tested the same scenario with traefik as an ingress controller - no issue at all, just a 502 response on the client call.

mjozefcz avatar Oct 31 '23 15:10 mjozefcz

/reopen

tao12345666333 avatar Feb 17 '24 10:02 tao12345666333

@tao12345666333: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 17 '24 10:02 k8s-ci-robot

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 17 '24 10:02 k8s-ci-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 18 '24 11:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 18 '24 11:03 k8s-ci-robot