
Timed out waiting for hostname/IP address when creating Ingress

fbarrerafalabella opened this issue 3 years ago · 2 comments

What happened:

I am running a Kubernetes cluster on GCP, currently on version 1.21. In order to update to 1.22, I updated my ingress-nginx chart from 3.34 to 4.2.5, since the 3.x versions use APIs that are deprecated in Kubernetes 1.22. With this version, however, when I try to create certain Ingresses I get a timeout waiting for the .status.loadBalancer field. Is this a known issue? Do I have to update to k8s 1.22 first, or am I missing something? I'm deploying the Helm chart with Pulumi, and I'm using 2 controllers, 1 internal and 1 external. This happens mainly on the external controller.

What you expected to happen: The Ingress is created and assigned to the right controller.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller Release: v1.3.1 Build: 92534fa2ae799b502882c8684db13a25cde68155 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.19.10


Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14-gke.700", GitCommit:"1781919224b267c523fd76047cebf7b14c6aa1d9", GitTreeState:"clean", BuildDate:"2022-06-28T09:30:29Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GCP
  • OS (e.g. from /etc/os-release): Google optimized
  • Kernel (e.g. uname -a):
  • Install tools:
    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
  • Basic cluster related info:
    • kubectl version Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14-gke.700", GitCommit:"1781919224b267c523fd76047cebf7b14c6aa1d9", GitTreeState:"clean", BuildDate:"2022-06-28T09:30:29Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}
    • kubectl get nodes -o wide
NAME                                        STATUS   ROLES    AGE     VERSION            INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-linio-general-1-e6adc66-0483ca1c-9xqu   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.95    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-b4ia   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.51    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-bzbi   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.96    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-d3if   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.18    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-qbyn   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.98    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-r4jx   Ready    <none>   7h20m   v1.21.14-gke.700   10.128.0.135   <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-st6h   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.87    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-0483ca1c-wj4v   Ready    <none>   46m     v1.21.14-gke.700   10.128.0.67    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-55cw   Ready    <none>   46m     v1.21.14-gke.700   10.128.0.50    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-6g91   Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.86    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-7hha   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.93    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-krak   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.91    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-mej1   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.85    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-uyg0   Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.90    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-ynag   Ready    <none>   7d1h    v1.21.14-gke.700   10.128.0.92    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-general-1-e6adc66-13e85020-zxlu   Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.58    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-highmem-14ac7af-2779c2f1-0dz5     Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.89    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-highmem-14ac7af-2779c2f1-4b0t     Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.21    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-highmem-14ac7af-540bdf9d-3j6a     Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.78    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
gke-linio-highmem-14ac7af-540bdf9d-nc9h     Ready    <none>   7d2h    v1.21.14-gke.700   10.128.0.84    <none>        Container-Optimized OS from Google   5.4.188+         docker://20.10.3
  • How was the ingress-nginx-controller installed: Helm chart using Pulumi
import * as pulumi from '@pulumi/pulumi';
import * as k8s from '@pulumi/kubernetes';
import * as config from './config';
import { namespace } from './namespace';
import { ip as externalIpAddress } from './externalIpAddress';
import { defaultCertificate } from './defaultCertificate';

interface NginxConfiguration {
  internal: {
    replicas: { min: number; max: number };
    resources: k8s.types.input.core.v1.ResourceRequirements;
  };
  external: {
    replicas: { min: number; max: number };
    resources: k8s.types.input.core.v1.ResourceRequirements;
  };
}

const nginx: NginxConfiguration = config.project.requireObject('deployment');

new k8s.helm.v3.Chart(`${config.projectName}-internal`, {
  chart: 'ingress-nginx',
  version: config.project.require('chartVersion'),
  fetchOpts: {
    repo: 'https://kubernetes.github.io/ingress-nginx',
  },
  namespace: namespace.metadata.name,
  values: {
    controller: {
      ingressClass: 'ingress-nginx-v4-internal',
      ingressClassResource: {
        name: 'ingress-nginx-v4-internal',
        enabled: true,
        controllerValue: 'k8s.io/ingress-nginx-v4-internal',
      },
      config: {
        'enable-ocsp': 'true',
        'ssl-session-tickets': 'false',
        'http-snippet': `server {
      listen 18080;

      location /nginx_status {
        allow all;
        stub_status on;
      }

      location / {
        return 404;
      }
    }`,
    'enable-opentracing': 'true',
    'datadog-collector-host': '$DD_AGENT_HOST',
    'generate-request-id': 'true',
  },
  addHeaders: {
    'Strict-Transport-Security': 'max-age=63072000; includeSubDomains; preload',
    'X-Frame-Options': 'DENY',
    'X-Content-Type-Options': 'nosniff',
    'Referrer-Policy': 'no-referrer-when-downgrade',
  },
  autoscaling: {
    enabled: true,
    minReplicas: nginx.internal.replicas.min,
    maxReplicas: nginx.internal.replicas.max,
    targetMemoryUtilizationPercentage: 75,
    targetCPUUtilizationPercentage: 75,
  },
  service: {
    annotations: {
      'cloud.google.com/load-balancer-type': 'Internal',
    },
    externalTrafficPolicy: 'Local',
    loadBalancerIp: config.project.require('internalIp'),
  },
  publishService: {
    enabled: true,
  },
  extraEnvs: [
    {
      name: 'DD_AGENT_HOST',
      valueFrom: {
        fieldRef: {
          fieldPath: 'status.hostIP',
        },
      },
    },
  ],
  podAnnotations: {
    'cluster-autoscaler.kubernetes.io/safe-to-evict': 'true',
    'ad.datadoghq.com/nginx-ingress-controller.check_names': JSON.stringify(['nginx', 'nginx_ingress_controller']),
    'ad.datadoghq.com/nginx-ingress-controller.init_configs': JSON.stringify([{}, {}]),
    'ad.datadoghq.com/nginx-ingress-controller.instances': JSON.stringify([
      {
        nginx_status_url: 'http://%%host%%:18080/nginx_status',
      },
      {
        prometheus_url: 'http://%%host%%:10254/metrics',
      },
    ]),
    'ad.datadoghq.com/nginx-ingress-controller.logs': JSON.stringify([
      {
        service: config.projectName,
        source: 'nginx-ingress-controller',
      },
    ]),
    'config.linkerd.io/skip-inbound-ports': '80,443',
  },
  nodeSelector: {
    'node.k8s.linio.com/name': 'general',
  },
  extraArgs: {
    // eslint-disable-next-line @typescript-eslint/ban-ts-comment
    // @ts-ignore
    'default-ssl-certificate': pulumi.interpolate`${namespace.metadata.name}/${defaultCertificate.spec.secretName}`,
  },
  metrics: {
    enabled: true,
  },
  admissionWebhooks: {
    enabled: false,
  },
  resources: nginx.internal.resources,
},
defaultBackend: {
  enabled: false,
},

}, });

new k8s.helm.v3.Chart(`${config.projectName}-external`, {
  chart: 'ingress-nginx',
  version: config.project.require('chartVersion'),
  fetchOpts: {
    repo: 'https://kubernetes.github.io/ingress-nginx',
  },
  namespace: namespace.metadata.name,
  values: {
    controller: {
      ingressClass: 'ingress-nginx-v4-external',
      ingressClassResource: {
        name: 'ingress-nginx-v4-external',
        enabled: true,
        controllerValue: 'k8s.io/ingress-nginx-v4-external',
      },
      config: {
        'enable-ocsp': 'true',
        'ssl-session-tickets': 'false',
        'http-snippet': `server {
      listen 18080;

      location /nginx_status {
        allow all;
        stub_status on;
      }

      location / {
        return 404;
      }
    }`,
    'enable-opentracing': 'true',
    'datadog-collector-host': '$DD_AGENT_HOST',
    'generate-request-id': 'true',
    'block-cidrs': config.project.getObject<string[]>('blockCidrs')?.join(','),
  },
  addHeaders: {
    'Strict-Transport-Security': 'max-age=63072000; includeSubDomains; preload',
    'X-Frame-Options': 'DENY',
    'X-Content-Type-Options': 'nosniff',
    'Referrer-Policy': 'no-referrer-when-downgrade',
  },
  autoscaling: {
    enabled: true,
    minReplicas: nginx.external.replicas.min,
    maxReplicas: nginx.external.replicas.max,
    targetMemoryUtilizationPercentage: 75,
    targetCPUUtilizationPercentage: 75,
  },
  service: {
    externalTrafficPolicy: 'Local',
    loadBalancerIp: externalIpAddress.address,
  },
  publishService: {
    enabled: true,
  },
  extraEnvs: [
    {
      name: 'DD_AGENT_HOST',
      valueFrom: {
        fieldRef: {
          fieldPath: 'status.hostIP',
        },
      },
    },
  ],
  podAnnotations: {
    'cluster-autoscaler.kubernetes.io/safe-to-evict': 'true',
    'ad.datadoghq.com/nginx-ingress-controller.check_names': JSON.stringify(['nginx', 'nginx_ingress_controller']),
    'ad.datadoghq.com/nginx-ingress-controller.init_configs': JSON.stringify([{}, {}]),
    'ad.datadoghq.com/nginx-ingress-controller.instances': JSON.stringify([
      {
        nginx_status_url: 'http://%%host%%:18080/nginx_status',
      },
      {
        prometheus_url: 'http://%%host%%:10254/metrics',
      },
    ]),
    'ad.datadoghq.com/nginx-ingress-controller.logs': JSON.stringify([
      {
        service: config.projectName,
        source: 'nginx-ingress-controller',
      },
    ]),
    'config.linkerd.io/skip-inbound-ports': '80,443',
  },
  nodeSelector: {
    'node.k8s.linio.com/name': 'general',
  },
  useComponentLabel: true,
  extraArgs: {
    // eslint-disable-next-line @typescript-eslint/ban-ts-comment
    // @ts-ignore
    'default-ssl-certificate': pulumi.interpolate`${namespace.metadata.name}/${defaultCertificate.spec.secretName}`,
  },
  metrics: {
    enabled: true,
  },
  resources: nginx.external.resources,
  admissionWebhooks: {
    enabled: false,
  },
},
defaultBackend: {
  enabled: false,
},

}, });

  • Current State of the controller:
    • kubectl describe ingressclasses
Name:         ingress-nginx-v4-external
Labels:       app.kubernetes.io/component=controller
             app.kubernetes.io/instance=ingress-nginx-v4-external
             app.kubernetes.io/managed-by=pulumi
             app.kubernetes.io/name=ingress-nginx
             app.kubernetes.io/part-of=ingress-nginx
             app.kubernetes.io/version=1.3.1
             helm.sh/chart=ingress-nginx-4.2.5
Annotations:  <none>
Controller:   k8s.io/ingress-nginx-v4-external
Events:       <none>


Name:         ingress-nginx-v4-internal
Labels:       app.kubernetes.io/component=controller
             app.kubernetes.io/instance=ingress-nginx-v4-internal
             app.kubernetes.io/managed-by=pulumi
             app.kubernetes.io/name=ingress-nginx
             app.kubernetes.io/part-of=ingress-nginx
             app.kubernetes.io/version=1.3.1
             helm.sh/chart=ingress-nginx-4.2.5
Annotations:  <none>
Controller:   k8s.io/ingress-nginx-v4-internal
Events:       <none>
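Note: a controller only populates an Ingress's .status.loadBalancer once the Ingress names an IngressClass that controller watches, via the controllerValue shown above. A minimal sketch against those classes (the app name, host, and backend Service are hypothetical placeholders):

```yaml
# Hypothetical Ingress targeting the external controller defined above.
# spec.ingressClassName must match one of the IngressClass names exactly;
# otherwise neither controller claims it and the status stays empty.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  ingressClassName: ingress-nginx-v4-external
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```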
  • kubectl -n ingress-nginx-v4 get all -A -o wide (output attached as file tmp.txt)
  • kubectl -n ingress-nginx-v4 describe po ingress-nginx-v4-internal-controller-d9949d746-wvz92
Name:         ingress-nginx-v4-internal-controller-d9949d746-wvz92
Namespace:    ingress-nginx-v4
Priority:     0

Node:         gke-linio-general-1-e6adc66-0483ca1c-r4jx/10.128.0.135
Start Time:   Wed, 21 Sep 2022 12:27:05 -0500
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx-v4-internal
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=d9949d746
Annotations:  ad.datadoghq.com/nginx-ingress-controller.check_names: ["nginx","nginx_ingress_controller"]
              ad.datadoghq.com/nginx-ingress-controller.init_configs: [{},{}]
              ad.datadoghq.com/nginx-ingress-controller.instances:
                [{"nginx_status_url":"http://%%host%%:18080/nginx_status"},{"prometheus_url":"http://%%host%%:10254/metrics"}]
              ad.datadoghq.com/nginx-ingress-controller.logs: [{"service":"ingress-nginx-v4","source":"nginx-ingress-controller"}]
              cluster-autoscaler.kubernetes.io/safe-to-evict: true
              config.linkerd.io/skip-inbound-ports: 80,443
Status:       Running
IP:           10.130.21.36
IPs:
  IP:           10.130.21.36
Controlled By:  ReplicaSet/ingress-nginx-v4-internal-controller-d9949d746
Containers:
  controller:
    Container ID:  docker://9f6794d285e42c989369e0f70ab557662b167109361eca13a1f5ce9f94a9046a
    Image:         registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Image ID:      docker-pullable://registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Ports:         80/TCP, 443/TCP, 10254/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-v4-internal-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx-v4-internal
      --ingress-class=ingress-nginx-v4-internal
      --configmap=$(POD_NAMESPACE)/ingress-nginx-v4-internal-controller
      --default-ssl-certificate=ingress-nginx-v4/default-certificate
    State:          Running
      Started:      Wed, 21 Sep 2022 12:27:07 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   512Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-v4-internal-controller-d9949d746-wvz92 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx-v4 (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
      DD_AGENT_HOST:   (v1:status.hostIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrj92 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-vrj92:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             node.k8s.linio.com/name=general
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  RELOAD  58m (x3 over 97m)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  • kubectl -n ingress-nginx-v4 describe po ingress-nginx-v4-external-controller-6f6f9d6844-8khng
Name:         ingress-nginx-v4-external-controller-6f6f9d6844-8khng
Namespace:    ingress-nginx-v4
Priority:     0
Node:         gke-linio-general-1-e6adc66-13e85020-55cw/10.128.0.50
Start Time:   Wed, 21 Sep 2022 13:35:19 -0500
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx-v4-external
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=6f6f9d6844
Annotations:  ad.datadoghq.com/nginx-ingress-controller.check_names: ["nginx","nginx_ingress_controller"]
              ad.datadoghq.com/nginx-ingress-controller.init_configs: [{},{}]
              ad.datadoghq.com/nginx-ingress-controller.instances:
                [{"nginx_status_url":"http://%%host%%:18080/nginx_status"},{"prometheus_url":"http://%%host%%:10254/metrics"}]
              ad.datadoghq.com/nginx-ingress-controller.logs: [{"service":"ingress-nginx-v4","source":"nginx-ingress-controller"}]
              cluster-autoscaler.kubernetes.io/safe-to-evict: true
              config.linkerd.io/skip-inbound-ports: 80,443
Status:       Running
IP:           10.130.9.55
IPs:
  IP:           10.130.9.55
Controlled By:  ReplicaSet/ingress-nginx-v4-external-controller-6f6f9d6844
Containers:
  controller:
    Container ID:  docker://6b3e97b44dbcaea4969d26d51b62f7d9a82c7432d1aa0d7c2ad6721d1dc68135
    Image:         registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Image ID:      docker-pullable://registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Ports:         80/TCP, 443/TCP, 10254/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-v4-external-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx-v4-external
      --ingress-class=ingress-nginx-v4-external
      --configmap=$(POD_NAMESPACE)/ingress-nginx-v4-external-controller
      --default-ssl-certificate=ingress-nginx-v4/default-certificate
    State:          Running
      Started:      Wed, 21 Sep 2022 13:35:21 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   512Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-v4-external-controller-6f6f9d6844-8khng (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx-v4 (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
      DD_AGENT_HOST:   (v1:status.hostIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcc27 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-lcc27:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             node.k8s.linio.com/name=general
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                From                      Message
  ----    ------     ----               ----                      -------
  Normal  Scheduled  31m                default-scheduler         Successfully assigned ingress-nginx-v4/ingress-nginx-v4-external-controller-6f6f9d6844-8khng to gke-linio-general-1-e6adc66-13e85020-55cw
  Normal  Pulled     31m                kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
  Normal  Created    31m                kubelet                   Created container controller
  Normal  Started    31m                kubelet                   Started container controller
  Normal  RELOAD     29m (x3 over 31m)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  • kubectl -n ingress-nginx-v4 describe svc ingress-nginx-v4-external-controller
Name:                     ingress-nginx-v4-external-controller
Namespace:                ingress-nginx-v4
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx-v4-external
                          app.kubernetes.io/managed-by=pulumi
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
                          helm.sh/chart=ingress-nginx-4.2.5
Annotations:              cloud.google.com/neg: {"ingress":true}
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-v4-external,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.132.152.20
IPs:                      10.132.152.20
LoadBalancer Ingress:     35.231.66.105
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30102/TCP
Endpoints:                10.130.10.90:80,10.130.9.55:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32412/TCP
Endpoints:                10.130.10.90:443,10.130.9.55:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31013
Events:
  Type    Reason               Age                   From                Message
  ----    ------               ----                  ----                -------
  Normal  UpdatedLoadBalancer  31m (x270 over 5d9h)  service-controller  Updated load balancer with new hosts
  • kubectl -n ingress-nginx-v4 describe svc ingress-nginx-v4-internal-controller
Name:                     ingress-nginx-v4-internal-controller
Namespace:                ingress-nginx-v4
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx-v4-internal
                          app.kubernetes.io/managed-by=pulumi
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
                          helm.sh/chart=ingress-nginx-4.2.5
Annotations:              cloud.google.com/load-balancer-type: Internal
                          cloud.google.com/neg: {"ingress":true}
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-v4-internal,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.132.131.91
IPs:                      10.132.131.91
LoadBalancer Ingress:     10.128.0.12
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30530/TCP
Endpoints:                10.130.21.36:80,10.130.3.70:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30605/TCP
Endpoints:                10.130.21.36:443,10.130.3.70:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32697
Events:
  Type    Reason               Age                   From                Message
  ----    ------               ----                  ----                -------
  Normal  UpdatedLoadBalancer  33m (x263 over 5d9h)  service-controller  Updated load balancer with new hosts
  • Current state of ingress object, if applicable: the Ingress was not created

  • Others:

* the Kubernetes API server reported that "mailgun-events-pr-106/mailgun-events-v4-wu8wrsnu" failed to fully initialize or become live: 'mailgun-events-v4-wu8wrsnu' timed out waiting to be Ready
    	* Ingress .status.loadBalancer field was not updated with a hostname/IP address.
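As background, the controller copies the address of the Service named by its --publish-service flag into the status of each Ingress it claims; the timeout above means that field never appeared. One way to inspect this directly, shown as a sketch using the names from the error message (assumes kubectl access to the cluster):

```shell
# Does the stuck Ingress have a populated status? (empty output = not claimed)
kubectl -n mailgun-events-pr-106 get ingress mailgun-events-v4-wu8wrsnu \
  -o jsonpath='{.status.loadBalancer.ingress}'

# Which class does it reference? It must match an IngressClass the
# controllers watch (ingress-nginx-v4-internal / ingress-nginx-v4-external).
kubectl -n mailgun-events-pr-106 get ingress mailgun-events-v4-wu8wrsnu \
  -o jsonpath='{.spec.ingressClassName}'
```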

How to reproduce this issue: install external and internal ingress-nginx v4.2.5 on a cluster running Kubernetes 1.21.14 on GCP.

fbarrerafalabella · Sep 21 '22 19:09

@fbarrerafalabella: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Sep 21 '22 19:09

That is NOT as per the deployment docs https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke. Please install as per the deployment docs and update the status. /remove-kind bug

longwuyuan · Sep 22 '22 00:09

Is it the same using Helm? I see in the documentation that this is another permitted method.

fbarrerafalabella · Sep 23 '22 22:09

I updated the ticket with the results of using the method found in the documentation, thanks.

fbarrerafalabella · Sep 23 '22 22:09

It still says "helm install", so I don't think the update after using the documented deploy process is correct, because the documented deploy process does not use Helm for GKE.

longwuyuan · Sep 24 '22 02:09

I see, then I have a doubt, because I couldn't find anything about this in the link you sent me: how can I enable both internal and external controllers with it? I tried deploying with that link, but when I list the services this is all I get:

kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.132.242.214   34.148.183.123   80:30557/TCP,443:31288/TCP   71s
ingress-nginx-controller-admission   ClusterIP      10.132.206.26    <none>           443/TCP                      71s

This didn't solve the issue. In fact I cannot reproduce it anymore (now, instead of not assigning the external IP, it cannot assign the internal ones), because this deployment only handles the external traffic, not internal.

fbarrerafalabella · Sep 25 '22 22:09

% helm template -n ingress-nginx ingress-nginx/ingress-nginx -f values.yaml| grep -B15  "LoadBalancer"
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: release-name-ingress-nginx-controller-internal
  namespace: ingress-nginx
spec:
  type: "LoadBalancer"
--
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: release-name-ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer

longwuyuan · Sep 25 '22 22:09

I am a little confused now. You mentioned the Helm installation is not the documented deploy process for GKE, yet the command you sent me uses helm. In this link https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml (which is the one used in kubectl apply -f ...) I only see 1 service, the same one I get when I list the services inside the ingress-nginx namespace (my last comment). Do I have to follow the Helm installation using the ingress-nginx template? That's the process I have already followed, but from what you told me I understood it was wrong.

Or do I have to do a kubectl apply with the extract you mention?

Thanks!

fbarrerafalabella · Sep 25 '22 23:09

You want internal and also external in one install of the controller. The extract demonstrates how you can get one YAML manifest with both the internal and the external Service from that same single install of the controller.

The YAML we publish and document for GKE has only the external Service.

There are too many use cases, so it's impossible to write docs for all of them.

Most users who want both internal and external Services either have 2 installs, one for internal and one for external, or they use information like the extract to do what they need.

Hope this helps.

longwuyuan · Sep 26 '22 02:09
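For completeness: the two-Service output shown in the helm template extract above can be produced from a single chart install, since the chart supports an additional internal Service. A hedged values.yaml sketch (keys as understood from chart 4.2.5; the annotation is the one appearing in the extract, and specifics like static IPs are placeholders to adapt):

```yaml
# values.yaml sketch for ingress-nginx chart ~4.2.5: one controller install
# exposing two Services, one external and one GKE-internal, matching the
# `helm template` extract above.
controller:
  service:
    enabled: true                 # the default external LoadBalancer Service
    internal:
      enabled: true               # adds the "-internal" LoadBalancer Service
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
```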

/retitle Internal service not working on GKE

longwuyuan · Sep 26 '22 03:09

I created the Ingress following your recommendations, but the internal IP is still not assigned. Here is my YAML file:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-controller-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-internal
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

I also saw the following in the logs of my controller:

"Ignoring ingress because of error while validating ingress class" ingress="sellercenter-channel-advisor-api-pr-999/sellercenter-channel-advisor-api-v4-f8bwhmqj" error="no object matching key \"nginx-internal\" in local store"

I updated the description with the values from the kubectl apply.

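For reference, that "no object matching key \"nginx-internal\" in local store" error means some Ingress in the cluster sets ingressClassName: nginx-internal, but no IngressClass of that name exists. A minimal sketch of such an object follows; the controller value here is an assumption and must match the --controller-class flag of the controller that should serve this class:

```yaml
# Hypothetical IngressClass for an internal controller; spec.controller
# must match that controller's --controller-class argument.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/internal-ingress-nginx
```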
fbarrerafalabella avatar Sep 27 '22 19:09 fbarrerafalabella

  • I cannot study each and every object and resource in your YAML without the context and details of the command and values file used to generate it
  • The error message hints that you customized the install and did not use a default install, but you have not posted the customization data (maybe for security reasons), so it's not possible to say precisely what happened
  • It's also likely that you are not doing a complete cleanup, removing all the resources and objects created by your previous failed install, before attempting the next install. Again, I cannot say precisely, because there is no data like kubectl get ingressclasses from before the install attempt
  • The YAML you posted may have come from a helm template command, but I am not certain; the error message could be from an attempt to create an Ingress resource, but again I am not certain
  • I think it will help if you ensure that all resources created by a previous failed attempt are cleaned up before attempting the next install
  • I also think you should post all the small details, like the command used to create the YAML and do the install, including the values file
  • I also think you should post the state of the cluster and the controller-related resources using the kubectl get & kubectl describe commands
  • There was a bug in the AKS (Azure) platform after some updates, related to appProtocol; the health-check path was misconfigured, etc. Since multiple users are not reporting this on GKE, there is not enough data to suspect the same problem here

It's not possible to provide support for such a basic install process on GitHub, because there is no data hinting at a bug or a problem that needs to be fixed in the controller (and there is a lack of resources for support). It's better that you discuss this in the K8S Slack, as there are more engineers and users there.

longwuyuan avatar Sep 27 '22 22:09 longwuyuan

Here are the answers:

  • The YAML file is exactly the one in the documentation for GKE at this link: https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke. I just added this part as you suggested (literally copied, pasted into my code editor, and added one Service; there are no customizations and no hidden data, as this is a staging env):
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-internal
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer

since that command created only the external one:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
  • I installed exactly as shown in the description, to avoid external problems and to stay aligned with the documentation (it was also your first request, since I was using Pulumi, then Helm, and now just kubectl apply as the documentation says); nothing more, nothing less. There are no custom parameters apart from the fragment of code I just mentioned
  • As I am running the old 3.x.x version, there are no ingressclasses, but here is the result (I removed the ingress installed with kubectl apply -f ingress.yaml):
kubectl get ingressclass --all-namespaces
No resources found
  • Resources from previous installations of the 4.x.x version were deleted
  • There are no other details; I am being as detailed as I can. The YAML file is the one I mentioned: I took the original from https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml, which is the file used in the official documentation, and just added one Service to act as the internal controller
  • The state of the cluster and the controller resources is already in the description (I updated it with the latest values, as I said in my previous comment); those are the values I get after installing ingress-nginx
  • I also don't think the problem from Azure could be affecting GKE

Do you know where I can find the link to the Slack channel to share this?

fbarrerafalabella avatar Sep 27 '22 22:09 fbarrerafalabella

Do you have this state after the latest install?

MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
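
That secret is normally created by the ingress-nginx-admission-create Job from the manifest, so if the Job never ran or failed, the controller pod cannot mount the webhook cert. A quick way to check, assuming the default namespace and resource names from the manifest above, would be something like:

```shell
# Check whether the admission-webhook Jobs ran and the secret exists
kubectl -n ingress-nginx get jobs
kubectl -n ingress-nginx get secret ingress-nginx-admission
kubectl -n ingress-nginx logs job/ingress-nginx-admission-create
```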

longwuyuan avatar Sep 27 '22 22:09 longwuyuan

Slack for K8S is at kubernetes.slack.com; registration is at slack.k8s.io. You can ask in the #gke channel.

longwuyuan avatar Sep 27 '22 22:09 longwuyuan

Also, I think someone posted a message in another issue (I can't recall which one) about the GCP firewall needing extra rules to allow an LB over the internal network. You can double-check whether your firewall rules allow internal network traffic on the required ports.

longwuyuan avatar Sep 27 '22 22:09 longwuyuan

Yes, I hadn't noticed the MountVolume error, but that message is from the latest install. We have created the firewall rules, because right now we are running 3.34.0, and with that version we have both internal and external controllers. I noticed that the IngressClass object was introduced in the 4.x.x versions because of the deprecation of the beta Ingress API used in the 3.x.x versions.

fbarrerafalabella avatar Sep 27 '22 22:09 fbarrerafalabella

OK, so that means the information you have posted cannot be used to analyze the problem.

If you do the recommended steps for a clean, supported install and then edit your original post in this issue to update all the information, maybe we can find some info that helps.

This does not seem like difficult troubleshooting, but it is getting drawn out for a long time because you are sending messages without providing useful, relevant data from a fresh, clean install attempt in a way that relates to the problem analysis. I suspect the controller pod, the cluster events, your GCP logs, or some such tool will contain info on two aspects: the current state, and the immediate reason causing the current state.

longwuyuan avatar Sep 28 '22 00:09 longwuyuan

/kind support

longwuyuan avatar Sep 28 '22 15:09 longwuyuan

Hmm, even though I performed all the steps described in the documentation, it seems you cannot give any support. Thanks for the help; I will open another issue if needed, with a clean install on a brand-new cluster and a brand-new file.

fbarrerafalabella avatar Sep 28 '22 20:09 fbarrerafalabella

I think you did not perform the steps like this:

  • Uninstall the controller using the same values file or manifest that you used to install
  • Check manually whether any resource created by a previous install is lingering
  • Install the controller as per the linked docs
  • Install a second controller as per https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster, but change the manifest of this second controller to configure an internal controller
  • Capture the data, for example with commands like these:
    • Before installing --> kubectl get all,ing -A -o wide
    • Copy/paste the exact command and output of installing the internal controller
    • kubectl -n ingress-nginx get all -o wide
    • kubectl -n ingress-nginx describe po
    • kubectl -n ingress-nginx describe svc
    • kubectl -n ingress-nginx get events -o wide
  • Capture some GCP logs related to the creation of the LB service

I am providing you support as a volunteer, and this is open source, so I am sorry if I cannot meet all your expectations. It's not easy to watch and take care of each and every message from you, like paid support would. This works best when, as a community, we help each other with data to analyze and comment on. Apologies for not being able to solve your problem; I hope someone else solves it for you.

You need not close the issue; you can wait and see if other engineers can help you here. But there may be other people using a GKE internal LB on the kubernetes.slack.com forum, so you may get helpful info/tips/advice from them in the #gke channel or the #ingress-nginx-users channel in Slack. There are also more engineers and developers there to look at a problem description.

longwuyuan avatar Sep 28 '22 23:09 longwuyuan