
authservice pod never reaches ready state

abstractalchemist opened this issue · 3 comments

I've been trying to get the bookinfo example to work on a newly created Kubernetes install. After running the helm template command as indicated, the authservice container in the authservice pod continuously fails its readiness check.
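For context, the manifests were rendered and applied roughly like this (the chart path and release name below are placeholders based on the bookinfo-example README, not the exact commands I ran):

cd ~/authservice/bookinfo-example
# render the authservice chart and apply the resulting manifests
helm template authservice ./authservice | kubectl apply -f -
# watch the pod; the authservice container never reports ready
kubectl get pods -l app=authservice -w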

ssm-user@ip-172-31-42-1:~/authservice/bookinfo-example$ kubectl describe po -l app=authservice
Name:         authservice-6575f765c-48rr7
Namespace:    default
Priority:     0
Node:         ip-172-31-42-1/172.31.42.1
Start Time:   Thu, 25 Nov 2021 23:47:22 +0000
Labels:       app=authservice
              pod-template-hash=6575f765c
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=authservice
              service.istio.io/canonical-revision=latest
Annotations:  cni.projectcalico.org/podIP: 10.1.194.81/32
              cni.projectcalico.org/podIPs: 10.1.194.81/32
              kubectl.kubernetes.io/default-container: authservice
              kubectl.kubernetes.io/default-logs-container: authservice
              kubectl.kubernetes.io/restartedAt: 2021-11-25T23:42:38Z
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/status:
                {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
Status:       Running
IP:           10.1.194.81
IPs:
  IP:           10.1.194.81
Controlled By:  ReplicaSet/authservice-6575f765c
Init Containers:
  istio-init:
    Container ID:  containerd://53752e0bb2b94d2f2e7d9db4533bcc3fdbed493c3b0163717272ebb38eccddc7
    Image:         docker.io/istio/proxyv2:1.12.0
    Image ID:      docker.io/istio/proxyv2@sha256:6734c59bab78320fcb2f38cc5da0d6c8a40e484a8eaac5fa6709fe1e4ddec25e
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 25 Nov 2021 23:47:23 +0000
      Finished:     Thu, 25 Nov 2021 23:47:23 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        10m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d6j76 (ro)
Containers:
  authservice:
    Container ID:   containerd://f8f975c225dd6d5fe19b2fd2aefdbfb67c48ce98a3e7d1b34d1f388e1e22dad9
    Image:          ghcr.io/istio-ecosystem/authservice/authservice:0.4.1
    Image ID:       ghcr.io/istio-ecosystem/authservice/authservice@sha256:6acde8074d7962d3a26e6f9da77911b5725496b8cc790201f511a47fa05127ce
    Port:           10003/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 25 Nov 2021 23:47:24 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:15020/app-health/authservice/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/authservice from authservice-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d6j76 (ro)
  istio-proxy:
    Container ID:  containerd://f86dfbfd4d92ea6520454e7c73688afd5e41e3c9a5560f8015c6c52a2bb3c66c
    Image:         docker.io/istio/proxyv2:1.12.0
    Image ID:      docker.io/istio/proxyv2@sha256:6734c59bab78320fcb2f38cc5da0d6c8a40e484a8eaac5fa6709fe1e4ddec25e
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Thu, 25 Nov 2021 23:47:24 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      10m
      memory:   40Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      authservice-6575f765c-48rr7 (v1:metadata.name)
      POD_NAMESPACE:                 default (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"containerPort":10003,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     authservice
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      authservice
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/default/deployments/authservice
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
      ISTIO_KUBE_APP_PROBERS:        {"/app-health/authservice/readyz":{"httpGet":{"path":"/healthz","port":10004,"scheme":"HTTP"},"timeoutSeconds":1}}
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d6j76 (ro)
      /var/run/secrets/tokens from istio-token (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  authservice-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      authservice
    Optional:  false
  kube-api-access-d6j76:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  37s               default-scheduler  Successfully assigned default/authservice-6575f765c-48rr7 to ip-172-31-42-1
  Normal   Pulled     38s               kubelet            Container image "docker.io/istio/proxyv2:1.12.0" already present on machine
  Normal   Created    38s               kubelet            Created container istio-init
  Normal   Started    37s               kubelet            Started container istio-init
  Normal   Pulling    36s               kubelet            Pulling image "ghcr.io/istio-ecosystem/authservice/authservice:0.4.1"
  Normal   Pulled     36s               kubelet            Successfully pulled image "ghcr.io/istio-ecosystem/authservice/authservice:0.4.1" in 466.91771ms
  Normal   Created    36s               kubelet            Created container authservice
  Normal   Started    36s               kubelet            Started container authservice
  Normal   Pulled     36s               kubelet            Container image "docker.io/istio/proxyv2:1.12.0" already present on machine
  Normal   Created    36s               kubelet            Created container istio-proxy
  Normal   Started    36s               kubelet            Started container istio-proxy
  Warning  Unhealthy  8s (x5 over 34s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

Kubernetes 1.22, Istio 1.12, OS: Ubuntu 20. Using microk8s for the Kubernetes implementation, with the metallb and dns add-ons enabled.

abstractalchemist commented on Nov 25, 2021

It's your istio-proxy container that isn't ready, rather than authservice. You should probably check your Istio deployment, for example the istio-proxy container logs, and verify whether istiod is down.
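Something like the following should show whether the sidecar and the control plane are healthy (the deployment and namespace names here are the defaults; adjust if yours differ):

# sidecar logs for the authservice pod (look for xDS or certificate errors)
kubectl logs deploy/authservice -c istio-proxy
# is istiod up?
kubectl get pods -n istio-system
kubectl logs -n istio-system deploy/istiod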

incfly commented on Nov 26, 2021

On another system that has the exact same issue, I'm seeing this in the istio-proxy logs:

2021-11-26T23:04:58.744886Z	info	ads	SDS: PUSH request for node:authservice-7dcdc4f6c9-ml9qr.default resources:1 size:1.1kB resource:ROOTCA
2021-11-26T23:04:58.745000Z	info	cache	returned workload certificate from cache	ttl=23h59m59.255005648s
2021-11-26T23:04:58.745128Z	info	ads	SDS: PUSH request for node:authservice-7dcdc4f6c9-ml9qr.default resources:1 size:4.0kB resource:default
2021-11-26T23:04:58.745218Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.254786775s
2021-11-26T23:04:58.745254Z	info	ads	SDS: PUSH for node:authservice-7dcdc4f6c9-ml9qr.default resources:1 size:1.1kB resource:ROOTCA
2021-11-26T23:04:59.531992Z	info	Readiness succeeded in 1.208574728s
2021-11-26T23:04:59.532315Z	info	Envoy proxy is ready
2021-11-26T23:05:05.530987Z	error	Request to probe app failed: Get "http://192.168.92.233:10004/healthz": dial tcp 127.0.0.6:0->192.168.92.233:10004: connect: connection refused, original URL path = /app-health/authservice/readyz
app URL path = /healthz
^C
[jmhirata@localhost ~]$ kubectl logs -f $( kubectl get po -l app=authservice -o jsonpath='{.items[0].metadata.name}' ) -c authservice
D1126 23:04:58.260309132       1 ev_posix.cc:173]            Using polling engine: epollex
D1126 23:04:58.260437057       1 lb_policy_registry.cc:42]   registering LB policy factory for "grpclb"
D1126 23:04:58.260451144       1 lb_policy_registry.cc:42]   registering LB policy factory for "priority_experimental"
D1126 23:04:58.260459234       1 lb_policy_registry.cc:42]   registering LB policy factory for "weighted_target_experimental"
D1126 23:04:58.260467447       1 lb_policy_registry.cc:42]   registering LB policy factory for "pick_first"
D1126 23:04:58.260474938       1 lb_policy_registry.cc:42]   registering LB policy factory for "round_robin"
D1126 23:04:58.260483720       1 dns_resolver_ares.cc:507]   Using ares dns resolver
D1126 23:04:58.260514655       1 lb_policy_registry.cc:42]   registering LB policy factory for "cds_experimental"
D1126 23:04:58.260524715       1 lb_policy_registry.cc:42]   registering LB policy factory for "eds_experimental"
D1126 23:04:58.260534757       1 lb_policy_registry.cc:42]   registering LB policy factory for "xds_cluster_impl_experimental"
D1126 23:04:58.260545204       1 lb_policy_registry.cc:42]   registering LB policy factory for "xds_cluster_manager_experimental"
[2021-11-26 23:04:58.263] [console] [info] Run: Server listening on 0.0.0.0:10003
[2021-11-26 23:04:58.263] [console] [trace] Creating V3 request processor state
[2021-11-26 23:04:58.263] [console] [warning] Creating V2 request processor state. V2 will be deprecated in 2021 Q1
I1126 23:04:58.262868308       1 socket_utils_common_posix.cc:427] Disabling AF_INET6 sockets because ::1 is not available.
I1126 23:04:58.262931479       1 socket_utils_common_posix.cc:352] TCP_USER_TIMEOUT is available. TCP_USER_TIMEOUT will be used thereafter
I1126 23:04:58.263031952       1 tcp_server_posix.cc:325]    Failed to add :: listener, the environment may not support IPv6: {"created":"@1637967898.262904539","description":"Address family not supported by protocol","errno":97,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":443,"os_error":"Address family not supported by protocol","syscall":"socket","target_address":"[::]:10003"}

NAME                              READY   STATUS    RESTARTS   AGE    IP               NODE                                           NOMINATED NODE   READINESS GATES
authservice-7dcdc4f6c9-ml9qr      2/3     Running   0          29s    192.168.92.233   ip-192-168-88-142.us-west-1.compute.internal   <none>           <none>

And istiod is up:

grafana-6ccd56f4b6-gph9r               1/1     Running   0          6d19h
istio-egressgateway-7f4864f59c-wrqnm   1/1     Running   0          6d19h
istio-ingressgateway-55d9fb9f-m96rs    1/1     Running   0          6d19h
istiod-55d5649869-4mx5n                1/1     Running   0          24h
jaeger-5d44bc5c5d-d8ncx                1/1     Running   0          6d19h
kiali-79b86ff5bc-9ldzs                 1/1     Running   0          6d19h
prometheus-64fd8ccd65-rn8pz            2/2     Running   0          6d19h

abstractalchemist commented on Nov 26, 2021

2021-11-26T23:05:05.530987Z error Request to probe app failed: Get "http://192.168.92.233:10004/healthz": dial tcp 127.0.0.6:0->192.168.92.233:10004: connect: connection refused, original URL path = /app-health/authservice/readyz

It seems the health check requests failed: the sidecar forwards the rewritten readiness probe to port 10004 in the authservice container, and that connection is refused, so nothing appears to be listening on the health port. This is why the authservice pod wasn't ready.
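A quick way to confirm is to curl the app health port from inside the pod; a sketch, assuming the istio-proxy image still ships with curl and using the pod name from the output above:

# probe the endpoint the sidecar forwards the rewritten readiness check to
kubectl exec authservice-7dcdc4f6c9-ml9qr -c istio-proxy -- curl -sv http://127.0.0.1:10004/healthz
# and inspect the authservice ConfigMap to see which ports it is configured to listen on
kubectl get configmap authservice -o yaml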

Shikugawa commented on Nov 29, 2021