Linkerd causing Nginx defaultbackend to return 500/502 on readiness probes
What is the issue?
When you install the Nginx ingress controller with defaultbackend enabled and inject linkerd-proxy, the defaultbackend component continuously fails readiness/liveness probes because linkerd-proxy keeps returning 500/502s with the following errors:
[ 1849.685244s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:32860}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 1849.685287s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:32860}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 1849.685296s] WARN ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:32860}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
[ 2259.684985s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:34498}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 2259.685026s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:34498}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 2259.685034s] WARN ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.244.3.1:34498}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
With debug logs enabled, here is the log block for one such error (a sketch for enabling these logs follows the block):
[296729.838928s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_app_inbound::accept: Accepted policy=AllowPolicy { dst: OrigDstAddr(10.244.0.5:8080), server: Receiver { shared: Shared { value: RwLock(PhantomData, RwLock { data: ServerPolicy { protocol: Detect { timeout: 10s }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, name: "default:all-unauthenticated" }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, name: "default:localhost" }], name: "default:all-unauthenticated" } }), state: AtomicState(2), ref_count_rx: 12, notify_rx: Notify { state: 4, waiters: Mutex(PhantomData, Mutex { data: LinkedList { head: None, tail: None } }) }, notify_tx: Notify { state: 1, waiters: Mutex(PhantomData, Mutex { data: LinkedList { head: Some(0x7f68316b8d78), tail: Some(0x7f68316b8d78) } }) } }, version: Version(0) } }
[296729.838971s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_tls::server: Peeked bytes from TCP stream sz=109
[296729.838986s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=5.3µs
[296729.838993s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[296729.839008s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[296729.839027s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: hyper::proto::h1::io: parsed 4 headers
[296729.839031s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: hyper::proto::h1::conn: incoming body is empty
[296729.839041s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}: linkerd_app_inbound::policy::authorize::http: Request authorized permit=Permit { dst: OrigDstAddr(10.244.0.5:8080), protocol: Detect { timeout: 10s }, labels: AuthzLabels { server: ServerLabel("default:all-unauthenticated"), authz: "default:all-unauthenticated" } } tls=None(NoClientHello) client=10.244.0.1:43986
[296729.839075s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:http1: tower::buffer::worker: service.ready=true processing request
[296729.839085s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:http1:profile:http1: linkerd_proxy_http::client: method=GET uri=http://10.244.0.5:8080/healthz version=HTTP/1.1
[296729.839091s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:http1:profile:http1: linkerd_proxy_http::client: headers={"host": "10.244.0.5:8080", "user-agent": "kube-probe/1.22", "accept": "*/*"}
[296729.839115s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:http1:profile:http1: hyper::client::pool: reuse idle connection for ("http", 10.244.0.5:8080)
[296729.839163s] DEBUG ThreadId(01) hyper::proto::h1::io: flushed 90 bytes
[296729.839178s] DEBUG ThreadId(01) hyper::proto::h1::conn: parse error (connection closed before message completed) with 0 bytes
[296729.839183s] DEBUG ThreadId(01) hyper::proto::h1::dispatch: read_head error: connection closed before message completed
[296729.839200s] DEBUG ThreadId(01) hyper::proto::h1::conn: error shutting down IO: client: Transport endpoint is not connected (os error 107)
[296729.839215s] DEBUG ThreadId(01) hyper::client::client: client connection error: error shutting down connection: client: Transport endpoint is not connected (os error 107)
[296729.839236s] INFO ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:http1:profile:rescue{client.addr=10.244.0.1:43986}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[296729.839259s] INFO ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:rescue{client.addr=10.244.0.1:43986}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[296729.839265s] WARN ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}:rescue{client.addr=10.244.0.1:43986}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
[296729.839272s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}:http{v=1.x}: linkerd_app_core::errors::respond: Handling error on HTTP connection status=500 Internal Server Error version=HTTP/1.1 close=true
[296729.839318s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: hyper::proto::h1::io: flushed 113 bytes
[296729.839339s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}:server{port=8080}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[296729.839365s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.1:43986}: linkerd_app_core::serve: Connection closed
[296729.839420s] DEBUG ThreadId(01) hyper::proto::h1::io: parsed 3 headers
[296729.839425s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body is content-length (2 bytes)
[296729.839433s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body completed
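Aside: proxy debug logging like the above can be turned on with the config.linkerd.io/proxy-log-level annotation on the pod template. A sketch, assuming the chart's default Deployment name:
kubectl -n ingress-nginx patch deploy ingress-nginx-defaultbackend --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/proxy-log-level":"debug"}}}}}'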
How can it be reproduced?
- Install linkerd in a vanilla k8s cluster (I am using a new AKS cluster for the repro; a sketch of the install commands follows this list):
- Install nginx-ingress with defaultbackend enabled using helm:
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace --set defaultBackend.enabled=true --set-string defaultBackend.podAnnotations."linkerd\.io/inject"=enabled
- defaultbackend pods will start failing readiness/liveness probes because linkerd-proxy fails the probe requests intermittently
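A sketch of the standard CLI install used for the first step (exact flags may vary by environment):
linkerd install | kubectl apply -f -
linkerd check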
Logs, error output, etc
Logs pasted above.
Output of linkerd check -o short:
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks can be verified
√ cluster networks contains all node podCIDRs
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match
Status check results are √
Environment
- Kubernetes Version: 1.23.4
- Cluster Environment: AKS
- Linkerd version: stable-2.11.4
- OS: Ubuntu 18.04
- Reproduced on AKS, but happens on aks-engine clusters as well
Possible solution
No response
Additional context
Linkerd version
Client version: stable-2.11.4
Server version: stable-2.11.4
Would you like to work on fixing this bug?
no
error shutting down IO: client: Transport endpoint is not connected (os error 107)
The proxy is getting a socket error. When the proxy attempts to read from or write to a socket, the OS indicates that the socket has effectively been lost.
This may be caused by a node-level configuration problem--namely conntrack. https://linkerd.io/2.11/tasks/debugging-502s includes some hints about how to start debugging this sort of thing.
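For example, a quick first check on the node (a sketch, assuming conntrack-tools is installed; non-zero insert_failed/drop counters, or a count close to the max, point at conntrack):
conntrack -S
cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max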
When you see this problem, is there load going through nginx, or is it effectively idle? What about the node in general--is nginx the only pod on the node or is it busy with other network-intensive workloads?
@olix0r the cluster is completely idle and receiving ZERO traffic, and the errors are coming from regular Kubernetes health probes. Hence, I don't think this is related to conntrack. Would you be able to help us if I give you a kubeconfig offline for a repro cluster that I have created in AKS?
@adleong Just curious, were you able to get a repro on your side and find the root cause?
@ericsuhong we haven't had a chance to look into this yet.
I can repro something similar locally (in k3d) with a recent edge:
:; linkerd version
Client version: edge-22.8.3
Server version: edge-22.8.3
:; k logs -n ingress-nginx ingress-nginx-defaultbackend-78dd86b9f5-86pqr
Defaulted container "linkerd-proxy" out of: linkerd-proxy, ingress-nginx-default-backend, linkerd-init (init)
[ 0.002908s] INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[ 0.003634s] INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[ 0.003650s] INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[ 0.003653s] INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[ 0.003656s] INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
[ 0.003658s] INFO ThreadId(01) linkerd2_proxy: Local identity is ingress-nginx-backend.ingress-nginx.serviceaccount.identity.linkerd.cluster.local
[ 0.003661s] INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.003664s] INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[ 0.016380s] INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=ingress-nginx-backend.ingress-nginx.serviceaccount.identity.linkerd.cluster.local
[ 68.610790s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:56680}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 68.610859s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:56680}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 68.610878s] WARN ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:56680}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
[ 108.610150s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:40736}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 108.610206s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:40736}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 108.610219s] WARN ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.42.0.1:40736}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
:; k get ev -n ingress-nginx | grep probe
2m23s Warning Unhealthy pod/ingress-nginx-defaultbackend-78dd86b9f5-86pqr Readiness probe failed: HTTP probe failed with statuscode: 500
83s Warning Unhealthy pod/ingress-nginx-defaultbackend-78dd86b9f5-86pqr Liveness probe failed: HTTP probe failed with statuscode: 500
33s Warning Unhealthy pod/ingress-nginx-defaultbackend-78dd86b9f5-86pqr Liveness probe failed: HTTP probe failed with statuscode: 502
:; kubectl sniff -n ingress-nginx ingress-nginx-defaultbackend-78dd86b9f5-86pqr -o nginx-backend.pcap --privileged --socket /run/k3s/containerd/containerd.sock
Unfortunately this misses proxy<->backend traffic and only covers kubelet<->proxy traffic :(
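A possible alternative for capturing traffic inside the pod (a sketch, using Linkerd's debug sidecar, which bundles tshark; requires recreating the pod with the annotation):
kubectl -n ingress-nginx patch deploy ingress-nginx-defaultbackend --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/enable-debug-sidecar":"true"}}}}}'
kubectl -n ingress-nginx exec -it <pod> -c linkerd-debug -- tshark -i any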
We see two connections firing probe requests ~simultaneously, with only one failing:
[ 138.917264s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/healthz")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Resource { group: "", kind: "default", name: "all-unauthenticated" } }
[ 138.917493s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/healthz")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Resource { group: "", kind: "default", name: "all-unauthenticated" } }
[ 138.917672s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[ 138.917689s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[ 138.917701s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[ 138.917720s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_tls::server: Read bytes from TCP stream buf.len=109
[ 138.917758s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=10.667µs
[ 138.917783s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[ 138.917838s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[ 138.917891s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: Conn::read_head
[ 138.917911s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::io: received 109 bytes
[ 138.917925s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse bytes=109
[ 138.917939s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse Complete(109)
[ 138.917967s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::io: parsed 4 headers
[ 138.917978s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: incoming body is empty
[ 138.917984s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: remote disabling keep-alive
[ 138.918047s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}: linkerd_app_inbound::policy::http: Request authorized server.group= server.kind=default server.name=all-unauthenticated route.group= route.kind=default route.name=probe authz.group= authz.kind=default authz.name=probe client.tls=None(NoClientHello) client.ip=10.42.0.1
[ 138.918155s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 138.918182s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[ 138.918191s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[ 138.918198s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[ 138.918213s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_tls::server: Read bytes from TCP stream buf.len=109
[ 138.918239s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=5.959µs
[ 138.918262s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[ 138.918288s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[ 138.918316s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: Conn::read_head
[ 138.918326s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::io: received 109 bytes
[ 138.918335s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse bytes=109
[ 138.918347s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse Complete(109)
[ 138.918364s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::io: parsed 4 headers
[ 138.918378s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: incoming body is empty
[ 138.918385s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: remote disabling keep-alive
[ 138.918456s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}: linkerd_app_inbound::policy::http: Request authorized server.group= server.kind=default server.name=all-unauthenticated route.group= route.kind=default route.name=probe authz.group= authz.kind=default authz.name=probe client.tls=None(NoClientHello) client.ip=10.42.0.1
[ 138.918548s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 138.918603s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: method=GET uri=http://10.42.0.96:8080/healthz version=HTTP/1.1
[ 138.918622s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: headers={"host": "10.42.0.96:8080", "user-agent": "kube-probe/1.23", "accept": "*/*"}
[ 138.918670s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: method=GET uri=http://10.42.0.96:8080/healthz version=HTTP/1.1
[ 138.918685s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: headers={"host": "10.42.0.96:8080", "user-agent": "kube-probe/1.23", "accept": "*/*"}
[ 138.918740s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: take? ("http", 10.42.0.96:8080): expiration = Some(20s)
[ 138.918762s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: reuse idle connection for ("http", 10.42.0.96:8080)
[ 138.918792s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 138.918826s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: take? ("http", 10.42.0.96:8080): expiration = Some(20s)
[ 138.918841s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: reuse idle connection for ("http", 10.42.0.96:8080)
[ 138.918861s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 138.918893s] TRACE ThreadId(01) encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
[ 138.918980s] DEBUG ThreadId(01) hyper::proto::h1::io: flushed 90 bytes
[ 138.918993s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[ 138.919014s] TRACE ThreadId(01) encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
[ 138.919138s] DEBUG ThreadId(01) hyper::proto::h1::io: flushed 90 bytes
[ 138.919156s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[ 138.919178s] TRACE ThreadId(01) hyper::proto::h1::conn: Conn::read_head
[ 138.919193s] TRACE ThreadId(01) hyper::proto::h1::io: received 0 bytes
[ 138.919200s] TRACE ThreadId(01) hyper::proto::h1::io: parse eof
[ 138.919207s] TRACE ThreadId(01) hyper::proto::h1::conn: State::close_read()
[ 138.919216s] DEBUG ThreadId(01) hyper::proto::h1::conn: parse error (connection closed before message completed) with 0 bytes
[ 138.919224s] DEBUG ThreadId(01) hyper::proto::h1::dispatch: read_head error: connection closed before message completed
[ 138.919233s] TRACE ThreadId(01) hyper::proto::h1::conn: State::close_read()
[ 138.919239s] TRACE ThreadId(01) hyper::proto::h1::conn: State::close_write()
[ 138.919246s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[ 138.919302s] DEBUG ThreadId(01) hyper::proto::h1::conn: error shutting down IO: client: Transport endpoint is not connected (os error 107)
[ 138.919363s] DEBUG ThreadId(01) hyper::client::client: client connection error: error shutting down connection: client: Transport endpoint is not connected (os error 107)
[ 138.919453s] INFO ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:rescue{client.addr=10.42.0.1:57976}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 138.919526s] INFO ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:rescue{client.addr=10.42.0.1:57976}: linkerd_app_core::errors::respond: Request failed error=connection closed before message completed
[ 138.919547s] WARN ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:rescue{client.addr=10.42.0.1:57976}: linkerd_app_inbound::http::server: Unexpected error error=connection closed before message completed
[ 138.919560s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57976 client.id="-" timestamp=2022-09-07T18:25:00.859407585Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}: linkerd_app_core::errors::respond: Handling error on HTTP connection status=500 Internal Server Error version=HTTP/1.1 close=true
[ 138.919596s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}:encode_headers: hyper::proto::h1::role: Server::encode status=500, body=None, req_method=Some(GET)
[ 138.919737s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::io: flushed 113 bytes
[ 138.919757s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: State::close()
[ 138.919767s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[ 138.919829s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: hyper::proto::h1::conn: shut down IO complete
[ 138.919849s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}:server{port=8080}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[ 138.919929s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57976}: linkerd_app_core::serve: Connection closed
[ 138.919985s] TRACE ThreadId(01) hyper::proto::h1::conn: Conn::read_head
[ 138.920006s] TRACE ThreadId(01) hyper::proto::h1::io: received 118 bytes
[ 138.920017s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse bytes=118
[ 138.920032s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse Complete(116)
[ 138.920054s] DEBUG ThreadId(01) hyper::proto::h1::io: parsed 3 headers
[ 138.920061s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body is content-length (2 bytes)
[ 138.920076s] TRACE ThreadId(01) hyper::proto::h1::decode: decode; state=Length(2)
[ 138.920085s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body completed
[ 138.920099s] TRACE ThreadId(01) hyper::proto::h1::conn: maybe_notify; read_from_io blocked
[ 138.920115s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 138.920139s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 138.920176s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: put; add idle connection for ("http", 10.42.0.96:8080)
[ 138.920198s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:57960 client.id="-" timestamp=2022-09-07T18:25:00.859010691Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: pooling idle connection for ("http", 10.42.0.96:8080)
[ 138.920269s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}:encode_headers: hyper::proto::h1::role: Server::encode status=200, body=Some(Known(2)), req_method=Some(GET)
[ 138.920304s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::encode: sized write, len = 2
[ 138.920316s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::io: buffer.queue self.len=116 buf.len=2
[ 138.920490s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::io: flushed 118 bytes
[ 138.920526s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: State::close()
[ 138.920540s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[ 138.920610s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: hyper::proto::h1::conn: shut down IO complete
[ 138.920640s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}:server{port=8080}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[ 138.920745s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:57960}: linkerd_app_core::serve: Connection closed
[ 138.920785s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
This isn't a problem in and of itself, as we see this work just fine during the prior probe:
[ 128.916911s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/healthz")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Resource { group: "", kind: "default", name: "all-unauthenticated" } }
[ 128.916983s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/healthz")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Resource { group: "", kind: "default", name: "all-unauthenticated" } }
[ 128.917045s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[ 128.917049s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[ 128.917053s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[ 128.917058s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_tls::server: Read bytes from TCP stream buf.len=109
[ 128.917070s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=3.597µs
[ 128.917079s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[ 128.917094s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[ 128.917108s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: Conn::read_head
[ 128.917113s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::io: received 109 bytes
[ 128.917117s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse bytes=109
[ 128.917121s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse Complete(109)
[ 128.917129s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::io: parsed 4 headers
[ 128.917133s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: incoming body is empty
[ 128.917135s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: remote disabling keep-alive
[ 128.917161s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}: linkerd_app_inbound::policy::http: Request authorized server.group= server.kind=default server.name=all-unauthenticated route.group= route.kind=default route.name=probe authz.group= authz.kind=default authz.name=probe client.tls=None(NoClientHello) client.ip=10.42.0.1
[ 128.917208s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 128.917223s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[ 128.917227s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[ 128.917230s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[ 128.917235s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_tls::server: Read bytes from TCP stream buf.len=109
[ 128.917247s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=2.438µs
[ 128.917254s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[ 128.917266s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[ 128.917276s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: Conn::read_head
[ 128.917281s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::io: received 109 bytes
[ 128.917284s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse bytes=109
[ 128.917288s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:parse_headers: hyper::proto::h1::role: Request.parse Complete(109)
[ 128.917296s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::io: parsed 4 headers
[ 128.917301s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: incoming body is empty
[ 128.917303s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: remote disabling keep-alive
[ 128.917324s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}: linkerd_app_inbound::policy::http: Request authorized server.group= server.kind=default server.name=all-unauthenticated route.group= route.kind=default route.name=probe authz.group= authz.kind=default authz.name=probe client.tls=None(NoClientHello) client.ip=10.42.0.1
[ 128.917358s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 128.917383s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: method=GET uri=http://10.42.0.96:8080/healthz version=HTTP/1.1
[ 128.917392s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: headers={"host": "10.42.0.96:8080", "user-agent": "kube-probe/1.23", "accept": "*/*"}
[ 128.917415s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: method=GET uri=http://10.42.0.96:8080/healthz version=HTTP/1.1
[ 128.917446s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: linkerd_proxy_http::client: headers={"host": "10.42.0.96:8080", "user-agent": "kube-probe/1.23", "accept": "*/*"}
[ 128.917481s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: take? ("http", 10.42.0.96:8080): expiration = Some(20s)
[ 128.917491s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: reuse idle connection for ("http", 10.42.0.96:8080)
[ 128.917510s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 128.917529s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: take? ("http", 10.42.0.96:8080): expiration = Some(20s)
[ 128.917541s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: reuse idle connection for ("http", 10.42.0.96:8080)
[ 128.917556s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Disabled }
[ 128.917571s] TRACE ThreadId(01) encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
[ 128.917610s] DEBUG ThreadId(01) hyper::proto::h1::io: flushed 90 bytes
[ 128.917617s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[ 128.917626s] TRACE ThreadId(01) encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
[ 128.917658s] DEBUG ThreadId(01) hyper::proto::h1::io: flushed 90 bytes
[ 128.917666s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
[ 128.917776s] TRACE ThreadId(01) hyper::proto::h1::conn: Conn::read_head
[ 128.917792s] TRACE ThreadId(01) hyper::proto::h1::io: received 118 bytes
[ 128.917799s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse bytes=118
[ 128.917808s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse Complete(116)
[ 128.917821s] DEBUG ThreadId(01) hyper::proto::h1::io: parsed 3 headers
[ 128.917824s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body is content-length (2 bytes)
[ 128.917835s] TRACE ThreadId(01) hyper::proto::h1::decode: decode; state=Length(2)
[ 128.917841s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body completed
[ 128.917846s] TRACE ThreadId(01) hyper::proto::h1::conn: maybe_notify; read_from_io blocked
[ 128.917853s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 128.917857s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 128.917874s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: put; add idle connection for ("http", 10.42.0.96:8080)
[ 128.917885s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42512 client.id="-" timestamp=2022-09-07T18:24:50.858150690Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: pooling idle connection for ("http", 10.42.0.96:8080)
[ 128.917917s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}:encode_headers: hyper::proto::h1::role: Server::encode status=200, body=Some(Known(2)), req_method=Some(GET)
[ 128.917930s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::encode: sized write, len = 2
[ 128.917934s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::io: buffer.queue self.len=116 buf.len=2
[ 128.917995s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::io: flushed 118 bytes
[ 128.918000s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: State::close()
[ 128.918003s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[ 128.918026s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: hyper::proto::h1::conn: shut down IO complete
[ 128.918031s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}:server{port=8080}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[ 128.918066s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42512}: linkerd_app_core::serve: Connection closed
[ 128.918079s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 128.918091s] TRACE ThreadId(01) hyper::proto::h1::conn: Conn::read_head
[ 128.918096s] TRACE ThreadId(01) hyper::proto::h1::io: received 118 bytes
[ 128.918100s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse bytes=118
[ 128.918104s] TRACE ThreadId(01) parse_headers: hyper::proto::h1::role: Response.parse Complete(116)
[ 128.918110s] DEBUG ThreadId(01) hyper::proto::h1::io: parsed 3 headers
[ 128.918112s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body is content-length (2 bytes)
[ 128.918115s] TRACE ThreadId(01) hyper::proto::h1::decode: decode; state=Length(2)
[ 128.918118s] DEBUG ThreadId(01) hyper::proto::h1::conn: incoming body completed
[ 128.918121s] TRACE ThreadId(01) hyper::proto::h1::conn: maybe_notify; read_from_io blocked
[ 128.918125s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 128.918128s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[ 128.918138s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: put; add idle connection for ("http", 10.42.0.96:8080)
[ 128.918149s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:http{v=1.x}:http{client.addr=10.42.0.1:42522 client.id="-" timestamp=2022-09-07T18:24:50.858318639Z method="GET" uri=/healthz version=HTTP/1.1 trace_id="" request_bytes="" user_agent="kube-probe/1.23" host="10.42.0.96:8080"}:http1:profile:http1: hyper::client::pool: pooling idle connection for ("http", 10.42.0.96:8080)
[ 128.918169s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}:encode_headers: hyper::proto::h1::role: Server::encode status=200, body=Some(Known(2)), req_method=Some(GET)
[ 128.918178s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::encode: sized write, len = 2
[ 128.918181s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::io: buffer.queue self.len=116 buf.len=2
[ 128.918227s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::io: flushed 118 bytes
[ 128.918234s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: State::close()
[ 128.918237s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[ 128.918259s] TRACE ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: hyper::proto::h1::conn: shut down IO complete
[ 128.918265s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}:server{port=8080}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[ 128.918289s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.1:42522}: linkerd_app_core::serve: Connection closed
[ 128.918299s] TRACE ThreadId(01) hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
This may be related to #9273: if the defaultbackend enforces some kind of idle timeout, it may be closing connections as a healthcheck is sent. It would help if we could locate the source code for the defaultbackend (I tried looking through https://github.com/kubernetes/ingress-nginx/ but I couldn't find anything obvious). The ingress-nginx-default-backend container emits no logs.
https://github.com/jmreid/ingress-nginx/blob/d9f58144eb153dadab79f50038dfbd4b7db4fbe3/images/404-server/server.go appears to be a copy of the server code -- this image is no longer present in the ingress-nginx repo, though.
Its HTTP server has the settings:
return &http.Server{
	Addr:              addr,
	Handler:           handler,
	ReadTimeout:       10 * time.Second,
	ReadHeaderTimeout: 10 * time.Second,
	WriteTimeout:      10 * time.Second,
	IdleTimeout:       10 * time.Second,
}
This could mean that the idle timeout matches the probe interval (the kubelet's default probe period is 10 seconds), giving us a race condition where the proxy sends a probe request at exactly the same moment the server tears down the connection (i.e. #9273).
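As a minimal, scaled-down sketch of that race (hypothetical code, not the proxy or backend source): a server whose IdleTimeout lines up with the client's request period can close a pooled keep-alive connection at the same instant the next request is written.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// Stand-in for the defaultbackend: drop idle keep-alive connections after
	// the same period at which "probes" arrive (scaled down from 10s to 100ms).
	srv := httptest.NewUnstartedServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "ok") }))
	srv.Config.IdleTimeout = 100 * time.Millisecond
	srv.Start()
	defer srv.Close()

	// Stand-in for the proxy's hyper client: pools and reuses connections.
	client := &http.Client{}

	for i := 0; i < 50; i++ {
		resp, err := client.Get(srv.URL + "/healthz")
		if err != nil {
			// The request write raced with the server's idle teardown.
			fmt.Println("probe failed:", err)
			continue
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection returns to the pool
		resp.Body.Close()
		// Fire the next request right around the idle deadline to maximize the
		// chance of landing in the race window.
		time.Sleep(100 * time.Millisecond)
	}
}

Note that Go's http.Transport transparently retries an idempotent request when a reused connection turns out to be dead, so this sketch rarely surfaces the error to the caller; the hyper client in the proxy reports it instead, which matches the "connection closed before message completed" seen above.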
The default idle timeout was reduced in stable-2.11.5 and stable-2.12.1, which should help fix the problem for the default nginx backend. Let us know if you see this problem on recent releases!