--disable-compression makes requests to localhost instead of the Kubernetes API when kubectl is running in-cluster
What happened:
Running kubectl apply --disable-compression -v=10 -f ... from inside a Pod in a Kubernetes cluster makes kubectl try to connect to localhost and fail:
I0603 11:40:52.188778 306 merged_client_builder.go:163] Using in-cluster namespace
I0603 11:40:52.189373 306 round_trippers.go:466] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.29.2 (linux/amd64) kubernetes/4b8e819" 'http://localhost:8080/openapi/v3?timeout=32s'
I0603 11:40:52.189822 306 round_trippers.go:495] HTTP Trace: DNS Lookup for localhost resolved to [{127.0.0.1 } {::1 }]
I0603 11:40:52.189979 306 round_trippers.go:508] HTTP Trace: Dial to tcp:127.0.0.1:8080 failed: dial tcp 127.0.0.1:8080: connect: connection refused
I0603 11:40:52.190078 306 round_trippers.go:508] HTTP Trace: Dial to tcp:[::1]:8080 failed: dial tcp [::1]:8080: connect: cannot assign requested address
I0603 11:40:52.190119 306 round_trippers.go:553] GET http://localhost:8080/openapi/v3?timeout=32s in 0 milliseconds
I0603 11:40:52.190131 306 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0603 11:40:52.190136 306 round_trippers.go:577] Response Headers:
I0603 11:40:52.190192 306 fallback_query_param_verifier.go:55] openapi v3 error...falling back to legacy: Get "http://localhost:8080/openapi/v3?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0603 11:40:52.190245 306 round_trippers.go:466] curl -v -XGET -H "Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf" -H "User-Agent: kubectl/v1.29.2 (linux/amd64) kubernetes/4b8e819" 'http://localhost:8080/openapi/v2?timeout=32s'
I0603 11:40:52.190390 306 round_trippers.go:495] HTTP Trace: DNS Lookup for localhost resolved to [{127.0.0.1 } {::1 }]
I0603 11:40:52.190511 306 round_trippers.go:508] HTTP Trace: Dial to tcp:127.0.0.1:8080 failed: dial tcp 127.0.0.1:8080: connect: connection refused
I0603 11:40:52.190587 306 round_trippers.go:508] HTTP Trace: Dial to tcp:[::1]:8080 failed: dial tcp [::1]:8080: connect: cannot assign requested address
I0603 11:40:52.190611 306 round_trippers.go:553] GET http://localhost:8080/openapi/v2?timeout=32s in 0 milliseconds
I0603 11:40:52.190621 306 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0603 11:40:52.190630 306 round_trippers.go:577] Response Headers:
error: error validating "***.json": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
What you expected to happen:
It should connect to the in-cluster API server address, just as it does when --disable-compression is not used.
How to reproduce it (as minimally and precisely as possible):
Run kubectl apply --disable-compression -f ... with any resource from a container running inside a Kubernetes cluster, with no kubeconfig file present.
Anything else we need to know?:
NA
Environment:
- Kubernetes client and server versions (use kubectl version): Client Version: v1.29.2, Server Version: v1.28.7
- Cloud provider or hardware configuration: AWS
- OS (e.g. cat /etc/os-release): Ubuntu 22.04.4 LTS
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Interesting. Here is my repro:
#!/bin/bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-1605-sa
  namespace: default
EOF
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubectl-1605-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
EOF
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubectl-1605-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: kubectl-1605-sa
roleRef:
  kind: Role
  name: kubectl-1605-role
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: bitnami/kubectl:latest
    command:
    - sleep
    - infinity
  serviceAccountName: kubectl-1605-sa
EOF
kubectl wait --for=condition=Ready pod/foo
kubectl exec -it foo -- /bin/sh -c "kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 -o yaml --dry-run=client > /tmp/my-config.yaml"
echo ""
echo "Applying configmap without --disable-compression..."
kubectl exec -it foo -- /bin/sh -c "kubectl apply -v6 -f /tmp/my-config.yaml"
echo ""
echo "Applying configmap with --disable-compression..."
kubectl exec -it foo -- /bin/sh -c "kubectl apply --disable-compression -v6 -f /tmp/my-config.yaml"
Output:
serviceaccount/kubectl-1605-sa created
role.rbac.authorization.k8s.io/kubectl-1605-role created
rolebinding.rbac.authorization.k8s.io/kubectl-1605-role-binding created
pod/foo created
pod/foo condition met
Applying configmap without --disable-compression...
I0604 14:23:32.967903 25 merged_client_builder.go:121] Using in-cluster configuration
I0604 14:23:32.968372 25 merged_client_builder.go:121] Using in-cluster configuration
I0604 14:23:32.968721 25 merged_client_builder.go:121] Using in-cluster configuration
I0604 14:23:32.968946 25 merged_client_builder.go:163] Using in-cluster namespace
I0604 14:23:32.983886 25 round_trippers.go:553] GET https://10.96.0.1:443/openapi/v3?timeout=32s 200 OK in 14 milliseconds
I0604 14:23:32.988085 25 round_trippers.go:553] GET https://10.96.0.1:443/openapi/v3/api/v1?hash=9839B31FBF66F0F9FFCEA184DF93A42274956F8781571CB54A8075F4C9DDC701B5FF6EB5593FBC1E0EB4B1B217BFB4618DB67FF440B875BC2F2FFC671BBA8B1E&timeout=32s 200 OK in 1 milliseconds
I0604 14:23:33.103112 25 round_trippers.go:553] GET https://10.96.0.1:443/api?timeout=32s 200 OK in 2 milliseconds
I0604 14:23:33.107160 25 round_trippers.go:553] GET https://10.96.0.1:443/apis?timeout=32s 200 OK in 2 milliseconds
I0604 14:23:33.114690 25 merged_client_builder.go:121] Using in-cluster configuration
I0604 14:23:33.119205 25 round_trippers.go:553] GET https://10.96.0.1:443/api/v1/namespaces/default/configmaps/my-config 404 Not Found in 4 milliseconds
I0604 14:23:33.127747 25 round_trippers.go:553] POST https://10.96.0.1:443/api/v1/namespaces/default/configmaps?fieldManager=kubectl-client-side-apply&fieldValidation=Strict 201 Created in 8 milliseconds
configmap/my-config created
I0604 14:23:33.127976 25 apply.go:541] Running apply post-processor function
Applying configmap with --disable-compression...
I0604 14:23:33.516323 37 merged_client_builder.go:163] Using in-cluster namespace
I0604 14:23:33.517759 37 round_trippers.go:553] GET http://localhost:8080/openapi/v3?timeout=32s in 0 milliseconds
I0604 14:23:33.518306 37 round_trippers.go:553] GET http://localhost:8080/openapi/v2?timeout=32s in 0 milliseconds
error: error validating "/tmp/my-config.yaml": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp [::1]:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
command terminated with exit code 1
This is caused by the same underlying problem as https://github.com/kubernetes/kubernetes/issues/93474.
/close
Since this is a duplicate of https://github.com/kubernetes/kubernetes/issues/93474, let's track the issue there. I'm pretty sure I know the reason for this issue, but unfortunately I don't know exactly how to fix it yet. Will update in the other issue.
@mpuckett159: Closing this issue.