kubectl-iexec
Compatibility with Teleport
It would be great to see the iexec plugin working with Teleport.
Whenever I try to connect through a Teleport proxy, the exec never proceeds:
❯ kubectl iexec --insecure-skip-tls-verify --log-level trace postfix /bin/bash
DEBU[0000] iexec config values... Naked=false Namespace=default Vim Mode=false containerFilter= podFilter=postfix remote command="[/bin/bash]"
DEBU[0000] total pods discovered... namespace=default pods=43
INFO[0000] Get all pods for podFilter... SearchFilter=postfix
INFO[0000] Found pod... PodName=postfix-6b89f86fb-4b277 index=23
INFO[0000] Exec to pod... container=postfix namespace=default pod=postfix-6b89f86fb-4b277
DEBU[0000] Request URL="https://teleport.<REDACTED>:3026/api/v1/namespaces/default/pods/postfix-6b89f86fb-4b277/exec?command=%2Fbin%2Fbash&container=postfix&stderr=true&stdin=true&stdout=true&tty=true"
Thanks for raising this @tchellomello, this is an interesting issue. I don't have Teleport to test this out. Is there any specific version of the kubectl client that supports exec calls with Teleport? I'm using the exec subcommand in the client-go restclient library, but maybe it needs additional parameters. Can you get me the verbose logs from a successful exec call so I can inspect the URL?
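For reference, the exec path here is roughly the standard client-go pattern; this is a sketch, not the exact iexec source (imports shown for context):

```go
// Sketch only: the standard client-go exec path (roughly what iexec does today).
import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

func execSketch(restCfg *rest.Config, pod corev1.Pod, container corev1.Container, cmd []string) error {
	client, err := kubernetes.NewForConfig(restCfg)
	if err != nil {
		return err
	}

	// Build the exec subresource request (POST .../pods/<name>/exec?...).
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(pod.Name).
		Namespace(pod.Namespace).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container.Name,
			Command:   cmd,
			Stdin:     true,
			Stdout:    true,
			Stderr:    true,
			TTY:       true,
		}, scheme.ParameterCodec)

	// Upgrade the connection to SPDY and stream stdin/stdout/stderr.
	exec, err := remotecommand.NewSPDYExecutor(restCfg, "POST", req.URL())
	if err != nil {
		return err
	}
	return exec.Stream(remotecommand.StreamOptions{
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
		Tty:    true,
	})
}
```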
Hello @gabeduke, for a reproducer you can install a Teleport instance pretty easily by following the doc here -> https://goteleport.com/docs/deploy-a-cluster/helm-deployments/custom/
I did a bit of troubleshooting and was on exactly this path, comparing the kubectl exec call with the iexec call.
I added a few debug statements to iexec, and this is what I get back:
❯ ./kubectl-iexec --log-level=trace postfix
DEBU[0000] iexec config values... Naked=false Namespace=default Vim Mode=false containerFilter= podFilter=postfix remote command="[/bin/sh]"
DEBU[0000] total pods discovered... namespace=default pods=43
INFO[0000] Get all pods for podFilter... SearchFilter=postfix
INFO[0000] Found pod... PodName=postfix-6b89f86fb-4b277 index=23
INFO[0000] Exec to pod... container=postfix namespace=default pod=postfix-6b89f86fb-4b277
&rest.Config{Host:"https://teleport.REDACTED:3026", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:api.ExecConfig{Command: "/usr/bin/tsh", Args: []string{"--- REDACTED ---"}, Env: []ExecEnvVar(nil), APIVersion: "client.authentication.k8s.io/v1beta1", ProvideClusterInfo: false, Config: runtime.Object(nil), StdinUnavailable: false}, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"", KeyFile:"", CAFile:"", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8{0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x42, 0x45, 0x47, 0x49, 0x4e, 0x20, 0x43, 0x45, [....x]
DEBU[0000] Request URL="https://teleport.REDACTED:3026/api/v1/namespaces/default/pods/postfix-6b89f86fb-4b277/exec?command=%2Fbin%2Fsh&container=postfix&stderr=true&stdin=true&stdout=true&tty=true"
&{0xc000136780 <nil> 0xc000554ee0 0x37a8c18 0 POST /api/v1 map[command:[/bin/sh] container:[postfix] stderr:[true] stdin:[true] stdout:[true] tty:[true]] map[Accept:[application/json, */*]] default true pods postfix-6b89f86fb-4b277 exec <nil> <nil> 0xc000051e20}
&{0xc000564bc0 0xc000556ff0 POST 0xc000622480 [v4.channel.k8s.io v3.channel.k8s.io v2.channel.k8s.io channel.k8s.io]}
Looking at the Teleport server, I can see the stream channel is opened, but it looks like iexec times out or never attaches to it.
Here is what I get via regular exec:
❯ kubectl -v=8 exec -it postfix-6b89f86fb-4b277 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
I0623 00:22:16.590616 89515 loader.go:373] Config loaded from file: /home/mmello/.kube/config
I0623 00:22:16.597240 89515 round_trippers.go:463] GET https://teleport.REDACTED:3026/api/v1/namespaces/default/pods/postfix-6b89f86fb-4b277
I0623 00:22:16.597253 89515 round_trippers.go:469] Request Headers:
I0623 00:22:16.597261 89515 round_trippers.go:473] Accept: application/json, */*
I0623 00:22:16.597267 89515 round_trippers.go:473] User-Agent: kubectl/v1.27.1 (linux/amd64) kubernetes/4c94112
I0623 00:22:16.753748 89515 round_trippers.go:574] Response Status: 200 OK in 156 milliseconds
I0623 00:22:16.753812 89515 round_trippers.go:577] Response Headers:
I0623 00:22:16.753858 89515 round_trippers.go:580] Audit-Id: 94e72927-d28b-4da8-92fa-6c1f9723d47e
I0623 00:22:16.753899 89515 round_trippers.go:580] Content-Type: application/json
I0623 00:22:16.753934 89515 round_trippers.go:580] Referrer-Policy: strict-origin
I0623 00:22:16.753969 89515 round_trippers.go:580] Referrer-Policy: strict-origin
I0623 00:22:16.754004 89515 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 14ad364c-88e8-4826-a41e-142ca8ff33ab
I0623 00:22:16.754047 89515 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 66df03e5-4eff-476a-9a55-924b82d61f32
I0623 00:22:16.754090 89515 round_trippers.go:580] Cache-Control: no-cache, no-store, must-revalidate
I0623 00:22:16.754132 89515 round_trippers.go:580] Cache-Control: no-cache, no-store, must-revalidate
I0623 00:22:16.754173 89515 round_trippers.go:580] Cache-Control: no-cache, private
I0623 00:22:16.754216 89515 round_trippers.go:580] Date: Fri, 23 Jun 2023 04:22:16 GMT
I0623 00:22:16.754277 89515 round_trippers.go:580] X-Content-Type-Options: nosniff
I0623 00:22:16.754319 89515 round_trippers.go:580] X-Content-Type-Options: nosniff
I0623 00:22:16.754344 89515 round_trippers.go:580] X-Frame-Options: SAMEORIGIN
I0623 00:22:16.754365 89515 round_trippers.go:580] X-Frame-Options: SAMEORIGIN
I0623 00:22:16.754407 89515 round_trippers.go:580] Expires: 0
I0623 00:22:16.754440 89515 round_trippers.go:580] Expires: 0
I0623 00:22:16.754463 89515 round_trippers.go:580] Pragma: no-cache
I0623 00:22:16.754484 89515 round_trippers.go:580] Pragma: no-cache
I0623 00:22:16.754518 89515 round_trippers.go:580] Strict-Transport-Security: max-age=31536000; includeSubDomains
I0623 00:22:16.754557 89515 round_trippers.go:580] Strict-Transport-Security: max-age=31536000; includeSubDomains
I0623 00:22:16.754598 89515 round_trippers.go:580] X-Xss-Protection: 1; mode=block
I0623 00:22:16.754638 89515 round_trippers.go:580] X-Xss-Protection: 1; mode=block
I0623 00:22:16.758386 89515 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"postfix-6b89f86fb-4b277","generateName":"postfix-6b89f86fb-","namespace":"default","uid":"511c6bdf-cd10-4ace-bda4-f2b0482923aa","resourceVersion":"357971528","creationTimestamp":"2023-04-15T02:48:00Z","labels":{"app":"postfix","pod-template-hash":"6b89f86fb"},"annotations":{"cni.projectcalico.org/containerID":"a2363f8e0fa46258eda738be4fa473b37268f229deec8e7fba92b19f7c8cf3f3","cni.projectcalico.org/podIP":"10.223.42.204/32","cni.projectcalico.org/podIPs":"10.223.42.204/32","kubectl.kubernetes.io/restartedAt":"2023-03-13T22:10:07-04:00","toca-infra/kyverno-ignore-image-tag":"true","vault.security.banzaicloud.io/vault-agent":"false"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"postfix-6b89f86fb","uid":"e73f3a57-f578-44ee-9b97-3d3f4109d1a8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-04-15T02:47:55Z","fieldsType":"FieldsV1","fiel [truncated 7409 chars]
Defaulted container "postfix" out of: postfix, copy-vault-env (init)
I0623 00:22:16.766180 89515 podcmd.go:88] Defaulting container name to postfix
I0623 00:22:16.767074 89515 round_trippers.go:463] POST https://teleport.REDACTED:3026/api/v1/namespaces/default/pods/postfix-6b89f86fb-4b277/exec?command=%2Fbin%2Fbash&container=postfix&stdin=true&stdout=true&tty=true
I0623 00:22:16.767104 89515 round_trippers.go:469] Request Headers:
I0623 00:22:16.767129 89515 round_trippers.go:473] User-Agent: kubectl/v1.27.1 (linux/amd64) kubernetes/4c94112
I0623 00:22:16.767150 89515 round_trippers.go:473] X-Stream-Protocol-Version: v4.channel.k8s.io
I0623 00:22:16.767173 89515 round_trippers.go:473] X-Stream-Protocol-Version: v3.channel.k8s.io
I0623 00:22:16.767194 89515 round_trippers.go:473] X-Stream-Protocol-Version: v2.channel.k8s.io
I0623 00:22:16.767215 89515 round_trippers.go:473] X-Stream-Protocol-Version: channel.k8s.io
I0623 00:22:16.827032 89515 round_trippers.go:574] Response Status: 101 Switching Protocols in 59 milliseconds
I0623 00:22:16.827080 89515 round_trippers.go:577] Response Headers:
I0623 00:22:16.827103 89515 round_trippers.go:580] Connection: Upgrade
I0623 00:22:16.827122 89515 round_trippers.go:580] Expires: 0
I0623 00:22:16.827142 89515 round_trippers.go:580] Pragma: no-cache
I0623 00:22:16.827162 89515 round_trippers.go:580] Referrer-Policy: strict-origin
I0623 00:22:16.827181 89515 round_trippers.go:580] Strict-Transport-Security: max-age=31536000; includeSubDomains
I0623 00:22:16.827200 89515 round_trippers.go:580] X-Stream-Protocol-Version: v4.channel.k8s.io
I0623 00:22:16.827220 89515 round_trippers.go:580] Cache-Control: no-cache, no-store, must-revalidate
I0623 00:22:16.827240 89515 round_trippers.go:580] Upgrade: SPDY/3.1
I0623 00:22:16.827259 89515 round_trippers.go:580] X-Content-Type-Options: nosniff
I0623 00:22:16.827278 89515 round_trippers.go:580] X-Frame-Options: SAMEORIGIN
I0623 00:22:16.827297 89515 round_trippers.go:580] X-Xss-Protection: 1; mode=block
I'm planning to attach dlv at https://github.com/gabeduke/kubectl-iexec/blob/main/pkg/iexec/iexec.go#L176 to see if I can get more information.
I'll share any findings I get on my end.
Thanks!
After a bit of debugging, here is what I found so far:
❯ go build -gcflags="all=-N -l" kubectl-iexec.go
❯ dlv exec ./kubectl-iexec -- --log-level=trace postfix
Type 'help' for list of commands.
(dlv) break exec
Breakpoint 1 set at 0x2465592 for github.com/gabeduke/kubectl-iexec/pkg/iexec.exec() ./pkg/iexec/iexec.go:151
(dlv) c
DEBU[0000] iexec config values... Naked=false Namespace=default Vim Mode=false containerFilter= podFilter=postfix remote command="[/bin/sh]"
DEBU[0000] total pods discovered... namespace=default pods=43
INFO[0000] Get all pods for podFilter... SearchFilter=postfix
INFO[0000] Found pod... PodName=postfix-6b89f86fb-4b277 index=23
INFO[0000] Exec to pod... container=postfix namespace=default pod=postfix-6b89f86fb-4b277
> github.com/gabeduke/kubectl-iexec/pkg/iexec.exec() ./pkg/iexec/iexec.go:151 (hits goroutine(1):1 total:1) (PC: 0x2465592)
146: return err
147: }
148: return nil
149: }
150:
=> 151: func exec(restCfg *rest.Config, pod corev1.Pod, container corev1.Container, cmd []string) error {
152: client, err := kubernetes.NewForConfig(restCfg)
153: if err != nil {
154: return errors.Wrap(err, "unable to get kubernetes client config")
155: }
156:
(dlv) args
restCfg = ("*k8s.io/client-go/rest.Config")(0xc0003eed80)
pod = k8s.io/api/core/v1.Pod {TypeMeta: (*"k8s.io/apimachinery/pkg/apis/meta/v1.TypeMeta")(0xc000157d98), ObjectMeta: (*"k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta")(0xc000157db8), Spec: (*"k8s.io/api/core/v1.PodSpec")(0xc000157eb0),...+1 more}
container = k8s.io/api/core/v1.Container {Name: "postfix", Image: "docker.io/REDACTED", Command: []string len: 1, cap: 1, [...],...+19 more}
cmd = []string len: 1, cap: 1, [...]
~r0 = error nil
### --> container information seems accurate
(dlv) print container
k8s.io/api/core/v1.Container {
Name: "postfix",
Image: "docker.io/xxxxxx",
Command: []string len: 1, cap: 1, [
"/vault/vault-env",
#### --> same for the pod object
(dlv) print pod
k8s.io/api/core/v1.Pod {
TypeMeta: k8s.io/apimachinery/pkg/apis/meta/v1.TypeMeta {Kind: "", APIVersion: ""},
ObjectMeta: k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta {
Name: "postfix-6b89f86fb-4b277",
GenerateName: "postfix-6b89f86fb-",
Namespace: "default",
### -> the restCfg object
(dlv) print restCfg
*k8s.io/client-go/rest.Config {
Host: "https://teleport.REDACTED:3026",
APIPath: "",
ContentConfig: k8s.io/client-go/rest.ContentConfig {
AcceptContentTypes: "",
ContentType: "",
GroupVersion: *k8s.io/apimachinery/pkg/runtime/schema.GroupVersion nil,
NegotiatedSerializer: k8s.io/apimachinery/pkg/runtime.NegotiatedSerializer nil,},
Username: "",
Password: "",
BearerToken: "",
BearerTokenFile: "",
Impersonate: k8s.io/client-go/rest.ImpersonationConfig {
UserName: "",
Groups: []string len: 0, cap: 0, nil,
Extra: map[string][]string nil,},
AuthProvider: *k8s.io/client-go/tools/clientcmd/api.AuthProviderConfig nil,
AuthConfigPersister: k8s.io/client-go/rest.AuthProviderConfigPersister nil,
ExecProvider: *k8s.io/client-go/tools/clientcmd/api.ExecConfig {
Command: "/usr/bin/tsh",
Args: []string len: 5, cap: 8, [
"kube",
"credentials",
"--kube-cluster=teleport.REDACTED",
"--teleport-cluster=teleport.REDACTED",
"--proxy=teleport.REDACTED:443",
],
Env: []k8s.io/client-go/tools/clientcmd/api.ExecEnvVar len: 0, cap: 0, nil,
APIVersion: "client.authentication.k8s.io/v1beta1",
InstallHint: "",
ProvideClusterInfo: false,
Config: k8s.io/apimachinery/pkg/runtime.Object nil,
InteractiveMode: "IfAvailable",
StdinUnavailable: false,
StdinUnavailableMessage: "",},
TLSClientConfig: k8s.io/client-go/rest.TLSClientConfig {
Insecure: false,
ServerName: "",
CertFile: "",
KeyFile: "",
CAFile: "",
CertData: []uint8 len: 0, cap: 0, nil,
KeyData: []uint8 len: 0, cap: 0, nil,
CAData: []uint8 len: 1322, cap: 1323, [45,45,45,45,45,66,69,71,73,78,32,67,69,82,84,73,70,73,67,65,84,69,45,45,45,45,45,10,77,73,73,68,111,106,67,67,65,111,113,103,65,119,73,66,65,103,73,81,76,57,68,67,119,72,102,119,85,110,88,78,99,121,122,85,...+1258 more],
NextProtos: []string len: 0, cap: 0, nil,},
UserAgent: "",
DisableCompression: false,
Transport: net/http.RoundTripper nil,
WrapTransport: nil,
QPS: 0,
Burst: 0,
RateLimiter: k8s.io/client-go/util/flowcontrol.RateLimiter nil,
WarningHandler: k8s.io/client-go/rest.WarningHandler nil,
Timeout: 0,
Dial: nil,
Proxy: nil,}
So moving on the code and inspecting the req object we have:
> github.com/gabeduke/kubectl-iexec/pkg/iexec.exec() ./pkg/iexec/iexec.go:172 (PC: 0x2465984)
167: Stdout: true,
168: Stderr: true,
169: TTY: true,
170: }, scheme.ParameterCodec)
171:
=> 172: log.WithFields(log.Fields{
173: "URL": req.URL(),
174: }).Debug("Request")
175:
176: exec, err := remotecommand.NewSPDYExecutor(restCfg, "POST", req.URL())
177: if err != nil {
(dlv) print req
*k8s.io/client-go/rest.Request {
c: *k8s.io/client-go/rest.RESTClient {
base: *(*"net/url.URL")(0xc000617440),
versionedAPIPath: "/api/v1",
content: (*"k8s.io/client-go/rest.ClientContentConfig")(0xc0003a6e78),
createBackoffMgr: k8s.io/client-go/rest.readExpBackoffConfig,
rateLimiter: k8s.io/client-go/util/flowcontrol.RateLimiter(*k8s.io/client-go/util/flowcontrol.tokenBucketRateLimiter) ...,
warningHandler: k8s.io/client-go/rest.WarningHandler nil,
Client: *(*"net/http.Client")(0xc00015b560),},
warningHandler: k8s.io/client-go/rest.WarningHandler nil,
rateLimiter: k8s.io/client-go/util/flowcontrol.RateLimiter(*k8s.io/client-go/util/flowcontrol.tokenBucketRateLimiter) *{
limiter: *(*"golang.org/x/time/rate.Limiter")(0xc000112f00),
clock: k8s.io/client-go/util/flowcontrol.Clock(k8s.io/client-go/util/flowcontrol.realClock) *(*"k8s.io/client-go/util/flowcontrol.Clock")(0xc0000ea588),
qps: 5,},
backoff: k8s.io/client-go/rest.BackoffManager(*k8s.io/client-go/rest.NoBackoff) *{},
timeout: 0,
verb: "POST",
pathPrefix: "/api/v1",
subpath: "",
params: net/url.Values [
"stdin": ["true"],
"stdout": ["true"],
"stderr": ["true"],
"tty": ["true"],
"container": ["postfix"],
"command": ["/bin/sh"],
],
headers: net/http.Header [
"Accept": [
"application/json, */*",
],
],
namespace: "default",
namespaceSet: true,
resource: "pods",
resourceName: "postfix-6b89f86fb-4b277",
subresource: "exec",
err: error nil,
body: io.Reader nil,
retry: k8s.io/client-go/rest.WithRetry(*k8s.io/client-go/rest.withRetry) *{maxRetries: 10, attempts: 0},}
At the exec var, we have:
> github.com/gabeduke/kubectl-iexec/pkg/iexec.exec() ./pkg/iexec/iexec.go:177 (PC: 0x2465c07)
172: log.WithFields(log.Fields{
173: "URL": req.URL(),
174: }).Debug("Request")
175:
176: exec, err := remotecommand.NewSPDYExecutor(restCfg, "POST", req.URL())
=> 177: if err != nil {
178: return errors.Wrap(err, "unable to execute remote command")
179: }
180:
181: fd := int(os.Stdin.Fd())
182:
(dlv) print exec.url.Path
"/api/v1/namespaces/default/pods/postfix-6b89f86fb-4b277/exec"
(dlv) print exec.url.RawPath
""
(dlv) print exec.url.RawQuery
"command=%2Fbin%2Fsh&container=postfix&stderr=true&stdin=true&std...+17 more"
Then I saw this:
> github.com/gabeduke/kubectl-iexec/pkg/iexec.exec() ./pkg/iexec/iexec.go:185 (PC: 0x2465d15)
180:
181: fd := int(os.Stdin.Fd())
182:
183: // Put the terminal into raw mode to prevent it echoing characters twice.
184: oldState, err := term.MakeRaw(fd)
=> 185: if err != nil {
186: return errors.Wrap(err, "unable to init terminal")
187: }
188:
189: termWidth, termHeight, _ := term.GetSize(fd)
190: termSize := remotecommand.TerminalSize{Width: uint16(termWidth), Height: uint16(termHeight)}
(dlv) print oldState
*golang.org/x/term.State nil
(dlv) print err
error(syscall.Errno) golang.org/x/sys/unix.ENOTTY (25)
> github.com/gabeduke/kubectl-iexec/pkg/iexec.(*Iexec).Do() ./pkg/iexec/iexec.go:145 (PC: 0x246550b)
140: "container": container.Name,
141: "namespace": r.config.Namespace,
142: }).Info("Exec to pod...")
143:
144: err = exec(r.restConfig, pod, container, r.config.RemoteCmd)
=> 145: if err != nil {
146: return err
147: }
148: return nil
149: }
150:
(dlv) print err
error(*github.com/pkg/errors.withStack) *{
error: error(*github.com/pkg/errors.withMessage) *{
cause: error(syscall.Errno) *(*error)(0xc00012a2a0),
msg: "unable to init terminal",},
stack: *github.com/pkg/errors.stack len: 11, cap: 32, [38165805,38163707,40890360,40886766,38437233,38441243,38438424,40890921,40891031,12678739,12885153],}
Then it finally reaches the log.Fatal(err) and terminates.
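Note that the error from term.MakeRaw is ENOTTY, which may simply mean stdin is not a real terminal in that run (for example under dlv exec). A guard like the following sketch (assuming golang.org/x/term; this is not the current iexec code) would make that distinction visible instead of aborting:

```go
// Sketch only: skip raw mode when stdin is not a TTY, so ENOTTY under a
// debugger or a pipe does not abort the exec outright.
fd := int(os.Stdin.Fd())

var oldState *term.State
if term.IsTerminal(fd) {
	var err error
	// Put the terminal into raw mode to prevent it echoing characters twice.
	oldState, err = term.MakeRaw(fd)
	if err != nil {
		return errors.Wrap(err, "unable to init terminal")
	}
	defer term.Restore(fd, oldState)
}
```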
It looks like it is failing when it attempts to spawn a new shell. I'm assuming there is some sort of callback that is being lost. I spent some time trying to set up a teleport cluster but wasn't able to get it working locally. I'll give it another shot when I get some free time.
@gabeduke interesting you say that, because on the Teleport side I do see the timeout.
teleport-proxy-7f8c9c4cc6-xc4jj teleport 2023-07-18T16:01:37Z DEBU [PROXY:PRO] Exec request failed error:[
teleport-proxy-7f8c9c4cc6-xc4jj teleport ERROR REPORT:
teleport-proxy-7f8c9c4cc6-xc4jj teleport Original Error: trace.aggregate timed out waiting for client to create streams
teleport-proxy-7f8c9c4cc6-xc4jj teleport Stack Trace:
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/kube/proxy/remotecommand.go:179 github.com/gravitational/teleport/lib/kube/proxy.createSPDYStreams
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/kube/proxy/remotecommand.go:105 github.com/gravitational/teleport/lib/kube/proxy.createRemoteCommandProxy
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/kube/proxy/forwarder.go:1768 github.com/gravitational/teleport/lib/kube/proxy.(*Forwarder).exec
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/kube/proxy/forwarder.go:684 github.com/gravitational/teleport/lib/kube/proxy.(*Forwarder).withAuth.func1
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/httplib/httplib.go:117 github.com/gravitational/teleport/lib/httplib.MakeHandlerWithErrorWriter.func1
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/julienschmidt/[email protected]/router.go:399 github.com/julienschmidt/httprouter.(*Router).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:213 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Handler).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/kube/proxy/forwarder.go:412 github.com/gravitational/teleport/lib/kube/proxy.(*Forwarder).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/auth/middleware.go:696 github.com/gravitational/teleport/lib/auth.(*Middleware).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/[email protected]/ratelimit/tokenlimiter.go:118 github.com/gravitational/oxy/ratelimit.(*TokenLimiter).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/[email protected]/connlimit/connlimit.go:75 github.com/gravitational/oxy/connlimit.(*ConnLimiter).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport github.com/gravitational/teleport/lib/httplib/httplib.go:94 github.com/gravitational/teleport/lib/httplib.MakeTracingHandler.func1
teleport-proxy-7f8c9c4cc6-xc4jj teleport net/http/server.go:2122 net/http.HandlerFunc.ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:213 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Handler).ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport net/http/server.go:2936 net/http.serverHandler.ServeHTTP
teleport-proxy-7f8c9c4cc6-xc4jj teleport net/http/server.go:1995 net/http.(*conn).serve
teleport-proxy-7f8c9c4cc6-xc4jj teleport runtime/asm_amd64.s:1598 runtime.goexit
teleport-proxy-7f8c9c4cc6-xc4jj teleport User Message: timed out waiting for client to create streams] pid:7.1 proxy/forwarder.go:1725
@gabeduke I work with @tchellomello and we were diving deeper into this issue.
In an attempt to understand the difference between iexec and kubectl exec, I put a spew.Dump on the executor in both iexec and kubectl exec. The output was over 1,000 lines, and after going through both dumps there were very few differences worth noting.
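The dump was taken right after the SPDY executor is created, roughly like this (placement approximate, assuming github.com/davecgh/go-spew/spew):

```go
// Approximate placement of the dump; the same was done in a local kubectl build.
exec, err := remotecommand.NewSPDYExecutor(restCfg, "POST", req.URL())
if err != nil {
	return err
}
spew.Dump(exec) // dumps the executor, its transport, and round trippers for comparison
```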
The kubectl version uses a genericclioptions.CommandHeaderRoundTripper, whose Delegate attribute points at the same (*spdy.SpdyRoundTripper) that iexec uses.
The kubectl exec that works adds a couple of additional headers:
Headers: (map[string]string) (len=2) {
(string) (len=15) "Kubectl-Session": (string) (len=36) "fb9b0aa2-83c3-44ba-a154-50e17ddf516c",
(string) (len=15) "Kubectl-Command": (string) (len=12) "kubectl exec"
}
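If those headers turn out to matter, iexec could presumably mimic kubectl by wrapping the transport on the rest.Config. A rough sketch (the wrapper type and function name are mine, not existing iexec code; the header names are copied from the dump above):

```go
// Sketch only: mimic kubectl's CommandHeaderRoundTripper by wrapping the
// transport on the rest.Config before building clients/executors.
import (
	"net/http"

	"github.com/google/uuid" // assumption: any UUID generator would do
	"k8s.io/client-go/rest"
)

type headerAddingRoundTripper struct {
	delegate http.RoundTripper
	headers  map[string]string
}

func (rt *headerAddingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Add the extra headers on every request, then delegate to the real transport.
	for k, v := range rt.headers {
		req.Header.Set(k, v)
	}
	return rt.delegate.RoundTrip(req)
}

func addKubectlStyleHeaders(restCfg *rest.Config) {
	session := uuid.New().String()
	restCfg.WrapTransport = func(next http.RoundTripper) http.RoundTripper {
		return &headerAddingRoundTripper{
			delegate: next,
			headers: map[string]string{
				"Kubectl-Command": "kubectl iexec",
				"Kubectl-Session": session,
			},
		}
	}
}
```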
The iexec run that is currently hitting the timeout also differs in the following redirect option. I'm not sure this is the issue, but I wanted to note it.
base: (*spdy.SpdyRoundTripper)(0xc000780a40)({
followRedirects: (bool) true,
The cachedCreds in the two dumps were slightly different, but I think that is expected with Teleport: both commands authenticate through the same kubeconfig used to build the restConfig, but I believe a per-session certificate is generated for the communication.
Another difference was the agent string:
exec
transport: (*transport.userAgentRoundTripper)(0xc00076a860)({
agent: (string) (len=47) "kubectl/v0.0.0 (linux/amd64) kubernetes/$Format",
I did not see a user agent set in the iexec dump.
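If we want to rule that out, the user agent should be settable on the rest.Config before building the clients; a one-line sketch (rest.DefaultKubernetesUserAgent is the stock client-go helper):

```go
// Sketch: set an explicit user agent on the rest.Config before building clients;
// a plugin-specific string would work as well.
restCfg.UserAgent = rest.DefaultKubernetesUserAgent()
```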
The only other noticeable difference was in the transport conns. The exec used a single connection, defined like this:
exec
conns: (map[*connrotation.closableConn]struct {}) (len=1) {
(*connrotation.closableConn)(0xc000012060)({
onClose: (func()) 0x1dd9240,
Conn: (*net.TCPConn)(0xc00025e000)({
conn: (net.conn) {
fd: (*net.netFD)(0xc00074a200)({
pfd: (poll.FD) {
fdmu: (poll.fdMutex) {
state: (uint64) 10,
rsema: (uint32) 0,
wsema: (uint32) 0
},
Sysfd: (int) 3,
pd: (poll.pollDesc) {
runtimeCtx: (uintptr) 0x7fbabcf4f578
},
iovecs: (*[]syscall.Iovec)(<nil>),
csema: (uint32) 0,
isBlocking: (uint32) 0,
IsStream: (bool) true,
ZeroReadIsEOF: (bool) true,
isFile: (bool) false
},
family: (int) 2,
sotype: (int) 1,
isConnected: (bool) true,
net: (string) (len=3) "tcp",
laddr: (*net.TCPAddr)(0xc000b72000)(192.168.1.25:38062),
raddr: (*net.TCPAddr)(0xc000b72030)(13.248.xxx.xxx:443)
})
}
})
}): (struct {}) {
iexec
conns: (map[*connrotation.closableConn]struct {}) (len=1) {
(*connrotation.closableConn)(0xc000782018)({
onClose: (func()) 0x18204a0,
Conn: (*net.TCPConn)(0xc0000d6000)({
conn: (net.conn) {
fd: (*net.netFD)(0xc000185c80)({
pfd: (poll.FD) {
fdmu: (poll.fdMutex) {
state: (uint64) 10,
rsema: (uint32) 0,
wsema: (uint32) 0
},
Sysfd: (int) 6,
pd: (poll.pollDesc) {
runtimeCtx: (uintptr) 0x7fc5b87b7658
},
iovecs: (*[]syscall.Iovec)(<nil>),
csema: (uint32) 0,
isBlocking: (uint32) 0,
IsStream: (bool) true,
ZeroReadIsEOF: (bool) true,
isFile: (bool) false
},
family: (int) 2,
sotype: (int) 1,
isConnected: (bool) true,
net: (string) (len=3) "tcp",
laddr: (*net.TCPAddr)(0xc0000d4000)(192.168.1.25:33356),
raddr: (*net.TCPAddr)(0xc0000d4030)(13.248.xxx.xxx:443)
})
}
})
}): (struct {}) {
}
}
})
}),
The Sysfd is set to 3 in kubectl exec and 6 in the iexec.
I walked the communication down the following path, starting at https://github.com/gabeduke/kubectl-iexec/blob/main/pkg/iexec/iexec.go#L202 and into github.com/gabeduke/kubectl-iexec/vendor/k8s.io/client-go/tools/remotecommand/remotecommand.go line 141:
return streamer.stream(conn)
github.com/gabeduke/kubectl-iexec/vendor/k8s.io/client-go/tools/remotecommand/v4.go L 55
if err := p.createStreams(conn); err != nil {
Then the createStreams calls happen:
return p.streamProtocolV3.createStreams(conn)
This calls createStreams on v3, which falls through to v2 (github.com/gabeduke/kubectl-iexec/vendor/k8s.io/client-go/tools/remotecommand/v3.go):
if err := p.streamProtocolV2.createStreams(conn); err != nil {
github.com/gabeduke/kubectl-iexec/vendor/k8s.io/client-go/tools/remotecommand/v2.go L 57
p.errorStream, err = conn.CreateStream(headers)
github.com/gabeduke/kubectl-iexec/vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy/connection.go 137
stream, err := c.conn.CreateStream(headers, nil, false)
# headers
net/http.Header ["Streamtype": ["error"], ]
github.com/gabeduke/kubectl-iexec/vendor/github.com/moby/spdystream/connection.go
return s.framer.WriteFrame(streamFrame)
This now goes down a few more layers into write and io.Writer.
The last place I could track this was here: go1.18/src/internal/poll/fd_mutex.go
if !fd.fdmu.rwlock(false) {
return errClosing(fd.isFile)
}
This fails and then returns. The error observed here is EOF.