
Error: Container "nsenter" is not available

codebling opened this issue 2 years ago · 3 comments

When I try to run the command, I get:

spawning "nsenter-1iiih5" on "node"
Error from server (BadRequest): container "nsenter" in pod "nsenter-1iiih5" is not available
pod "nsenter-1iiih5" deleted

Here's a shell log:

❯ sh -x /usr/bin/kubectl-node_shell node
+ set -e
+ kubectl=kubectl
+ version=1.7.0
+ generator=
+ node=
+ nodefaultctx=0
+ nodefaultns=0
+ container_cpu=100m
+ container_memory=256Mi
+ labels=
+ '[' -t 0 ']'
+ tty=true
+ '[' 1 -gt 0 ']'
+ key=node
+ case $key in
+ '[' -z '' ']'
+ node=node
+ shift
+ '[' 0 -gt 0 ']'
+ '[' -z node ']'
+ '[' 0 = 1 ']'
++ kubectl config current-context
+ kubectl='kubectl --context=context'
+ '[' 0 = 1 ']'
++ kubectl --context=context config view --minify --output 'jsonpath={.contexts..namespace}'
+ kubectl='kubectl --context=context --namespace=namespace'
++ kubectl --context=context --namespace=namespace get node node -o 'jsonpath={.metadata.labels.kubernetes\.io/os}'
+ os=linux
+ '[' linux = windows ']'
+ image=docker.io/library/alpine
+ name=nsenter
++ env LC_ALL=C tr -dc a-z0-9
++ head -c 6
+ pod=nsenter-iv2at7
+ cmd_start='"nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid"'
+ cmd_arg_prefix=', "--"'
+ cmd_default=', "bash", "-l"'
+ security_context='{"privileged":true}'
+ '[' 0 -gt 0 ']'
+ cmd='[ "nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid" , "bash", "-l" ]'
++ cat
+ overrides='{
  "spec": {
    "nodeName": "node",
    "hostPID": true,
    "hostNetwork": true,
    "containers": [
      {
        "securityContext": {"privileged":true},
        "image": "docker.io/library/alpine",
        "name": "nsenter",
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "command": [ "nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid" , "bash", "-l" ],
        "resources": {
          "limits":   { "cpu": "100m", "memory": "256Mi" },
          "requests": { "cpu": "100m", "memory": "256Mi" }
        }
      }
    ],
    "tolerations": [
      { "key": "CriticalAddonsOnly", "operator": "Exists" },
      { "effect": "NoExecute",       "operator": "Exists" }
    ]
  }
}'
++ kubectl version --client -o yaml
++ awk '-F[ :"]+' '$2 == "minor" {print $3+0}'
+ m=26
+ '[' 26 -lt 18 ']'
+ trap 'EC=$?; kubectl --context=context --namespace=namespace delete pod --wait=false nsenter-iv2at7 >&2 || true; exit $EC' EXIT INT TERM
+ echo 'spawning "nsenter-iv2at7" on "node"'
spawning "nsenter-iv2at7" on "node"
++ '[' true = true ']'
++ echo -t
+ kubectl --context=context --namespace=namespace run --image docker.io/library/alpine --restart=Never '--overrides={
  "spec": {
    "nodeName": "node",
    "hostPID": true,
    "hostNetwork": true,
    "containers": [
      {
        "securityContext": {"privileged":true},
        "image": "docker.io/library/alpine",
        "name": "nsenter",
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "command": [ "nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid" , "bash", "-l" ],
        "resources": {
          "limits":   { "cpu": "100m", "memory": "256Mi" },
          "requests": { "cpu": "100m", "memory": "256Mi" }
        }
      }
    ],
    "tolerations": [
      { "key": "CriticalAddonsOnly", "operator": "Exists" },
      { "effect": "NoExecute",       "operator": "Exists" }
    ]
  }
}' --labels= -t -i nsenter-iv2at7
Error from server (BadRequest): container "nsenter" in pod "nsenter-iv2at7" is not available
+ EC=1
+ kubectl --context=context --namespace=namespace delete pod --wait=false nsenter-iv2at7
pod "nsenter-iv2at7" deleted
+ exit 1

codebling avatar Feb 17 '23 20:02 codebling

I just ran into this. For me it was because the node was too small to fit the 256Mi memory request. Comment out the code that automatically deletes the pod on failure, run it again, and then you can see the actual issue with kubectl get pods and kubectl describe pod <pod-name> (rough sketch below).

It would be nice to have better diagnostics on failure, or at least a flag to disable the automatic pod deletion when the command fails.
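
For anyone else hitting this, here is a rough sketch of that workflow, using the script path and pod name from the log above (the pod name suffix is randomly generated, so yours will differ):

# Edit the plugin script (here /usr/bin/kubectl-node_shell) and comment out
# the cleanup trap, i.e. the line starting with
#   trap 'EC=$?; ... delete pod --wait=false ...' EXIT INT TERM
# so the pod is not deleted when the command fails.

# Run the plugin again; the failed pod is now left behind.
kubectl node-shell <node-name>

# Inspect the leftover pod to see why the "nsenter" container never became
# available (in my case, the node could not fit the 256Mi request).
kubectl get pods
kubectl describe pod nsenter-iv2at7

# Remove the pod manually once you are done, and restore the trap.
kubectl delete pod nsenter-iv2at7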

pinealservo avatar Oct 26 '23 22:10 pinealservo

@pinealservo awesome, thanks for figuring that out!!

codebling avatar Oct 29 '23 15:10 codebling

What (if any) is the solution when the node is too small for the request? Many thanks!

jfouche-vendavo avatar Aug 06 '24 13:08 jfouche-vendavo