Agent pod failed
Hi Team,
I have installed version 0.1.1
kubectl debug --version
debug version v0.0.0-master+$Format:%h$
It failed to debug my pods, so I checked the agent pods; they did not run successfully. The following is their status:
oc get pods -n default
NAME READY STATUS RESTARTS AGE
debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f 0/1 ImagePullBackOff 0 44m
debug-agent-pod-5dc047e6-4b0d-11eb-9bc4-dca904978b2f 0/1 PodFitsHostPorts 0 22m
debug-agent-pod-a6c36144-4b0d-11eb-b6a7-dca904978b2f 0/1 PodFitsHostPorts 0 20m
Could you let me know: (1) It failed to pull the image "aylei/debug-agent:latest"; how do I fix that? Or what credential is needed to pull the image? (2) How do I fix PodFitsHostPorts?
In addition, the below is my configuration:
# debug agent listening port (outside container)
# default to 10027
agentPort: 10027
# whether to use agentless mode
# default to true
agentless: true
# namespace of debug-agent pod, used in agentless mode
# default to 'default'
agentPodNamespace: default
# prefix of debug-agent pod, used in agentless mode
# default to 'debug-agent-pod'
agentPodNamePrefix: debug-agent-pod
# image of debug-agent pod, used in agentless mode
# default to 'aylei/debug-agent:latest'
agentImage: aylei/debug-agent:latest
# daemonset name of the debug-agent, used in port-forward
# default to 'debug-agent'
debugAgentDaemonset: debug-agent
# daemonset namespace of the debug-agent, used in port-forward
# default to 'default'
debugAgentNamespace: kube-system
# whether to use port-forward when connecting to the debug-agent
# default true
portForward: true
# image of the debug container
# default as shown
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'
# private docker registry auth Kubernetes secret (see the example after this config)
# default registrySecretName is kubectl-debug-registry-secret
# default registrySecretNamespace is default
registrySecretName: my-debug-secret
registrySecretNamespace: debug
# in agentless mode, you can set the agent pod's resource limits/requests:
# default is not set
agentCpuRequests: ""
agentCpuLimits: ""
agentMemoryRequests: ""
agentMemoryLimits: ""
# in fork mode, if you want the copied pod to retain the labels of the original pod, set this parameter
# format is []string
# If not set, it is empty by default (none of the original pod's labels are retained, and the copied pod's labels are empty)
forkPodRetainLabels: []
# You can disable SSL certificate check when communicating with image registry by
# setting registrySkipTLSVerify to true.
registrySkipTLSVerify: false
# You can set the log level with the verbosity setting
verbosity: 0
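A side note on the registrySecretName / registrySecretNamespace settings above: the secret they reference is a standard Kubernetes docker-registry secret. If the agent image ever needs to be pulled from a private registry, it could be created roughly like this (a sketch; the server, username, and password values are placeholders, while the secret name and namespace match the config above):

kubectl create secret docker-registry my-debug-secret \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace=debug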
(1) It failed to pull the image "aylei/debug-agent:latest"; how do I fix that? Or what credential is needed to pull the image?
It might be that your node did not have network access to Docker Hub, or that it was rate-limited at that time. You can use kubectl describe po <pod-name> to find out the exact reason.
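For example, using the first pod name from the listing above (adjust the namespace if yours differs):

kubectl describe pod debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f -n default
# check the Events section at the bottom for the concrete pull failure,
# e.g. an auth error, a Docker Hub rate limit (toomanyrequests), or a network/DNS timeout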
(2) How do I fix PodFitsHostPorts?
The agent port claims a host port, so you can only run one agent pod per node, and in your case that port is already held by the pod suffering from ImagePullBackOff.
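In other words, once the failed pod is removed (or pulls its image successfully), the host port frees up and the pending agent pods can be scheduled. A minimal cleanup using the pod names from the listing above:

kubectl get pods -n default -o wide   # -o wide shows which node each pod was assigned to
kubectl delete pod debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f -n default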
Thanks @aylei
Now I have fixed the above issues. When I ran the command "kubectl debug ecgateway-86d67cb79d-8vhnc", a new issue came up. Any ideas on how to fix it?
Agent Pod info: [Name:debug-agent-pod-d0bebc4e-4d62-11eb-9f4f-dca904978b2f, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-d0bebc4e-4d62-11eb-9f4f-dca904978b2f to run...
pod ecgateway-86d67cb79d-8vhnc PodIP 10.254.16.200, agentPodIP 10.16.36.245
wait for forward port to debug agent ready...
Forwarding from 127.0.0.1:10027 -> 10027
Forwarding from [::1]:10027 -> 10027
Handling connection for 10027
Start deleting agent pod ecgateway-86d67cb79d-8vhnc
end port-forward...
error execute remote, unable to upgrade connection: Failed to construct RuntimeManager. Error- only docker and containerd container runtimes are suppored right now
error: unable to upgrade connection: Failed to construct RuntimeManager. Error- only docker and containerd container runtimes are suppored right now
what's your container runtime?
Sorry, I don't know what a "container runtime" is, so I cannot answer you. Since I use the default configuration, I hoped it would work, but unfortunately it failed. Are there any configuration settings I need to change to fix it? Sorry for taking up so much of your time. Would it be possible to have a video chat (e.g. Webex or Zoom) to solve this issue? I live in Canada.
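For reference, each node's container runtime is shown in the CONTAINER-RUNTIME column of the node listing, so a check could look like this (the oc command earlier suggests this may be an OpenShift cluster, which typically runs CRI-O; that would explain the "only docker and containerd container runtimes are supported" error above):

kubectl get nodes -o wide
# the CONTAINER-RUNTIME column shows e.g. docker://..., containerd://..., or cri-o://...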