kubectl-debug
kubectl debug reports an error for a Pod in CrashLoopBackOff state
[root@sz-5-centos163 src]# kubectl debug finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl -n finance --fork
error parsing configuration file: yaml: line 36: found unexpected end of stream
Waiting for pod finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl-067b70f9-9197-11e9-8494-00163e340364-debug to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion
However, the debug-agent pod is already Running. What is going on here?
The problem is that the forked pod ran to COMPLETE status instead of RUNNING.
How many containers are there in the target Pod?
The target Pod's status is PodFitsHostPorts. kubectl describe pod output:
Status: Failed
Reason: PodFitsHostPorts
Message: Pod Predicate PodFitsHostPorts failed
That makes sense:
- The original pod (target pod) uses the host network, and so does the forked pod;
- The forked pod is explicitly scheduled onto the same node as the original pod to keep the environment consistent;
- The forked pod therefore cannot run on the target node because of the host port collision (a quick way to confirm this is sketched below).
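A quick way to confirm this diagnosis is to read the target Pod's spec and check whether it declares hostNetwork or any hostPort. The Go sketch below is only an illustration: it assumes client-go is available, reuses the pod name and namespace from the report above, and the kubeconfig path is an example.

```go
// Hedged sketch: read the target Pod's spec with client-go and report
// whether it uses the host network or declares hostPort values.
// The kubeconfig path is an example; pod name and namespace come from
// the report above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod, err := client.CoreV1().Pods("finance").Get(
		context.TODO(),
		"finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl",
		metav1.GetOptions{},
	)
	if err != nil {
		panic(err)
	}

	// A true hostNetwork flag or any non-zero hostPort explains the
	// PodFitsHostPorts failure of the forked pod on the same node.
	fmt.Println("hostNetwork:", pod.Spec.HostNetwork)
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.HostPort != 0 {
				fmt.Printf("container %q binds hostPort %d\n", c.Name, p.HostPort)
			}
		}
	}
}
```

The same fields are visible in `kubectl get pod <name> -n <namespace> -o yaml`, so no code is strictly required; the sketch only makes the check explicit.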
The problem is that a pod in the CrashLoopBackOff state can never reach Running. Isn't our goal to find the reason for the CrashLoopBackOff state?
@kerven88 Yes, of course the goal is to find the reason for the CrashLoopBackOff state.
However, the strategy kubectl-debug takes is to fork a Pod and reproduce the issue in the new Pod. The command of the new Pod is replaced so that the new Pod won't crash on start. This works in some scenarios but fails in specific cases like this one.
As for this issue, the problems of the new Pod and the old Pod (target Pod) are actually different:
- The old Pod crashes for some reason on start, and that reason is what we want to find;
- The new Pod cannot run on the target host because the host port is already taken by the old Pod, so we never even get to reproduce the crash at start time.
HostNetwork should be treated as a special case, and kubectl-debug should change the ports of the new Pod.
That's right, how can I customize the port number of the new Pod?
@kerven88 This requires modification of code, hopefully I will submit a PR this weekend, or I can give some guidance if you are willing to work on this.
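A minimal sketch of the kind of change being discussed, assuming the fork is built by copying the target Pod's spec: clear the host-network flag and the per-container hostPort bindings on the copy so it can be scheduled next to the original. The function name sanitizeForkedPodSpec is hypothetical; this is not the actual kubectl-debug implementation, just an illustration of the idea.

```go
// Hedged sketch: sanitize a copy of the target Pod's spec so the forked
// pod no longer competes for the node's host ports. Not the real
// kubectl-debug code; function and example values are made up.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// sanitizeForkedPodSpec returns a deep copy of the spec with hostNetwork
// disabled and every container hostPort cleared.
func sanitizeForkedPodSpec(spec *corev1.PodSpec) *corev1.PodSpec {
	forked := spec.DeepCopy()
	forked.HostNetwork = false
	for i := range forked.Containers {
		for j := range forked.Containers[i].Ports {
			forked.Containers[i].Ports[j].HostPort = 0
		}
	}
	return forked
}

func main() {
	// Example target spec that uses the host network and a host port,
	// similar to the situation described in this issue.
	target := &corev1.PodSpec{
		HostNetwork: true,
		Containers: []corev1.Container{{
			Name:  "cgi",
			Ports: []corev1.ContainerPort{{ContainerPort: 8080, HostPort: 8080}},
		}},
	}
	forked := sanitizeForkedPodSpec(target)
	fmt.Println("forked hostNetwork:", forked.HostNetwork)
	fmt.Println("forked hostPort:", forked.Containers[0].Ports[0].HostPort)
}
```

Dropping hostPort entirely (rather than picking another fixed port) lets the scheduler place the fork on the same node without extra configuration; if the crashing process itself depends on a specific host port, that part of the behavior would still need to be reproduced some other way.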
Thanks! Looking forward to your PR submission.