
kubectl debug reports an error for a Pod in CrashLoopBackOff state

Open · kerven88 opened this issue on Jun 18 '19 · 8 comments

[root@sz-5-centos163 src]# kubectl debug finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl -n finance --fork

error parsing configuration file: yaml: line 36: found unexpected end of stream
Waiting for pod finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl-067b70f9-9197-11e9-8494-00163e340364-debug to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion

However, the debug-agent pod is already in the Running state. What could be causing this?

kerven88 · Jun 18 '19 07:06

The target Pod's status shows PodFitsHostPorts. From the kubectl describe pod output:

Status:  Failed
Reason:  PodFitsHostPorts
Message: Pod Predicate PodFitsHostPorts failed

kerven88 · Jun 18 '19 07:06

The problem is that the forked pod ran to COMPLETE status instead of RUNNING.

How many containers in the target Pod?

aylei · Jun 18 '19 08:06

The target Pod's status shows PodFitsHostPorts. From the kubectl describe pod output: Status: Failed, Reason: PodFitsHostPorts, Message: Pod Predicate PodFitsHostPorts failed

That makes sense:

  1. The original pod (target pod) uses the host network, and so does the forked pod;
  2. The forked pod is explicitly assigned to the same node as the original pod to keep the environment consistent;
  3. The forked pod therefore cannot run on the target node because of the host port collision (see the sketch below).
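
To illustrate the mechanics, here is a rough Go sketch using client-go's corev1 types (illustrative only, not the actual kubectl-debug source; the pod and node names are made up): the fork copies the target spec verbatim, so hostNetwork and the node pin come along with it.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// forkPod builds a debug copy of the target pod. Because the spec is copied
// verbatim, the copy inherits NodeName, HostNetwork and any hostPort
// declarations from the original.
func forkPod(target *corev1.Pod) *corev1.Pod {
	spec := *target.Spec.DeepCopy()
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      target.Name + "-debug", // hypothetical naming scheme
			Namespace: target.Namespace,
		},
		Spec: spec,
	}
}

func main() {
	target := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "payment-cgi", Namespace: "finance"},
		Spec: corev1.PodSpec{
			NodeName:    "sz-5-centos163", // pinned to the original node
			HostNetwork: true,             // the source of the port collision
			Containers:  []corev1.Container{{Name: "app", Image: "example"}},
		},
	}
	fork := forkPod(target)
	// The fork keeps the node pin and the host network, so it is rejected
	// with PodFitsHostPorts while the original pod still holds the ports.
	fmt.Println(fork.Spec.NodeName, fork.Spec.HostNetwork) // sz-5-centos163 true
}
```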

aylei · Jun 18 '19 08:06

The problem is that the forked pod ran to COMPLETE status instead of RUNNING.

How many containers in the target Pod?

The problem is that a Pod in the CrashLoopBackOff state can never reach Running. Isn't our goal to find the reason for the CrashLoopBackOff state?

kerven88 · Jun 27 '19 01:06

@kerven88 Yes, of course the goal is to find the reason for the CrashLoopBackOff state.

However, the strategy kubectl-debug takes is to fork a Pod and reproduce the issue in the new Pod. The command of the new Pod is replaced so the new Pod won't crash on start. This works in some scenarios but fails in specific cases like this one.

As for this issue, the problems of the new Pod and the old Pod (target Pod) are actually different:

  • The old Pod crashes on start for some reason, and that reason is what we want to find;
  • The new Pod cannot run on the target host because the host port is already taken by the old Pod, so we never get far enough to reproduce the crash at start time.

HostNetwork should be treated as a special case, and kubectl-debug should change the port of the new Pod.
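
To make that concrete, here is a minimal sketch of what such special-casing could look like (a hypothetical stripHostPorts helper on corev1 types; this is not the actual kubectl-debug code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// stripHostPorts relaxes a copied pod spec so the fork no longer competes
// with the original pod for node ports: it leaves the host network namespace
// and clears every hostPort request. A real fix could instead remap the
// ports to free ones.
func stripHostPorts(spec *corev1.PodSpec) {
	spec.HostNetwork = false
	for i := range spec.Containers {
		for j := range spec.Containers[i].Ports {
			spec.Containers[i].Ports[j].HostPort = 0 // 0 means no host port requested
		}
	}
}

func main() {
	spec := corev1.PodSpec{
		HostNetwork: true,
		Containers: []corev1.Container{{
			Name:  "app",
			Ports: []corev1.ContainerPort{{ContainerPort: 8080, HostPort: 8080}},
		}},
	}
	stripHostPorts(&spec)
	fmt.Println(spec.HostNetwork, spec.Containers[0].Ports[0].HostPort) // false 0
}
```

With the host-port requirements released, PodFitsHostPorts no longer blocks the fork on the same node, and the replaced command keeps it alive so you can exec in and investigate the original crash.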

aylei · Jun 27 '19 02:06

@kerven88 Yes, of course the goal is to find the reason for the CrashLoopBackOff state.

However, the strategy kubectl-debug takes is to fork a Pod and reproduce the issue in the new Pod. The command of the new Pod is replaced so the new Pod won't crash on start. This works in some scenarios but fails in specific cases like this one.

As for this issue, the problems of the new Pod and the old Pod (target Pod) are actually different:

  • The old Pod crashes on start for some reason, and that reason is what we want to find;
  • The new Pod cannot run on the target host because the host port is already taken by the old Pod, so we never get far enough to reproduce the crash at start time.

HostNetwork should be treated as a special case, and kubectl-debug should change the port of the new Pod.

That's right. How can I customize the port number of the new Pod?

kerven88 · Jun 27 '19 06:06

@kerven88 This requires a code change. Hopefully I will submit a PR this weekend, or I can offer some guidance if you would like to work on it.

aylei · Jun 27 '19 14:06

@kerven88 This requires a code change. Hopefully I will submit a PR this weekend, or I can offer some guidance if you would like to work on it.

Thanks! Looking forward to your PR submission.

kerven88 · Jun 28 '19 00:06