Valid container name info needed in kubectl logs/exec output
What would you like to be added?
Add container name info to the kubectl logs xxx and kubectl exec xxx error output when a pod has multiple containers. Currently, there is no valid container name info in the output.
Current:
kubectl logs test-pod -c abc
error: container abc is not valid for pod test-pod
kubectl exec -it test-pod -c abc -- sh
Error from server (BadRequest): container abc is not valid for pod test-pod
Expected:
kubectl logs test-pod -c abc
error: container abc is not valid for pod test-pod out of: main-0, main-1, main-2, init-0 (init), init-1 (init)
kubectl exec -it test-pod -c abc -- sh
Error from server (BadRequest): container abc is not valid for pod test-pod out of: main-0, main-1, main-2, init-0 (init), init-1 (init)
Why is this needed?
When a pod has multiple containers, the kubectl logs xxx and kubectl exec xxx error output does not list the valid container names; adding them makes it easy to choose the right container.
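Below is a minimal sketch in Go of how such a message could be built from the pod spec. The helpers allContainerNames and containerNotFoundError are hypothetical names used for illustration only, not kubectl's actual implementation:

package main

import (
    "fmt"
    "strings"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// allContainerNames lists every regular and init container in the pod so an
// error message can suggest valid names. Hypothetical helper, not kubectl code.
func allContainerNames(pod *corev1.Pod) string {
    names := make([]string, 0, len(pod.Spec.Containers)+len(pod.Spec.InitContainers))
    for _, c := range pod.Spec.Containers {
        names = append(names, c.Name)
    }
    for _, c := range pod.Spec.InitContainers {
        names = append(names, c.Name+" (init)")
    }
    return strings.Join(names, ", ")
}

// containerNotFoundError builds the kind of error message proposed above.
func containerNotFoundError(pod *corev1.Pod, container string) error {
    return fmt.Errorf("container %s is not valid for pod %s out of: %s",
        container, pod.Name, allContainerNames(pod))
}

func main() {
    // Pod shaped like the test yaml below.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
        Spec: corev1.PodSpec{
            InitContainers: []corev1.Container{{Name: "init-0"}, {Name: "init-1"}},
            Containers:     []corev1.Container{{Name: "main-0"}, {Name: "main-1"}, {Name: "main-2"}},
        },
    }
    // Prints: container abc is not valid for pod test-pod out of: main-0, main-1, main-2, init-0 (init), init-1 (init)
    fmt.Println(containerNotFoundError(pod, "abc"))
}

Run against the test pod below, this prints the expected message shown above.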
Test yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  initContainers:
  - name: init-0
    image: busybox
    command:
    - echo
    - msg from init-0
  - name: init-1
    image: busybox
    command:
    - echo
    - msg from init-1
  containers:
  - name: main-0
    image: busybox
    command:
    - /bin/sh
    - -c
    - sleep 3600
  - name: main-1
    image: busybox
    command:
    - /bin/sh
    - -c
    - sleep 3600
  - name: main-2
    image: busybox
    command:
    - /bin/sh
    - -c
    - sleep 3600
kubectl apply -f test-pod.yaml, then:
- kubectl logs test-pod -c abc
- kubectl exec -it test-pod -c abc -- sh
will reproduce this issue.
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/sig cli
/transfer kubectl
@astraw99 during our bug scrub meeting, we were not clear about the actual problem here. Could you please clarify whether you want the container names in the error messages or in the regular output (i.e. which container is used during the exec/logs)?
@ardaguclu When a pod has multiple containers, the kubectl logs xxx and kubectl exec xxx error messages do not list the valid container names; adding them would make the error messages more friendly.
For more info, please see the issue description above.
In my opinion this will be a useful addition. Let's wait and see the opinions from other folks.
I think this would be a valuable improvement to the kubectl logs and kubectl exec commands, especially in multi-container Pod scenarios. Without knowing the valid container names, I had to run: kubectl describe pod my-pod just to find the correct names. This adds extra steps and slows down troubleshooting, especially when debugging in production.
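As a stopgap, the container names can also be listed directly instead of reading the whole describe output, e.g. (using the my-pod name from above):
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name} {.spec.initContainers[*].name}'
But having the names in the error message itself would still save that extra step.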
/triage accepted