Forcing `kubectl logs` to wait when container is creating
What would you like to be added:
When we run kubectl logs <pod> [-f]
on a container that is still being created, we get this error message:
Error from server (BadRequest): container "<container>" in pod "<pod>" is waiting to start: [ContainerCreating, PodInitializing]
In this situation we generally rerun kubectl logs <pod> [-f]
until the container has been created.
A nice feature would be for the kubectl logs
command to block until the container is created, so we only have to run the command once.
Why is this needed: To simplify the user's life.
I guess you could use kubectl wait, then do kubectl logs after that.
Would something like this work?
kubectl apply -f foo.yaml && kubectl wait --for=condition=Ready pod/foo && kubectl logs foo
If so, you could turn that into a script and make it a plugin for convenience.
See also https://github.com/kubernetes/kubernetes/issues/79547
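A minimal sketch of such a plugin (the file name kubectl-waitlogs and the 120s timeout are illustrative, not an existing plugin; kubectl picks up any executable named kubectl-<name> on the PATH as a plugin):

#!/usr/bin/env bash
# kubectl-waitlogs: block until a pod is Ready, then stream its logs.
# Usage: kubectl waitlogs <pod> [extra kubectl logs flags]
set -euo pipefail
pod="$1"; shift
# Block until the pod reports Ready (fails after the timeout).
kubectl wait --for=condition=Ready "pod/${pod}" --timeout=120s
# Stream logs once the containers exist.
kubectl logs -f "pod/${pod}" "$@"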
/triage accepted
Probably best to check whether the container is "creating" (whatever the keyword for it is) when doing a kubectl logs -f, and if it is, set up a wait loop, probably with a default timeout or something. We would likely need to wait for each container in a separate context/goroutine, so that a pod with a bunch of containers doesn't sit around waiting for all of them to become ready before outputting any logs.
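Until something like that lands in kubectl itself, the same check can be approximated from the shell (the pod name foo and the 2-minute cap are illustrative assumptions):

pod=foo                     # hypothetical pod name
for i in {1..60}; do        # ~2 minute cap: 60 polls, 2s apart
  state=$(kubectl get pod "$pod" \
    -o jsonpath='{.status.containerStatuses[*].state}' 2>/dev/null)
  case $state in
    "")        sleep 2;;    # pod or container statuses not reported yet
    *waiting*) sleep 2;;    # still ContainerCreating / PodInitializing
    *)         break;;      # every container is running or terminated
  esac
done
kubectl logs -f "$pod"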
Sometimes we want to follow the logs of 50 pods, like this:
kubectl logs -l name=my-service-xxx --follow --max-log-requests 50
However, some pods are still being created, and then the whole request fails. We need to wait for the pods that are being created while following the running pods in the meantime.
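As a stopgap, you could select only the pods that are already Running and follow just those (pods that are still creating, or that appear later, are simply missed; the label is taken from the example above):

kubectl get pods -l name=my-service-xxx \
  --field-selector=status.phase=Running -o name \
  | xargs -r -P 50 -I{} kubectl logs --follow {}
# GNU xargs: -P follows up to 50 pods in parallel, -r skips the run if no pod matches.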
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Any workaround?
Looks like wait --for=condition=Ready doesn't work for jobs. Any workaround for this?
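One possible workaround, since a Job labels its pods with job-name=<job-name>: wait on the pod rather than the job. This sketch assumes the pod already exists, and can still race with very short-lived jobs whose pods complete before ever becoming Ready:

kubectl wait --for=condition=Ready pod -l job-name=foo --timeout=120s \
  && kubectl logs -f -l job-name=foo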
@homm I went with a bit ugly while loop to solve this.
while true ; do kubectl logs job/foo -f 2>/dev/null && break || continue ; done
Note that there is no timeout with this solution.
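A variant of the same loop with a crude timeout (the 60-attempt bound and 2s pause are arbitrary choices):

for i in {1..60}; do
  kubectl logs job/foo -f 2>/dev/null && break
  sleep 2
done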
My solution (also ugly), for a container called sbt running in a job called build:
filter="-l job-name=build"
success=0
# "k" is an alias for kubectl. Poll for up to ~140s (70 iterations x 2s).
for i in {1..70}; do
  # First check whether the sbt container already terminated abnormally.
  case $(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state.terminated.reason}') in
    *OOMKilled*)
      echo "SBT was killed for lack of memory"
      k get po | grep build-
      k get ev | grep build-
      k top no
      exit 1;;
    "Error")
      echo "SBT ended with errors"
      k logs $filter -c sbt
      exit 1;;
  esac
  # Then inspect the container state to decide whether to keep waiting.
  msj=$(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state}')
  case $msj in
    *terminated*)
      echo
      echo "Detected that container SBT has ended"
      success=1
      break;;
    *waiting*)
      echo -n .
      sleep 2;;
    *running*)
      echo
      echo "Detected container SBT under execution"
      success=1
      break;;
    *)
      echo -n '*'
      sleep 2;;
  esac
done
[[ $success != 1 ]] && {
  echo "$msj"
  echo "Timed out waiting for container SBT to be created"
  exit 1
}
k logs $filter -c sbt -f || {
  echo "Failed to get logs"
  exit 1
}
/assign