
Forcing `kubectl logs` to wait when container is creating

nalepae opened this issue 2 years ago • 12 comments

What would you like to be added:

When we run kubectl logs <pod> [-f] on a container that is still being created, we get this error message:

Error from server (BadRequest): container "<container>" in pod "<pod>" is waiting to start: [ContainerCreating, PodInitializing]

In this situation, we generally re-run kubectl logs <pod> [-f] until the container is created.

A nice feature would be for the kubectl logs command to block until the container is created, so that we only have to run the command once.

Why is this needed: To simplify the user's life.

nalepae avatar Jun 10 '22 07:06 nalepae

I guess you could use kubectl wait, then do kubectl logs after that.

Would something like this work?

kubectl apply -f foo.yaml && kubectl wait --for=condition=Ready pod/foo && kubectl logs foo

If so, you could turn that into a script and make it a plugin for convenience.

brianpursley avatar Jun 13 '22 18:06 brianpursley
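For reference, kubectl picks up any executable named kubectl-<name> on the PATH as a plugin, so the suggestion above could be wrapped roughly like this (a minimal sketch; the plugin name kubectl-waitlogs and its argument handling are made up for illustration):

#!/usr/bin/env bash
# kubectl-waitlogs: block until a pod is Ready, then stream its logs.
# Usage: kubectl waitlogs <pod> [extra flags passed to kubectl logs, e.g. -f]
set -euo pipefail

pod="$1"; shift
# Wait for the pod to report Ready (kubectl wait times out after 30s by default).
kubectl wait --for=condition=Ready "pod/${pod}"
# Stream the logs, forwarding any remaining flags.
kubectl logs "pod/${pod}" "$@"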

See also https://github.com/kubernetes/kubernetes/issues/79547

brianpursley avatar Jun 15 '22 15:06 brianpursley

/triage accepted

Probably best to check whether the container is "creating" (whatever the exact keyword for it is) when doing a kubectl logs -f, and if it is, set up a wait loop, probably with a default timeout or something. We would likely need to wait for each container in separate contexts/goroutines, so a pod with a bunch of containers doesn't sit around waiting for all of them to become ready before outputting any logs.

mpuckett159 avatar Jun 22 '22 21:06 mpuckett159
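Until something like that exists in kubectl itself, a rough shell approximation of the idea is to poll the container state and only start following once it leaves the waiting state (a sketch with placeholder names and no error handling):

# Wait for the first container of $pod to leave the "waiting" state,
# with a bounded timeout, then start following its logs.
pod=my-pod
for i in $(seq 1 60); do    # ~2 minutes with the sleep below
  state=$(kubectl get pod "$pod" -o jsonpath='{.status.containerStatuses[0].state}')
  [[ -n "$state" && "$state" != *waiting* ]] && break
  sleep 2
done
kubectl logs "$pod" -f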

> I guess you could use kubectl wait, then do kubectl logs after that.
>
> Would something like this work?
>
> kubectl apply -f foo.yaml && kubectl wait --for=condition=Ready pod/foo && kubectl logs foo
>
> If so, you could turn that into a script and make it a plugin for convenience.

Sometimes we want to follow the logs of 50 pods, like this:

kubectl logs -l name=my-service-xxx --follow --max-log-requests 50

However, if some pods are still being created, the whole request fails. We need a way to wait for the pods that are still creating while following the running ones in the meantime.

Fonger avatar Sep 06 '22 12:09 Fonger
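One possible workaround for the multi-pod case is to retry each pod's log stream independently in the background, so pods that are still creating don't block the ones already running (a sketch, untested; adjust the selector and polling interval as needed):

# Follow logs for every pod matching the selector; pods whose containers
# are still creating are retried without blocking the others.
for pod in $(kubectl get pods -l name=my-service-xxx -o name); do
  (
    until kubectl logs "$pod" -f 2>/dev/null; do
      sleep 2   # container not started yet; try again
    done
  ) &
done
wait   # keep the foreground shell alive while the followers run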

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 05 '22 13:12 k8s-triage-robot

/remove-lifecycle stale

mpuckett159 avatar Dec 05 '22 21:12 mpuckett159

Any workaround?

DavidPerezIngeniero avatar Apr 05 '23 09:04 DavidPerezIngeniero

Looks like kubectl wait --for=condition=Ready doesn't work for jobs. Any workaround for this?

homm avatar May 04 '23 10:05 homm

@homm I went with a somewhat ugly while loop to solve this.

while true ; do kubectl logs job/foo -f 2>/dev/null && break || continue ; done

Note that there is no timeout with this solution.

Filipoliko avatar May 04 '23 15:05 Filipoliko
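If a timeout is wanted, the same loop can be bounded, for example as below (a sketch; 60 attempts with a 2-second sleep gives roughly 2 minutes). As an aside, for a Job it is also possible to wait on the pod it creates via the job-name label that the Job controller sets, e.g. kubectl wait --for=condition=Ready pod -l job-name=foo, though that still fails if the pod object does not exist yet.

# Retry for up to ~2 minutes, then give up instead of looping forever.
for i in $(seq 1 60); do
  kubectl logs job/foo -f 2>/dev/null && exit 0
  sleep 2
done
echo "Timed out waiting for logs of job/foo" >&2
exit 1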

> @homm I went with a somewhat ugly while loop to solve this.
>
> while true ; do kubectl logs job/foo -f 2>/dev/null && break || continue ; done
>
> Note that there is no timeout with this solution.

My solution (also ugly), when launching a container called sbt in a job called build (here k is an alias for kubectl):

filter="-l job-name=build"
for i in {1..70}; do
  case $(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state.terminated.reason}') in
    *OOMKilled*)
      echo SBT has been killed for low memory
      k get po | grep build-
      k get ev | grep build-
      k top no
      exit 1;;
    "Error")
      echo SBT has ended with errors
      k logs $filter -c sbt
      exit 1;;
  esac
  msj=$(k get po $filter -o jsonpath='{.items[*].status.containerStatuses[?(@.name=="sbt")].state}')
   case $msj in
    *terminated*)
      echo
      echo Detected container SBT has ended
      success=1
      break;;
    *waiting*)
      echo -n .
      sleep 2;;
    *running*)
      echo
      echo Detected container SBT under execution
      success=1
      break;;
    *)
      echo -n '*'
      sleep 2;;
   esac
done
[[ $success != 1 ]] && {
  echo $msj
  echo Too much waiting for container SBT to be created
  exit 1
}
k logs $filter -c sbt -f || {
  echo Fails getting logs
  exit 1;
}

DavidPerezIngeniero avatar May 04 '23 15:05 DavidPerezIngeniero

/assign

ankritisachan avatar Sep 15 '23 09:09 ankritisachan