
read_namespaced_pod_log with follow=True hangs when there is a long gap between logs from pod

0xdholleran opened this issue 5 years ago • 15 comments

What happened (please include outputs or screenshots):

If I try to read the logs of a pod with the following:

from kubernetes import config, client
config.load_incluster_config()
k8s = client.CoreV1Api()
namespace = "default"
pod_name="watch-this"
logs = k8s.read_namespaced_pod_log(
    name=pod_name,
    namespace=namespace,
    follow=True,
    _preload_content=False,
)
for line in logs:
  print(line)

and there is a gap (say, 5 minutes) between line 1 and line 2 of the log being written, then the read hangs after the first line.

What you expected to happen:

The function should read all the logs as they are written by the watched pod and then exit once the pod finishes.

How to reproduce it (as minimally and precisely as possible):

Create the following pod to watch:

apiVersion: v1
kind: Pod
metadata:
  name: watch-this
  namespace: default
spec:
  containers:
  - command:
    - python3
    - -u
    - -c
    - |
      from time import sleep
      for i in range(3):
        print(i)
        if i != 2:
          sleep(300)
    image: python:3.7.4-slim-stretch
    imagePullPolicy: IfNotPresent
    name: python
    resources: {}
    stdin: true
    stdinOnce: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  serviceAccountName: default
  terminationGracePeriodSeconds: 30

Then watch the pod with the following:

kubectl -n default run -i -t test-python-kube-client --image=python:3.7.4-slim-stretch --restart=Never --command -- /bin/sh

pip install kubernetes==11.0.0
python
from kubernetes import config, client
config.load_incluster_config()
k8s = client.CoreV1Api()
namespace = "default"
pod_name="watch-this"
logs = k8s.read_namespaced_pod_log(
    name=pod_name,
    namespace=namespace,
    follow=True,
    _preload_content=False,
)
for line in logs:
  print(line)

The output gets stuck after the first line:

b'0\n'

However, if I make the equivalent request against the Kubernetes API through kubectl proxy, it works as expected:

kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods/watch-this/log?follow=true
0
1
2

Anything else we need to know?:

Environment:

  • Kubernetes version: Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.13", GitCommit:"1da71a35d52fa82847fd61c3db20c4f95d283977", GitTreeState:"clean", BuildDate:"2020-07-15T21:59:26Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • OS: macOS 10.14.6
  • Python version: 3.7.4
  • Python client version: kubernetes==11.0.0

0xdholleran avatar Aug 18 '20 13:08 0xdholleran

/assign cc @palnabarun

roycaihw avatar Aug 31 '20 16:08 roycaihw

/assign

palnabarun avatar Aug 31 '20 16:08 palnabarun

You can use the watch util to stream logs. For example: https://github.com/kubernetes-client/python-base/blob/fd322f70aa6c33782ab466afd4eaae271a27cbcb/watch/watch_test.py#L83-L85

roycaihw avatar Sep 09 '20 18:09 roycaihw

Hi @roycaihw, that doesn't work either. If I try to stream the logs of the pod defined above with:

from kubernetes import config, client, watch
config.load_incluster_config()
k8s = client.CoreV1Api()
namespace = "default"
pod_name="watch-this"
w = watch.Watch()
for e in w.stream(k8s.read_namespaced_pod_log, name=pod_name, namespace=namespace):
    print(e)

It also hangs after the first line.

0xdholleran avatar Sep 10 '20 17:09 0xdholleran

I faced the same problem last week. After spending a fair amount of time investigating, I concluded that the problem is the TCP keep-alive socket option, which urllib3 doesn't set by default. While a long HTTP request is running, the server gets no TCP keep-alive probes from the client and eventually closes the connection without notifying the client, which then hangs waiting for the rest of the response. These socket options worked for me (the times chosen are only for demonstration):

import socket

socket_options = [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),     # enable keep-alive probes
    (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5),    # idle seconds before the first probe (Linux-only constant)
    (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30),  # seconds between probes
    (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3),     # failed probes before the connection is considered dead
]

Socket options should be set in the urllib3.PoolManager class here: https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L106
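
Until the client sets these by default, here is a minimal sketch of that workaround. It assumes the client exposes the pool manager as api_client.rest_client.pool_manager (an internal attribute in kubernetes==11.0.0 that may change between versions), and connection_pool_kw must be updated before the first request, since the options only apply to pools created afterwards:

import socket

from kubernetes import client, config

config.load_incluster_config()

api_client = client.ApiClient()
# Inject TCP keep-alive options into the urllib3 PoolManager behind the REST
# client. connection_pool_kw is forwarded to each new connection pool, so this
# must run before the first request creates a pool.
api_client.rest_client.pool_manager.connection_pool_kw["socket_options"] = [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
    (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5),
    (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30),
    (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3),
]

k8s = client.CoreV1Api(api_client=api_client)
logs = k8s.read_namespaced_pod_log(
    name="watch-this",
    namespace="default",
    follow=True,
    _preload_content=False,
)
for line in logs:
    print(line)

With keep-alive probes flowing, intermediate proxies and the server are less likely to drop the idle connection, so the stream should survive long gaps between log lines.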

Another side effect of long-running TCP connections can be seen in #1158. Basically, connections used by the PoolManager are never closed and are therefore unusable for subsequent requests. This also makes the client hang if it tries to make another request (read_namespaced_pod, for example) after a long-running one. A dirty hack is to call rest_client.pool_manager.clear() after a long-running request, as in the one-liner below.
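
With the api_client from the sketch above, the hack is just (again an internal attribute, so treat it as a last resort):

api_client.rest_client.pool_manager.clear()

This closes the pooled connections, forcing the next request to open a fresh one.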

raphael-bos avatar Sep 20 '20 15:09 raphael-bos

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 19 '20 16:12 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Jan 18 '21 16:01 fejta-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

fejta-bot avatar Feb 17 '21 17:02 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 17 '21 17:02 k8s-ci-robot

/reopen

mrpowerus avatar Apr 15 '21 19:04 mrpowerus

@mrpowerus: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 15 '21 19:04 k8s-ci-robot

/reopen

Ark-kun avatar Nov 11 '22 09:11 Ark-kun

@Ark-kun: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 11 '22 09:11 k8s-ci-robot

/reopen

whybeyoung avatar May 31 '24 02:05 whybeyoung

@whybeyoung: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar May 31 '24 02:05 k8s-ci-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jul 03 '24 05:07 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 03 '24 05:07 k8s-ci-robot

/reopen

SaladBreaker avatar Oct 17 '25 10:10 SaladBreaker

@SaladBreaker: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Oct 17 '25 10:10 k8s-ci-robot