
Kubernetes Python Client Returns "Running" Status for Terminating Pods Instead of "Terminating"

Open RichwithPoor opened this issue 6 months ago • 5 comments

When using list_namespaced_pod() or list_pod_for_all_namespaces() from the Python Kubernetes client, pods that are in the process of being deleted (showing "Terminating" status in kubectl) continue to report their status as "Running" until they completely disappear from the cluster.

What happened:

  • When a pod is being deleted (kubectl delete pod <pod-name>), kubectl get pods correctly shows the status "Terminating".
  • However, both list_namespaced_pod() and list_pod_for_all_namespaces() keep reporting the status as "Running".
  • This persists until the pod is completely removed from the cluster.

What you expected to happen: The Python client should return "Terminating" status when a pod is in the deletion process, matching kubectl behavior

Environment:

  • Kubernetes version (kubectl version): v1.31.0
  • OS (e.g., MacOS 10.13.6): macOS 14.5 (23F79)
  • Python version (python --version): Python 3.12.2
  • Python client version (pip list | grep kubernetes): 32.0.1

RichwithPoor avatar May 26 '25 03:05 RichwithPoor

The response should come from the api-server. This client doesn't cache the status.

I wonder if this could be caused by passing a resource version when listing the pods, in which case the response is served from a cached version in the api-server. Could you share more on how you make the call and how to reproduce the issue?

roycaihw avatar Jun 04 '25 20:06 roycaihw

/assign

p172913 avatar Jun 26 '25 16:06 p172913

from kubernetes import client

# assumes kubeconfig or in-cluster config has already been loaded
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
pod_status_list = [{
    "name": pod.metadata.name,
    "ip": pod.status.pod_ip,
    "status": pod.status.phase,
    "container_name": pod.spec.containers[0].name if pod.spec.containers else "None",
} for pod in pods.items]

This is what currently happens: when the pod enters the Terminating state, the phase fetched by the client is still "Running".

RichwithPoor avatar Jun 27 '25 06:06 RichwithPoor
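A likely explanation for the mismatch above: "Terminating" is not a pod phase at all. The API's PodStatus.phase field only ever holds Pending, Running, Succeeded, Failed, or Unknown; kubectl synthesizes "Terminating" in its STATUS column when metadata.deletionTimestamp is set on the pod. A client can derive the same display value itself. In this sketch, display_status is a hypothetical helper (not part of the official client), and the SimpleNamespace objects are stand-ins shaped like the client's V1Pod:

```python
from types import SimpleNamespace


def display_status(pod):
    """Mimic kubectl's STATUS column (hypothetical helper).

    The api-server never reports a "Terminating" phase; kubectl shows it
    whenever metadata.deletionTimestamp is set on the pod.
    """
    if pod.metadata.deletion_timestamp is not None:
        return "Terminating"
    return pod.status.phase


# Stand-in pod objects shaped like the client's V1Pod, for illustration:
running = SimpleNamespace(
    metadata=SimpleNamespace(deletion_timestamp=None),
    status=SimpleNamespace(phase="Running"),
)
terminating = SimpleNamespace(
    metadata=SimpleNamespace(deletion_timestamp="2025-06-27T06:00:00Z"),
    status=SimpleNamespace(phase="Running"),
)

print(display_status(running))      # Running
print(display_status(terminating))  # Terminating
```

Applied to the snippet earlier in the thread, replacing "status": pod.status.phase with "status": display_status(pod) should make the list match what kubectl prints.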

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 25 '25 06:09 k8s-triage-robot

/remove-lifecycle stale

p172913 avatar Sep 25 '25 10:09 p172913