Inconsistent output for kubectl with -o json and without for pvc/pv
What happened: After deleting a PVC/PV, I checked its status and saw different output from `kubectl get pv pvc-name -o json` and `kubectl get pv pvc-name`.

What you expected to happen: The output should be the same.
```console
$ kubectl get pv pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM             STORAGECLASS   REASON   AGE
pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347   15Gi       RWO            Delete           Terminating   default/pvc-vcp   vcp                     120m

$ kubectl get pv pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347 -o json
{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "annotations": {
            "kubernetes.io/createdby": "vsphere-volume-dynamic-provisioner",
            "pv.kubernetes.io/bound-by-controller": "yes",
            "pv.kubernetes.io/provisioned-by": "kubernetes.io/vsphere-volume"
        },
        "creationTimestamp": "2023-08-18T12:15:53Z",
        "deletionGracePeriodSeconds": 0,
        "deletionTimestamp": "2023-08-18T13:14:49Z",
        "finalizers": [
            "kubernetes.io/pv-protection"
        ],
        "name": "pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347",
        "resourceVersion": "13689821",
        "uid": "c407f53b-01cc-4b1f-a897-19ae8211b98b"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "capacity": {
            "storage": "15Gi"
        },
        "claimRef": {
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "name": "pvc-vcp",
            "namespace": "default",
            "resourceVersion": "13679572",
            "uid": "e1588d5b-e939-490a-b7cc-eb48c3a74347"
        },
        "persistentVolumeReclaimPolicy": "Delete",
        "storageClassName": "vcp",
        "volumeMode": "Filesystem",
        "vsphereVolume": {
            "fsType": "ext4",
            "volumePath": "[vsanDatastore] bd8b2463-3423-e41a-e2f5-bc97e1cbe040/clustera-dynamic-pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347.vmdk"
        }
    },
    "status": {
        "phase": "Bound"
    }
}
```
The status in the JSON output should be Terminating as well.
How to reproduce it (as minimally and precisely as possible): Create a PVC/PV and an example nginx deployment, then delete the PVC/PV. The PVC/PV will stay in the Terminating state because the volume still has an attachment.
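A minimal sketch of the reproduction, assuming a vSphere (vcp) StorageClass as in the output above; the PVC/Deployment names and size here are illustrative:

```sh
# Create a PVC against the vcp StorageClass and an nginx Deployment that mounts it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-vcp
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vcp
  resources:
    requests:
      storage: 15Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      labels: {app: nginx}
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-vcp
EOF

# Look up the bound PV, then delete the PVC and PV while the pod still has the
# volume attached; the pvc-protection/pv-protection finalizers keep them around.
PV=$(kubectl get pvc pvc-vcp -o jsonpath='{.spec.volumeName}')
kubectl delete pvc pvc-vcp --wait=false
kubectl delete pv "$PV" --wait=false

# Compare the table output with the JSON output for the PV.
kubectl get pv "$PV"
kubectl get pv "$PV" -o json
```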
Environment:
- Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
- Kustomize Version: v5.0.1
- Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17+vmware.1", GitCommit:"bc3f3c608032e00c0923cf1940e4f580b00fbf4d", GitTreeState:"clean", BuildDate:"2023-03-15T09:56:15Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This has come up a few times in the past, and it is a quirk in how the table printer works for some resource types. The status you see in the table output and the status phase you see in the JSON output are not quite the same thing.
If DeletionTimestamp has a value, it prints "Terminating" as the status instead of the actual status phase.
If you look at the table printer code for PV, you can see where this happens. https://github.com/kubernetes/kubernetes/blob/370c85f5ab0b0bfc3b30f235ddb040c246b0e1ff/pkg/printers/internalversion/printers.go#L1888-L1891
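Paraphrasing the linked lines (the helper name below is made up for illustration; the real code builds the whole table row inline):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// displayedPVPhase mirrors, in paraphrase, what the linked printPersistentVolume
// code does for the STATUS column: once a deletionTimestamp is set, the printer
// shows "Terminating" regardless of what status.phase actually says.
func displayedPVPhase(pv *corev1.PersistentVolume) string {
	phase := string(pv.Status.Phase) // e.g. "Bound" — this is what -o json reports
	if pv.ObjectMeta.DeletionTimestamp != nil {
		phase = "Terminating" // this is what the table output shows
	}
	return phase
}
```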
PVC and Pod have similar code that has the same result.
This probably isn't something we would want to change in the table printer, since people are used to seeing Terminating in the table output, and there is no other way to tell from that output that a deletion timestamp has been set.
Technically the PV would still be bound, even though a DeletionTimestamp is set. Are you thinking that should not be the case? There are many PV/PVC-related issues on the k/k repo, so it is possible you are running into one of those.
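If the goal is to see the raw phase and the deletion timestamp side by side, one option (just a sketch, using kubectl's built-in custom-columns output) is:

```sh
# Show the raw status.phase alongside the deletion timestamp for the PV.
kubectl get pv pvc-e1588d5b-e939-490a-b7cc-eb48c3a74347 \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DELETED:.metadata.deletionTimestamp
```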
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten