CustomColumnsPrinter prints address for fields with pointer types instead of the value
What happened:
I was using the kubectl module in my project and noticed that when server-side flattening and printing isn't used, the CustomColumnsPrinter prints the address of object fields with pointer types instead of the actual value. For example, .spec.terminationGracePeriodSeconds in v1.Pod has the type *int64, and its value gets printed as 0x140009abc30 instead of an integer such as 42.
What you expected to happen:
I expect the CustomColumnsPrinter to handle pointer types correctly: if the value is nil, <none> should be printed; otherwise, the dereferenced value should be printed.
How to reproduce it (as minimally and precisely as possible):
Write a program or test that generates a fake Pod object and prints it with the CustomColumnsPrinter, using the JSONPath for the .spec.terminationGracePeriodSeconds field.
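A minimal sketch of such a repro, assuming the v0.23.3 module APIs (get.NewCustomColumnsPrinterFromSpec from k8s.io/kubectl/pkg/cmd/get and the client-go scheme codecs; the exact decoder wiring here is illustrative):

```go
package main

import (
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/kubectl/pkg/cmd/get"
)

func main() {
	grace := int64(42)

	// Fake Pod with a pointer-typed field set.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec:       v1.PodSpec{TerminationGracePeriodSeconds: &grace},
	}

	// Column spec using the JSONPath for the pointer-typed field.
	printer, err := get.NewCustomColumnsPrinterFromSpec(
		"NAME:.metadata.name,GRACE:.spec.terminationGracePeriodSeconds",
		scheme.Codecs.UniversalDecoder(),
		false, // noHeaders
	)
	if err != nil {
		panic(err)
	}

	// Expected: GRACE prints 42. Observed: a hex address such as 0x140009abc30.
	if err := printer.PrintObj(pod, os.Stdout); err != nil {
		panic(err)
	}
}
```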
Anything else we need to know?: Bug is here: https://github.com/kubernetes/kubectl/blob/f9b136324e012e7fd49c667cd6d4e0635cb8c39d/pkg/cmd/get/customcolumn.go#L253
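For illustration, the nil-check-then-dereference behavior described under "What you expected to happen" could look roughly like the hypothetical helper below; printValue is not the code at the linked line, just a sketch of the expected semantics:

```go
package main

import (
	"fmt"
	"reflect"
)

// printValue is a hypothetical helper sketching the expected behavior:
// nil pointers render as "<none>", non-nil pointers are dereferenced
// before formatting, and non-pointer values are formatted as-is.
func printValue(v reflect.Value) string {
	if v.Kind() == reflect.Ptr {
		if v.IsNil() {
			return "<none>"
		}
		v = v.Elem()
	}
	return fmt.Sprintf("%v", v.Interface())
}

func main() {
	n := int64(42)
	fmt.Println(printValue(reflect.ValueOf(&n)))            // 42
	fmt.Println(printValue(reflect.ValueOf((*int64)(nil)))) // <none>
}
```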
Environment:
- Kubernetes client and server versions (use kubectl version): I'm using these modules: k8s.io/api, k8s.io/apimachinery, k8s.io/cli-runtime, k8s.io/client-go, and k8s.io/kubectl, all at version 0.23.3.
- Cloud provider or hardware configuration:
- OS (e.g. cat /etc/os-release):
Hi @evanmcclure - I see you referenced this from a commit. Do you plan on opening a PR?
/triage accepted
/priority backlog
Definitely. I was waiting to see if the issue would be accepted. I'll prep the PR soon.
I'll try to get this in tomorrow.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.