additionalPrinterColumns for CRDs doesn't work in k8s 1.11 for columns with array data
Kubernetes version (use kubectl version): 1.11 (server-side printing)
Environment:
- Cloud provider or hardware configuration: GKE
Bug. Reproduce using the following steps:
- Create CRD:
crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.istio.io
spec:
  group: example.istio.io
  names:
    kind: Foo
    listKind: FooList
    plural: foos
    singular: foo
  scope: Namespaced
  version: v1alpha1
  additionalPrinterColumns:
  - JSONPath: .spec.servers[*].hosts
    name: hosts
    type: string
$ kubectl apply -f crd.yaml
- Create an instance of the CRD:
crd-instance.yaml
apiVersion: example.istio.io/v1alpha1
kind: Foo
metadata:
  name: foo0
spec:
  servers:
  - hosts:
    - foo.example.com
    - bar.example.com
  - hosts:
    - baz.example.com
$ kubectl apply -f crd-instance.yaml
- Print instance of CRD:
$ kubectl get foos foo0
NAME   HOSTS
foo0   [foo.example.com bar.example.com]
EXPECTED
Notice that only the first array of hosts is printed; the second array ([baz.example.com]) is missing. We would expect both arrays to be printed. If we request the JSONPath directly, it prints correctly:
$ kubectl get foos foo0 -o jsonpath='{.spec.servers[*].hosts}'
[foo.example.com bar.example.com] [baz.example.com]
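For what it's worth, the difference can be reproduced outside kubectl with k8s.io/client-go/util/jsonpath. The sketch below is illustrative only (it is not the apiserver's table-printing code, and the map literal is just the foo0 instance re-typed by hand): the path matches both hosts arrays, and a printer that consumes only the first match produces exactly the truncated HOSTS column above.

package main

import (
	"fmt"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Unstructured content of the foo0 instance from crd-instance.yaml above.
	foo := map[string]interface{}{
		"spec": map[string]interface{}{
			"servers": []interface{}{
				map[string]interface{}{"hosts": []interface{}{"foo.example.com", "bar.example.com"}},
				map[string]interface{}{"hosts": []interface{}{"baz.example.com"}},
			},
		},
	}

	jp := jsonpath.New("hosts")
	if err := jp.Parse("{.spec.servers[*].hosts}"); err != nil {
		panic(err)
	}
	results, err := jp.FindResults(foo)
	if err != nil {
		panic(err)
	}

	// results[0] holds one value per matched server, i.e. both hosts arrays.
	for _, v := range results[0] {
		fmt.Println(v.Interface())
	}
	// Prints:
	//   [foo.example.com bar.example.com]
	//   [baz.example.com]

	// Reading only the first match reproduces the truncated HOSTS column.
	fmt.Println("first match only:", results[0][0].Interface())
}

The client-side -o jsonpath output above walks every match, which appears to be why that command shows both arrays.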
created a PR: https://github.com/kubernetes/kubernetes/pull/67079
/kind bug /area kubectl /priority P2
/assign @nikhita
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is still happening with v1.16.8
obj.yaml
apiVersion: "k8s.josegomez.io/v1"
kind: StorageClusterPair
metadata:
name: primary-dr
spec:
remoteSites:
- name: dr1
- name: dr2
crd.yaml
[...]
additionalPrinterColumns:
- name: Sites
  type: string
  priority: 0
  jsonPath: .spec.remoteSites[*].name
  description: Remote sites enabled.
- name: Age
  type: date
  jsonPath: .metadata.creationTimestamp
$ kubectl get storageclusterpair primary-dr
NAME         SITES   AGE
primary-dr   dr1     6m35s
$ kubectl get storageclusterpair primary-dr -o jsonpath='{.spec.remoteSites[*].name}'
dr1 dr2
/reopen
@brianpursley: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten /assign
This still happens on 1.18
/remove-lifecycle rotten
I've narrowed this down to here.
I can take a stab at a fix, but I'm not sure whether this behavior was intentional.
Maybe @smarterclayton or @sttts have insight?
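Purely as an illustration of one possible direction (this is not the code linked above, the earlier PR, or any committed fix, and cellFromResults is a name made up for this sketch): the column cell could be assembled from every JSONPath match instead of only the first one.

package main

import (
	"fmt"
	"reflect"
	"strings"
)

// cellFromResults is a hypothetical helper: it flattens every matched value
// into one comma-separated cell instead of keeping only results[0][0].
func cellFromResults(results [][]reflect.Value) string {
	var parts []string
	for _, group := range results {
		for _, v := range group {
			parts = append(parts, fmt.Sprintf("%v", v.Interface()))
		}
	}
	return strings.Join(parts, ",")
}

func main() {
	// Two fake matches standing in for foo0's two hosts arrays.
	results := [][]reflect.Value{{
		reflect.ValueOf([]interface{}{"foo.example.com", "bar.example.com"}),
		reflect.ValueOf([]interface{}{"baz.example.com"}),
	}}
	fmt.Println(cellFromResults(results))
	// Prints: [foo.example.com bar.example.com],[baz.example.com]
}

How multiple matches should actually be rendered (comma-joined, space-joined, or something else) would be a question for the maintainers.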
/assign @seans3
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Ping
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Is this issue still relevant for the supported Kubernetes versions (v1.20-v1.22)?