additionalPrinterColumns for CRDs doesn't work in k8s 1.11 for columns with array data

Open seans3 opened this issue 6 years ago • 38 comments

Kubernetes version (use kubectl version): 1.11 (server-side printing)

Environment:

  • Cloud provider or hardware configuration: GKE

Bug: reproduce using the following steps:

  1. Create CRD:

crd.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.istio.io
spec:
  group: example.istio.io
  names:
    kind: Foo
    listKind: FooList
    plural: foos
    singular: foo
  scope: Namespaced
  version: v1alpha1
  additionalPrinterColumns:
  - JSONPath: .spec.servers[*].hosts
    name: hosts
    type: string

$ kubectl apply -f crd.yaml
  2. Create an instance of the CRD:

crd-instance.yaml

apiVersion: example.istio.io/v1alpha1
kind: Foo
metadata:
  name: foo0
spec:
  servers:
  - hosts:
    - foo.example.com
    - bar.example.com
  - hosts:
    - baz.example.com
$ kubectl apply -f crd-instance.yaml
  3. Print instance of CRD:
$ kubectl get foos foo0
NAME      HOSTS
foo0      [foo.example.com bar.example.com]

EXPECTED

Notice that only the first array of hosts is printed; the second array ([baz.example.com]) is missing. We would expect both arrays to be printed. If we request the JSONPath directly, it prints correctly:

$ kubectl get foos foo0 -o jsonpath='{.spec.servers[*].hosts}'
[foo.example.com bar.example.com] [baz.example.com]

seans3 avatar Aug 03 '18 20:08 seans3
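
For reference, a minimal Go sketch (illustrative only, not part of the report above) of how this expression evaluates with k8s.io/client-go/util/jsonpath, the JSONPath implementation kubectl uses; the hand-built object below mirrors the foo0 instance. The wildcard yields one matched value per entry in .spec.servers, which is why the direct -o jsonpath query prints both hosts arrays while the HOSTS column shows only the first:

package main

import (
	"fmt"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Hand-built unstructured content mirroring the foo0 example above.
	obj := map[string]interface{}{
		"spec": map[string]interface{}{
			"servers": []interface{}{
				map[string]interface{}{"hosts": []interface{}{"foo.example.com", "bar.example.com"}},
				map[string]interface{}{"hosts": []interface{}{"baz.example.com"}},
			},
		},
	}

	jp := jsonpath.New("hosts")
	if err := jp.Parse("{.spec.servers[*].hosts}"); err != nil {
		panic(err)
	}

	// FindResults returns [][]reflect.Value; the wildcard matches one value
	// per servers[] entry, so both hosts lists are present in the results.
	results, err := jp.FindResults(obj)
	if err != nil {
		panic(err)
	}
	for _, group := range results {
		for _, v := range group {
			fmt.Println(v.Interface())
		}
	}
	// Output:
	// [foo.example.com bar.example.com]
	// [baz.example.com]
}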

created a PR: https://github.com/kubernetes/kubernetes/pull/67079

nikhita avatar Aug 07 '18 13:08 nikhita

/kind bug /area kubectl /priority P2

seans3 avatar Sep 26 '18 00:09 seans3

/assign @nikhita

seans3 avatar Sep 26 '18 00:09 seans3

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 25 '18 01:12 fejta-bot

/remove-lifecycle stale

nikhita avatar Dec 25 '18 10:12 nikhita

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Mar 25 '19 11:03 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Apr 24 '19 11:04 fejta-bot

/remove-lifecycle rotten

seans3 avatar Apr 24 '19 15:04 seans3

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jul 23 '19 15:07 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Aug 22 '19 16:08 fejta-bot

/remove-lifecycle rotten

seans3 avatar Aug 22 '19 17:08 seans3

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

fejta-bot avatar Sep 21 '19 17:09 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 21 '19 17:09 k8s-ci-robot

This is still happening with v1.16.8

obj.yaml

apiVersion: "k8s.josegomez.io/v1"
kind: StorageClusterPair
metadata:
  name: primary-dr
spec:
  remoteSites:
    - name: dr1
    - name: dr2

crd.yaml

[...]
      additionalPrinterColumns:
        - name: Sites
          type: string
          priority: 0
          jsonPath: .spec.remoteSites[*].name
          description: Remote sites enabled.
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp

kubectl get storageclusterpair primary-dr

NAME         SITES   AGE
primary-dr   dr1     6m35s

kubectl get storageclusterpair primary-dr -o jsonpath='{.spec.remoteSites[*].name}'

dr1 dr2

pipoe2h avatar Apr 26 '20 13:04 pipoe2h

/reopen

brianpursley avatar Apr 26 '20 14:04 brianpursley

@brianpursley: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 26 '20 14:04 k8s-ci-robot

/remove-lifecycle rotten /assign

brianpursley avatar Apr 27 '20 18:04 brianpursley

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Sep 13 '20 05:09 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Oct 13 '20 05:10 fejta-bot

This still happens on 1.18

howardjohn avatar Nov 10 '20 17:11 howardjohn

/remove-lifecycle rotten

eddiezane avatar Nov 10 '20 18:11 eddiezane

I've narrowed this down to here.

I can take a stab at a fix but I'm not sure if this was intentional or not.

Maybe @smarterclayton or @sttts have insight?

eddiezane avatar Nov 11 '20 01:11 eddiezane
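
For readers without the link: the line referenced above appears to be in the CRD table convertor of apiextensions-apiserver, where the column's JSONPath is evaluated but only the first match is rendered. The sketch below is a simplified, hypothetical reconstruction of that pattern (cellForColumn and the surrounding wiring are illustrative names, not the upstream code):

package main

import (
	"fmt"

	"k8s.io/client-go/util/jsonpath"
)

// cellForColumn mimics the suspected server-side behavior: evaluate the
// column's JSONPath against the object's unstructured content and keep only
// the first match. The name and wiring are hypothetical, not upstream code.
func cellForColumn(path string, obj map[string]interface{}) (interface{}, error) {
	jp := jsonpath.New("column")
	if err := jp.Parse(fmt.Sprintf("{%s}", path)); err != nil {
		return nil, err
	}
	results, err := jp.FindResults(obj)
	if err != nil || len(results) == 0 || len(results[0]) == 0 {
		return nil, err
	}
	// Only the first matched value is rendered; further matches from a
	// wildcard like .spec.servers[*].hosts are dropped, which produces the
	// truncated HOSTS column reported in this issue.
	return results[0][0].Interface(), nil
}

func main() {
	obj := map[string]interface{}{
		"spec": map[string]interface{}{
			"servers": []interface{}{
				map[string]interface{}{"hosts": []interface{}{"foo.example.com", "bar.example.com"}},
				map[string]interface{}{"hosts": []interface{}{"baz.example.com"}},
			},
		},
	}
	cell, _ := cellForColumn(".spec.servers[*].hosts", obj)
	fmt.Println(cell) // prints only [foo.example.com bar.example.com]
}

If keeping only the first match was intentional for simple paths, a fix would presumably need to join all matches into the cell (similar to what the client-side jsonpath printer does) rather than discarding them.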

/assign @seans3

eddiezane avatar Nov 11 '20 17:11 eddiezane

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Feb 09 '21 18:02 fejta-bot

/remove-lifecycle stale

Ping

howardjohn avatar Feb 09 '21 18:02 howardjohn

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar May 10 '21 19:05 fejta-bot

/remove-lifecycle stale

eddiezane avatar May 12 '21 22:05 eddiezane

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 10 '21 22:08 k8s-triage-robot

/remove-lifecycle stale

amirschw avatar Aug 12 '21 15:08 amirschw

Is this issue still relevant for the supported Kubernetes versions (v1.20-v1.22)?

palnabarun avatar Sep 08 '21 10:09 palnabarun