kcp
Add tests that kube types added as CRDs show the expected columns
We have custom code that ensures that when CRDs are added for built-in kube types (deployments, services, pods, etc.), kubectl get <resource> returns the columns the user expects, by replacing the CRD table converter with the Go printers used for the internal built-in types. We should add tests to ensure we don't regress.
Example - deployments - if we regress:
NAME    AGE
kuard   104m
This is what's expected:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kuard   0/1     1            0            3s
These can be both API and CLI tests.
cc @kasturinarra
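One way the CLI check could look, as a minimal Go e2e sketch that shells out to kubectl against a kcp workspace; the .kcp/admin.kubeconfig path, the default namespace, and the expected column names are illustrative assumptions, not the actual test harness wiring:

package e2e

import (
	"os/exec"
	"strings"
	"testing"
)

// TestDeploymentColumns runs kubectl get deployments against kcp and asserts that
// the header row contains the full set of columns the built-in printer produces,
// not just NAME and AGE (the regression shown above).
func TestDeploymentColumns(t *testing.T) {
	out, err := exec.Command("kubectl",
		"--kubeconfig", ".kcp/admin.kubeconfig", // placeholder kubeconfig path
		"get", "deployments", "-n", "default").CombinedOutput()
	if err != nil {
		t.Fatalf("kubectl get deployments failed: %v\n%s", err, out)
	}

	header := strings.SplitN(string(out), "\n", 2)[0]
	for _, col := range []string{"NAME", "READY", "UP-TO-DATE", "AVAILABLE", "AGE"} {
		if !strings.Contains(header, col) {
			t.Errorf("expected column %q in header %q", col, header)
		}
	}
}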
ACK & we will add these tests, thanks !!
Added a test case in polarion and here is the link for the same https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-50878
@ncdc one question: for the API tests, would you want me to test the CRD additionalPrinterColumns?
oc get crd catalogsources.operators.coreos.com -o yaml ...
- additionalPrinterColumns:
  - description: The pretty name of the catalog
    jsonPath: .spec.displayName
    name: Display
    type: string
  - description: The type of the catalog
    jsonPath: .spec.sourceType
    name: Type
    type: string
  - description: The publisher of the catalog
    jsonPath: .spec.publisher
    name: Publisher
    type: string
  - jsonPath: .metadata.creationTimestamp
    name: Age
    type: date
It's fine to test them, but it's not related to this code/issue
okay, I have added a CLI test in polarion to ensure we get all the right columns, but how do we cover the API test?
For all built-in Kubernetes types that someone could import as a CRD, verify that kubectl get output between kcp & kube matches.
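One way to cover the API side is to ask both servers for the Table rendering that kubectl prints from and compare the column definitions. A minimal sketch using client-go, assuming the test framework supplies one rest.Config for kcp and one for a plain kube cluster (the configs, namespace, and package name are placeholders):

package e2e

import (
	"context"
	"encoding/json"
	"reflect"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// tableColumns requests the server-side Table rendering of deployments (the same
// representation kubectl asks for) and returns the default-priority column names.
func tableColumns(ctx context.Context, cfg *rest.Config) ([]string, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	raw, err := clientset.AppsV1().RESTClient().Get().
		Resource("deployments").
		Namespace("default").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(ctx)
	if err != nil {
		return nil, err
	}
	var table metav1.Table
	if err := json.Unmarshal(raw, &table); err != nil {
		return nil, err
	}
	var cols []string
	for _, c := range table.ColumnDefinitions {
		if c.Priority == 0 { // kubectl only shows priority-0 columns by default
			cols = append(cols, c.Name)
		}
	}
	return cols, nil
}

// compareDeploymentColumns asserts that kcp and plain kube serve the same columns.
// The two configs would come from the test framework; they are placeholders here.
func compareDeploymentColumns(ctx context.Context, t *testing.T, kcpCfg, kubeCfg *rest.Config) {
	t.Helper()
	kcpCols, err := tableColumns(ctx, kcpCfg)
	if err != nil {
		t.Fatal(err)
	}
	kubeCols, err := tableColumns(ctx, kubeCfg)
	if err != nil {
		t.Fatal(err)
	}
	if !reflect.DeepEqual(kcpCols, kubeCols) {
		t.Errorf("column mismatch: kcp=%v kube=%v", kcpCols, kubeCols)
	}
}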
ah, okay, thanks !!
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.