Add structural OpenAPI schema to Tekton CRDs
Kubernetes 1.15 introduced structural OpenAPI schemas, which help with validation, support kubectl explain, and make it easier to build tooling around CRDs.
Structural schemas should be added for all CRDs introduced by Tekton.
This PR is related: https://github.com/tektoncd/pipeline/pull/1179
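For illustration, a structural schema is just an OpenAPI v3 schema embedded under each version in the CRD manifest. Below is a rough, hand-trimmed sketch of what that could look like on the Task CRD (only one version and a couple of spec fields shown; not the full Tekton API):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tasks.tekton.dev
spec:
  group: tekton.dev
  names:
    kind: Task
    plural: tasks
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          description: Task is a collection of Steps to run.
          properties:
            spec:
              type: object
              properties:
                description:
                  type: string
                steps:
                  type: array
                  items:
                    type: object
                    # Step embeds corev1.Container; fields elided in this sketch.
                    x-kubernetes-preserve-unknown-fields: true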
Is this something that can be easily generated using existing tooling? I'm a bit worried we'll end up with structs and YAML validation out of sync, and I'm also a bit uncomfortable with maintaining code to generate one from the other. Is there something OpenAPI provides for this?
/kind feature /kind question
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with
/reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Send feedback to tektoncd/plumbing.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten /remove-lifecycle stale /reopen
@vdemeester: Reopened this issue.
In response to this:
/remove-lifecycle rotten /remove-lifecycle stale /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
GitHub recently introduced this too: https://github.blog/2020-07-27-introducing-githubs-openapi-description/
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with
/reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification. /close
Send feedback to tektoncd/plumbing.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen /lifecycle frozen It would still be nice to be able to tackle this.
@vdemeester: Reopened this issue.
In response to this:
/reopen /lifecycle frozen It would still be nice to be able to tackle this.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Knative does this now; one attempt is available here: https://github.com/tektoncd/pipeline/compare/main...vdemeester:schema-gen. The limitation, as of today, is that it doesn't work with multi-version CRDs (such as ours); it panics.
I've started playing around with this, and narrowed down the panic to something thinking the package github.com/tektoncd/pipeline/pkg/apis/resource/v1alpha1 should have a kind Pipeline in it. I haven't yet figured out why the heck it's thinking that, though.
EDIT: Actually, that's just 'cos I copied in config/300-pipeline.yaml to config/300-resources/300-pipeline.yaml, not realizing that the config/300-resources/300-task.yaml had already been tweaked to remove the v1alpha1 version to work at all. So it's not Pipeline specific - it's any CRD in config/300-resources (where @vdemeester's branch is looking to pick up existing CRD definitions) with multiple versions. Hmmmmmmm.
I'm pretty sure there's something weird about our CRDs and/or our packages resulting in this...
EDIT: Ok, my best guess so far is that it's getting confused by the existence of both pipeline v1alpha1 and resource v1alpha1 with the same group (tekton.dev) and version but different packages, and is trying to find pipeline v1alpha1 stuff in the resource v1alpha1 package. It's storing CRDs from the directory we give it (i.e., config/300-resources/...) in a map with the key being group+kind, and then expecting everything with that key and the same version to be in the same package. I think.
EDIT AGAIN: My description of the problem may not be right, but I just tested and verified that if I just put config/300-task.yaml in config/300-resources/ and remove the v1alpha1 version from it, hack/update-schemas.sh works, but if I instead remove v1beta1 from 300-task.yaml, it fails. So it's not the multiple versions of a particular CRD that's the problem, it's definitely something with there being multiple v1alpha1 packages, because I can create the same failure by copying config/300-run.yaml (which only has a v1alpha1 version) in instead. I will continue digging tomorrow.
Arg… I don't like that 😝… Seems like it's a bit too "tightly" coupled to the assumption that you put all your types in the same package… 😅
Yeah, I'm trying to figure out exactly where that assumption is made...
Ok, the panic is the same as in https://github.com/kubernetes-sigs/controller-tools/issues/624, though the scenario in that issue doesn't quite match ours... BUT! I have a tentative fix: https://github.com/abayer/controller-tools/commit/c8009f3483b75a66454a5b0ad89159ed40413c80 - I'm now working on a test case, then I'll open a PR to controller-tools. After that, I'll go through what we actually need from the knative fork of controller-tools and see if I can turn that into a PR to controller-tools as well; if it's too weird/specific to do that, I'll just push a branch to my fork that we can use.
PR opened for that fix - https://github.com/kubernetes-sigs/controller-tools/pull/627 - I'm now trying to determine if we need to filter some fields out, like the knative-specific branch does. I think at a minimum we need to only take the Spec field of PersistentVolumeClaim (used in the VolumeClaimTemplate field of WorkspaceBinding), since we definitely don't need the TypeMeta, ObjectMeta, or Status of PersistentVolumeClaim in the schema. The other corev1 types that could possibly need filtering are corev1.Container and corev1.Volume, but we need many fields in corev1.Volume, and I think we're ok with everything in corev1.Container.
So yeah, if we do need that filtering, we'll need a change to schemapatch to do what the knative-specific branch does in terms of allowing you to specify "only use these fields (or exclude these fields) in the schema for this type". I'll get going on that tomorrow.
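To make the filtering idea concrete, here's a rough hand-written sketch (not generator output) of roughly what a pruned schema for the workspaces/volumeClaimTemplate field could look like if only the PersistentVolumeClaim spec is kept; the PVC fields shown are just a subset of the upstream spec:

workspaces:
  type: array
  items:
    type: object
    properties:
      name:
        type: string
      volumeClaimTemplate:
        type: object
        properties:
          # Only the PVC spec is kept; typeMeta/objectMeta/status are dropped.
          spec:
            type: object
            properties:
              accessModes:
                type: array
                items:
                  type: string
              storageClassName:
                type: string
              resources:
                type: object
                x-kubernetes-preserve-unknown-fields: true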
/assign @abayer
Well, https://github.com/kubernetes-sigs/controller-tools/pull/627#issuecomment-931355857 - our layout is awfully nonstandard, and the not-unreasonable response from the maintainer is asking if we can restructure things rather than make a change to schemapatch. I think we could solve this by moving everything in pkg/apis/resource/v1alpha1 and pkg/apis/run/v1alpha1 into pkg/apis/pipeline/v1alpha1 - no changes to CRDs needed, just rearranging the packages. That said, I'm not sure what other possible ramifications could come out of doing that.
Started playing around with what would be involved in moving pkg/apis/resource/v1alpha1 and pkg/apis/run/v1alpha1 and...well, so far, I don't think it's viable without mucking up our types pretty thoroughly.
They are in separate packages because of an import cycle ("loop dependency"). We may want to go ahead and reorganize that by duplicating some code and making v1alpha1 and v1beta1 completely independent of each other (code-wise), which could be beneficial for the future anyway 😇
OpenAPI schemas will also help with managing Tekton through Terraform: without them, Terraform doesn't know how to ignore the release annotation on a TaskRun, which causes Terraform to constantly want to destroy and recreate the manifest.
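For context, the failure mode is roughly this: metadata gets added to the object on the cluster side after Terraform applies it, so the live object never matches the configuration and every plan shows a diff. The annotation key below is only illustrative of that pattern:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-run
  annotations:
    # Set server-side after apply (exact key shown here is illustrative);
    # without a schema or an ignore rule, Terraform treats this as drift
    # and wants to destroy and recreate the object.
    pipeline.tekton.dev/release: "vX.Y.Z"
spec:
  taskRef:
    name: example-task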
I want to add that the OpenAPI schema is also really handy for users. I tend to use kubectl explain on CRDs to understand what fields are available and what they mean. For Tekton CRs the result is rather uninformative.
$ kubectl explain tasks
KIND: Task
VERSION: tekton.dev/v1beta1
DESCRIPTION:
<empty>
Compare the above to a CRD that has an OpenAPI schema. You can drill down into the spec as far as you like.
$ kubectl explain podmonitors
KIND: PodMonitor
VERSION: monitoring.coreos.com/v1
DESCRIPTION:
PodMonitor defines monitoring for a set of pods.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object> -required-
Specification of desired Pod selection for target discovery by Prometheus.
If we're not going to get common recipes for Tekton, we really need autocompletion in our editors.
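And once a schema exists and is published somewhere, editor autocompletion can already be wired up via the yaml-language-server modeline that common YAML editor extensions understand; the URL below is just a placeholder for wherever a generated schema would end up being published:

# Placeholder URL; point this at wherever the generated Task schema is published.
# yaml-language-server: $schema=https://example.com/schemas/tekton-task-v1beta1.json
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: echo
      image: alpine
      script: |
        echo hello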