controller-tools
Generating CRD in go code
Hello :)
Thank you for the amazing work you guys are doing!
I have been using make manifests to generate my CRDs, and everything works perfectly fine.
However, I now need to generate the CustomResourceDefinition object directly in Go code, based on the struct I have defined and annotated in my _types.go file, because I manage my cluster from Go code using the k8s APIs rather than YAML files.
From browsing the code in this repo, I feel like the solution is there somewhere in the pkg/crd folder. However, this is a lot of code to browse through (Thanks again for the hard work).
Would you happen to have any doc on how to generate CRD inside go code, and not as a YAML file?
Any help is greatly appreciated! Thank you
Current workaround:
versions[idx].Schema = &extsv1.CustomResourceValidation{
	OpenAPIV3Schema: &extsv1.JSONSchemaProps{
		Type:                   "object",
		Properties:             map[string]extsv1.JSONSchemaProps{},
		XPreserveUnknownFields: &preserveUnknownFields, // preserveUnknownFields := true
	},
}
This generates a valid schema that performs no validation, which is not good enough for production environments.
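For reference, here is a self-contained sketch of that workaround creating the CRD in the cluster through the apiextensions clientset; the group, kind, and names are placeholders, and the code is untested:

package main

import (
	"context"

	extsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// installPermissiveCRD registers a CRD whose schema accepts any object.
// This satisfies the apiserver's "OpenAPIV3Schema is required" check
// but performs no validation.
func installPermissiveCRD(ctx context.Context, cfg *rest.Config) error {
	preserveUnknownFields := true
	crd := &extsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.my.domain"}, // placeholder: <plural>.<group>
		Spec: extsv1.CustomResourceDefinitionSpec{
			Group: "example.my.domain", // placeholder group
			Names: extsv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Scope: extsv1.NamespaceScoped,
			Versions: []extsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// The permissive schema from the workaround above.
				Schema: &extsv1.CustomResourceValidation{
					OpenAPIV3Schema: &extsv1.JSONSchemaProps{
						Type:                   "object",
						Properties:             map[string]extsv1.JSONSchemaProps{},
						XPreserveUnknownFields: &preserveUnknownFields,
					},
				},
			}},
		},
	}
	clientset, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = clientset.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	return err
}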
The struct that is in the _types.go file can be used to communicate with the cluster from Go code without using YAML files.
For example, if you have a Client (from controller-runtime) you can invoke client.Create(ctx, myStruct) to create a resource in the cluster.
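In sketch form (the Widget type and the api/v1 import path are placeholders for whatever lives next to your _types.go; AddToScheme is the scheme registration kubebuilder generates alongside it):

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	myapiv1 "example.my.domain/myoperator/api/v1" // placeholder module path
)

func createWidget(ctx context.Context) error {
	cfg, err := config.GetConfig() // kubeconfig or in-cluster config
	if err != nil {
		return err
	}
	// Register your API group's types so the client can serialize them.
	scheme := runtime.NewScheme()
	if err := myapiv1.AddToScheme(scheme); err != nil {
		return err
	}
	c, err := client.New(cfg, client.Options{Scheme: scheme})
	if err != nil {
		return err
	}
	obj := &myapiv1.Widget{ // the struct from your _types.go
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	}
	return c.Create(ctx, obj) // no YAML involved
}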
Does that help?
Hi @Porges,
Thanks for the response, but I fear I might not have explained my problem properly. My problem is not interacting with the CRs in the cluster, but how to create the CRDs in the cluster using Go code.
When I tried to create a CRD from Go code, the apiserver complained about OpenAPIV3Schema being required. And to be honest, I don't want to write this struct manually (because I am lazy, and because it would be a pain to keep it up to date with the struct), when I am almost certain your controller-gen code can generate it by reading the struct files. (From what I gathered reading the pkg/crd code, that's what you guys are doing in controller-gen.)
So, the only solution I've found so far is the workaround shown above, where I'm creating an empty OpenAPIV3Schema that accepts anything and everything. But that's not very type-safe and might lead to user errors when interacting with the cluster, since I won't be able to rely on k8s validating the CRs being created/updated.
Thank you
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What's the status on this one?
/reopen
@yuvDev-hub: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
In case anyone stumbles across this:
I was facing a similar issue on a project and ended up finding that sigs.k8s.io/controller-runtime/pkg/envtest has a handy-dandy InstallCRDs() function that reads CRD manifests from disk and creates them in the apiserver. This opens the door for a very hacky workaround: use controller-gen to generate the types and CRD YAML files, then feed those files to InstallCRDs(), roughly as sketched below.
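An untested sketch of that workaround (the manifest path is a placeholder for wherever controller-gen writes your CRDs):

import (
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func installGeneratedCRDs() error {
	cfg, err := config.GetConfig()
	if err != nil {
		return err
	}
	// Reads the YAML that `make manifests` / controller-gen produced
	// and creates the CRDs in whatever cluster cfg points at.
	_, err = envtest.InstallCRDs(cfg, envtest.CRDInstallOptions{
		Paths: []string{"config/crd/bases"}, // placeholder path
	})
	return err
}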
I'd still be interested in seeing code generation handle this, though! I have 0 experience with code generation and am slammed at work, but I may take a look :)
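If someone does pick this up, the following is an untested sketch of driving controller-tools' own pkg/crd and pkg/loader packages in-process, mirroring what the controller-gen crd generator does internally. These packages are exported but not a stable API, so treat it as a starting point only; the package path and group/kind are placeholders:

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-tools/pkg/crd"
	crdmarkers "sigs.k8s.io/controller-tools/pkg/crd/markers"
	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func generateCRDInMemory() error {
	// Load the Go packages that contain the annotated _types.go files.
	roots, err := loader.LoadRoots("./api/v1") // placeholder path
	if err != nil {
		return err
	}
	// Register the +kubebuilder:... markers so the parser understands them.
	reg := &markers.Registry{}
	if err := crdmarkers.Register(reg); err != nil {
		return err
	}
	parser := &crd.Parser{
		Collector: &markers.Collector{Registry: reg},
		Checker:   &loader.TypeChecker{},
	}
	crd.AddKnownTypes(parser) // teach the parser about TypeMeta/ObjectMeta etc.
	for _, root := range roots {
		parser.NeedPackage(root)
	}
	gk := schema.GroupKind{Group: "example.my.domain", Kind: "Widget"} // placeholder group/kind
	parser.NeedCRDFor(gk, nil)
	crdObj, ok := parser.CustomResourceDefinitions[gk]
	if !ok {
		return fmt.Errorf("no CRD generated for %v", gk)
	}
	// crdObj is a full apiextensions/v1 CustomResourceDefinition, schema
	// included, ready to create in the cluster without touching YAML.
	fmt.Println(crdObj.Name)
	return nil
}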