controller-tools
✨ Generation of typed apply clients
Generate Apply clients for use with server-side apply. For built-in types, a hard-coded mapping points to the client-go/applyconfiguration types. For CRDs, we assume apply configurations live in a <path-to-package>/ac directory, which should be the case if the imported CRD packages also use our generator.
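To make the goal concrete, here is a minimal, hand-written sketch (not actual generator output) of what an apply configuration for a hypothetical CronJob CRD might look like, modeled on client-go's applyconfiguration conventions: every field is a pointer with omitempty, plus builder-style With* setters, so a server-side apply patch only carries fields the caller explicitly set. All type and field names below are illustrative assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hand-written sketch of the kind of code the generator is meant to emit for a
// hypothetical CronJob CRD, following client-go's applyconfiguration style:
// every field is a pointer with omitempty, so only fields the caller sets end
// up in the server-side apply patch body.
type ObjectMetaApplyConfiguration struct {
	Name      *string `json:"name,omitempty"`
	Namespace *string `json:"namespace,omitempty"`
}

type CronJobSpecApplyConfiguration struct {
	Schedule *string `json:"schedule,omitempty"`
	Suspend  *bool   `json:"suspend,omitempty"`
}

type CronJobApplyConfiguration struct {
	APIVersion *string                        `json:"apiVersion,omitempty"`
	Kind       *string                        `json:"kind,omitempty"`
	Metadata   *ObjectMetaApplyConfiguration  `json:"metadata,omitempty"`
	Spec       *CronJobSpecApplyConfiguration `json:"spec,omitempty"`
}

// WithSchedule mirrors the builder-style setters such generators typically emit.
func (c *CronJobApplyConfiguration) WithSchedule(schedule string) *CronJobApplyConfiguration {
	if c.Spec == nil {
		c.Spec = &CronJobSpecApplyConfiguration{}
	}
	c.Spec.Schedule = &schedule
	return c
}

func main() {
	apiVersion, kind := "batch.tutorial.kubebuilder.io/v1", "CronJob"
	name, namespace := "sample", "default"

	ac := &CronJobApplyConfiguration{
		APIVersion: &apiVersion,
		Kind:       &kind,
		Metadata:   &ObjectMetaApplyConfiguration{Name: &name, Namespace: &namespace},
	}
	ac.WithSchedule("*/5 * * * *")

	// This JSON is the kind of body an application/apply-patch+yaml request
	// would contain: only the fields that were explicitly set.
	body, err := json.MarshalIndent(ac, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Because unset fields (such as suspend above) are omitted entirely, the apply patch only claims ownership of the fields the applier actually manages.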
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
:memo: Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.
It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
- If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
- If you signed the CLA as a corporation, please sign in with your organization's credentials at https://identity.linuxfoundation.org/projects/cncf to be authorized.
- If you have done the above and are still having issues with the CLA being reported as unsigned, please log a ticket with the Linux Foundation Helpdesk: https://support.linuxfoundation.org/
- Should you encounter any issues with the Linux Foundation Helpdesk, send a message to the backup e-mail support address at: [email protected]
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test all
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is this still being considered? This would be very useful for controllers / operators that want to use SSA with their own APIs. I'd be happy to help to get this in if possible.
/reopen
@astefanutti Yes this is something we'd like to get in but I haven't had the time to work on it. Please let me know if you'd like to help out.
@Jefftree: Reopened this PR.
In response to this:
/reopen
@astefanutti Yes this is something we'd like to get in but I haven't had the time to work on it. Please let me know if you'd like to help out.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Jefftree
Once this PR has been reviewed and has the lgtm label, please assign pwittrock for approval by writing /assign @pwittrock in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test pull-controller-tools-test-master
/cc @jmrodri
/retest
Could the recent work done by @Jefftree be given another round of review 🙏🏼?
How about this feature now?
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: alvaroaleman, Jefftree
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [alvaroaleman]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test pull-controller-tools-test-master
Hi @Jefftree,
the tests are not passing on this one. Could you take a look? It seems very close to being merged.
Also, could you please check the comments above and let us know whether everything has been addressed?
👋🏼 It seems it's in great shape. Could this awesome work be merged 🙏🏼? It'd be great for users to adopt it and report any issues or improvements iteratively.
@fabriziopandini @sbueringer Given your experience with ServerSideApply, are you able to give a second look?
I will take a look probably tomorrow
In my backlog now, but will honestly take a while to get to it
Sorry for the late feedback. If I got this right, this PR adds server-side apply support to generated clients for CRDs. In CAPI we instead use the "generic" controller-runtime client, so I don't have valuable feedback to add on this PR.
Hi @vincepri @alvaroaleman Is there anything else needed to get this merged?
Is the output location of /ac configurable? Would it be preferable for that to be configurable? Is there an existing idiom for where these are stored (e.g. in client-go they live in an applyconfigurations folder; maybe we want to spell it out too)?
It is not, but that is a good point. Renaming to applyconfiguration instead of ac should make it more consistent.
Is it possible to inject additional imports? E.g. if my API imports a field from another CRD, I might need to import that CRD's apply configuration. You can do that with the upstream generator using --external-applyconfigurations; does this have a similar feature?
Not at the moment, but I wanted to add that as a follow-up once we show that the e2e workflow with controller-tools and controller-runtime is fully operational. There is a hardcoded list for client-go imports (https://github.com/kubernetes-sigs/controller-tools/pull/536/files#diff-1d721a91e969bd3d00bab2faf3d9a45bacdf7fb1668d265baaa73779b2879ee1R47-R50), and that will be made extensible.
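To illustrate what injecting external apply configurations would mean in practice, here is a hedged, hand-written sketch (not generator output; WidgetStatus is a hypothetical CRD type): a field that embeds the built-in metav1.Condition should map to client-go's already-generated ConditionApplyConfiguration rather than to a locally duplicated type, which is exactly the kind of cross-package reference the hardcoded import list (and, later, an --external-applyconfigurations-style option) has to resolve.

```go
// Package widgetac is an illustrative, hand-written example only.
package widgetac

import (
	metav1ac "k8s.io/client-go/applyconfigurations/meta/v1"
)

// WidgetStatusApplyConfiguration sketches the apply configuration for a
// hypothetical CRD status that embeds the built-in metav1.Condition type.
// The Conditions field reuses client-go's generated apply configuration
// instead of redefining the type locally.
type WidgetStatusApplyConfiguration struct {
	ObservedGeneration *int64                                 `json:"observedGeneration,omitempty"`
	Conditions         []metav1ac.ConditionApplyConfiguration `json:"conditions,omitempty"`
}
```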
And my other question relates to why this PR exists. I read through the description and comments, and it's not clear to me why I would choose to use this applyconfiguration generator for my CRD over the upstream applyconfiguration generator. What about this implementation is easier or better than the upstream generator?
The upstream applyconfiguration generator only works for k/k built-in types and aggregated apiservers. client-go does not have static types for CRDs. CRDs are not supported by that generator and this PR is intended to mirror the behavior of the upstream generator for CRDs built through kubebuilder.
> The upstream applyconfiguration generator only works for k/k built-in types and aggregated apiservers. client-go does not have static types for CRDs. CRDs are not supported by that generator and this PR is intended to mirror the behavior of the upstream generator for CRDs built through kubebuilder.

I don't think that's true. In OpenShift we have generated applyconfigurations for all CRDs using the upstream generator (here's an example of a generated configuration). The configuration of the generator on that repo was done here, and it doesn't look to me like there's anything too magical in there that would mean this can't be done generically for other CRD types as well. There's a requirement to have an openapi.json file, but again, the upstream openapi generator can be used to generate that.
Hi @alvaroaleman, @Jefftree, @JoelSpeed
Why did we add the /hold label? Is there anything else that should be addressed here? Could we add a summary of the PR's status to the description: what was done, what is still missing, and whether there's any point we haven't conveyed?
I'm concerned that this isn't actually needed as I've seen (see my previous comment) effectively the same generated files using the upstream generator already. Awaiting @Jefftree's response on that
Ah I see. I guess users of client-go could still use the upstream generator for CRDs that follow the same API conventions as k/k. I'm glad it works for your use case. However, controllers using CRDs generated via kubebuilder will still need their own set of tools to interact with SSA. controller-{runtime/tools} CRDs have a different set of conventions and tags compared to built-in types (e.g. the kubebuilder). client-go also automatically generates a typed client for interacting with ApplyConfigurations strictly through client-go, and the kubebuilder client (controller-runtime) will need its own set of tools for working with ApplyConfigurations.
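As an aside (not part of this PR), here is a rough sketch of what server-side apply through the controller-runtime client looks like today, applying a full typed object with client.Apply; the batchv1 import path below is the kubebuilder tutorial module and is purely illustrative. Generated apply configurations would instead let a controller express only the fields it intends to own.

```go
package cronjob

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	batchv1 "tutorial.kubebuilder.io/project/api/v1" // hypothetical kubebuilder CRD package
)

// applyCronJob shows the current controller-runtime server-side apply path:
// the whole typed object is serialized into the apply patch, so the caller
// implicitly claims every field it sends. Generated apply configurations aim
// to make partial, field-ownership-aware patches ergonomic instead.
func applyCronJob(ctx context.Context, c client.Client) error {
	cj := &batchv1.CronJob{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "batch.tutorial.kubebuilder.io/v1",
			Kind:       "CronJob",
		},
		ObjectMeta: metav1.ObjectMeta{Name: "sample", Namespace: "default"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/5 * * * *",
		},
	}
	return c.Patch(ctx, cj, client.Apply,
		client.FieldOwner("cronjob-controller"),
		client.ForceOwnership,
	)
}
```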
@Jefftree I'm not sure we are communicating effectively over GitHub, would you be open to discussing this via Zoom so that I can explain my concerns? I feel like it will be easier to reach an understanding between us over zoom.
In the meantime, I originally asked what the motivation was for this PR. I'm going to try and answer my own question here.
In the last couple of hours (between some other stuff), I've taken the CronJob example from this PR, and used the k8s.io/code-generator tools openapi-gen and applyconfigurations-gen to generate apply configurations for the custom type that match (modulo that the code-generator tools split the generated content into multiple files) the zz_generated.applyconfigurations.go file that the generator from this PR created.
So what's the difference? As I see it, Kubebuilder users today could generate the applyconfigurations using the upstream tooling. However, to do so, they first need to generate an openapi.json file that contains OpenAPI schemas for all of the types they need, so, if they embed something from another API, they need to include that too.
No worries, you can generate this with a small amount of code and bash, but, once you have that openapi.json, feeding the type into applyconfigurations-gen with the following args (I just used an example API for now to demonstrate) generated the expected output (I can push this to a branch for comparison tomorrow):
applyconfigurations-gen \
--output-package=github.com/openshift/api/example/v1/applyconfiguration \
--input-dirs=github.com/openshift/api/example/v1 \
--go-header-file=./hack/empty.txt \
--openapi-schema=./openapi/openapi.json \
github.com/openshift/api/example/v1
So what's the benefit of this PR? Well, it seems like it's easier to use for an end user. You don't first have to generate the openapi schema for your types; that's an advantage.
What are the cons? In my eyes, code maintenance and duplication of effort. We could merge this and then accept the maintenance burden of hosting a generator, because let's be honest, they are complex. The reason I count this as a negative is that, strictly speaking, you can go use the code-generator tools to achieve the same output. The duplication is also something to consider: we are creating two options for users, who can use the upstream tools or our version. How will they know which they should choose when both could work?
I'm wondering, given we are all in the same community, whether it would be better to invest effort into making the upstream applyconfiguration-gen tool easier to use, either by wrapping it so that controller-tools handles the openapi schema part magically, or by refactoring/contributing upstream to make it work how we want, rather than splitting and maintaining two tools that generate the same output.
> Ah I see. I guess users of client-go could still use the upstream generator for CRDs that follow the same API conventions as k/k. I'm glad it works for your use case. However, controllers using CRDs generated via kubebuilder will still need their own set of tools to interact with SSA. controller-{runtime/tools} CRDs have a different set of conventions and tags compared to built-in types (e.g. the kubebuilder). client-go also automatically generates a typed client for interacting with ApplyConfigurations strictly through client-go, and the kubebuilder client (controller-runtime) will need its own set of tools for working with ApplyConfigurations.

Finally, a couple of notes in response to this. I'm not disagreeing that controller-runtime needs to update its client to handle apply types; totally agree. But it will need to handle the core generated apply types in the same way it does CRD-generated apply types, so my understanding is that they must follow the same pattern, right?
Also, I think you made an assumption that OpenShift isn't using kubebuilder, but we are. All of our types use the same kubebuilder markers and the same controller-tools generators that the rest of the kubebuilder ecosystem does; we even use controller-runtime and kubebuilder for a number of our operators. This is why I was interested in raising the point: if we can make it work on the OpenShift APIs, then it should work on any Kubebuilder API.
Are you suggesting that kubebuilder use the openapi-gen and applyconfigurations-gen tools alongside controller-gen? Can we not wrap the "upstream" solutions and host them here to make it easier for users?
I'm suggesting we have a discussion about what we want to do; perhaps we can add a topic to next week's kubebuilder sync? As I see it, there are a few options:
- Include scripts to run openapi-gen and applyconfiguration-gen
- Use openapi-gen/applyconfiguration-gen as a library (possibly with some refactoring) and integrate it into controller-gen
- Ignore code-generator tools and maintain our own version (this PR)