kubebuilder
RFE: Generate api sub-package, with (almost) no external dependencies.
What do you want to happen?
When generating api types with `kubebuilder create api`, the corresponding Go types are generated in `./api/v1`.
However, it is unergonomic for external projects to import these types, mostly because they need to resolve all of the project's dependencies. This is especially awkward for projects that don't use controller-runtime, but it can also be annoying for projects that do but pin different versions.
The fix for this is relatively simple:
- Add a `go.mod` in `./api`, thus making it, as far as Go is concerned, a separate module
- Add a corresponding `replace` directive in the root `go.mod`
- Stop using `sigs.k8s.io/controller-runtime/pkg/scheme` in the generated api types, which is a very thin convenience package (see the sketch after this list)
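For the third point, here is a rough sketch (not the actual scaffold) of what a generated `groupversion_info.go` could look like when it relies only on `k8s.io/apimachinery`; the group name and the `Widget`/`WidgetList` kinds are hypothetical placeholders assumed to be defined elsewhere in the package:

```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

var (
	// GroupVersion identifies the group/version of the types in this package.
	GroupVersion = schema.GroupVersion{Group: "sample.example.com", Version: "v1"}

	// SchemeBuilder is built with apimachinery's runtime.NewSchemeBuilder
	// instead of sigs.k8s.io/controller-runtime/pkg/scheme.Builder.
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)

	// AddToScheme registers the types in this group/version with a scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

// addKnownTypes registers the (hypothetical) Widget kind with the scheme.
func addKnownTypes(s *runtime.Scheme) error {
	s.AddKnownTypes(GroupVersion, &Widget{}, &WidgetList{})
	metav1.AddToGroupVersion(s, GroupVersion)
	return nil
}
```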
We can also consider using go 1.18's workspace mode, but it's not actually necessary in this case.
What did you do?
kubebuilder create api
What did you expect to see?
A standalone importable package. Something like:

```console
$ ls api
v1
go.mod

$ cat go.mod
...
replace (
	github.com/sample/operator/api => ./api
)
```
Furthermore, I would like api/go.mod to have almost no external dependencies (in particular, not controller-runtime).
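As a rough illustration (the module path and versions here are placeholders, not actual scaffold output), the api module's go.mod could end up as small as:

```
module github.com/sample/operator/api

go 1.18

require k8s.io/apimachinery v0.24.0
```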
What did you see instead? Under which circumstances?
No additional go.mod, etc
Extra Labels
No response
I'm happy to take a look at this -- changing the scaffolding for new projects is simple enough, but I'm not sure what should happen for existing projects.
We could also solve point 3 by publishing `sigs.k8s.io/controller-runtime/pkg/scheme` as a separate module within controller-runtime. That way, API-consuming projects could still avoid importing the entire dependency tree.
Hi @squeed,
Thank you for reaching out. WDYT about answering the following questions, so that we can better understand what you need and why?
#### What did you do?
<!-- A clear and concise description of the steps you took (or insert a code snippet). -->
#### What did you expect to see?
<!-- A clear and concise description of what you expected to happen (or insert a code snippet). -->
#### What did you see instead? Under which circumstances?
<!-- A clear and concise description of what ACTUALLY happened (or insert a code snippet). -->
Could you please provide this info?
@camilamacedo86 Updated the issue, thanks! Let me know how I can best be of help.
@camilamacedo86 Do we need to work on this? And if yes, is the approach described in the issue description reasonable for tackling it?
I also owe this issue an update but haven’t had time to write it up yet, sorry.
@squeed So, could you please update the issue description as per https://github.com/kubernetes-sigs/kubebuilder/issues/2627#issuecomment-1106343414.
Hey, really looking forward to this. Are you still working on this, @squeed? If not, would you mind if I take a look?
Feel free to look at that @jakobmoellersap. /assign @jakobmoellersap
/assign @jakobmoellersap
One of the first things I want to mention is that it's probably a breaking change and thus could be added to the list of the necessary changes in the v4-alpha scaffolding in https://github.com/kubernetes-sigs/kubebuilder/issues/2547#issuecomment-1106337477
Hi @jakobmoellersap,
For this one, I guess we need to change controller-tools. So, I think the first step is to figure out here what would need to be changed to allow it.
Have you checked it out already? WDYT about adding a comment here explaining what needs to be changed? Or, if you find that the changes are only in Kubebuilder and you have a draft PR for them, feel free to push it and we can discuss the proposed approach there.
I think it would be easier to discuss whether or not it is a breaking change after we figure out what needs to be changed and have a proposal for it. WDYT?
PS: Note that currently go/v4-alpha is a composition of kustomize/v2-alpha and the same base language used to generate go/v3. We can generate the base language for go/v4 too (it just adds a little more maintenance burden, but at some point we will need to do it).
Hey there, I just took a first look and I think we have a few things that we need to solve:
- We need to adjust the existing scaffolding for go/v3 in `pkg/plugins/golang/v3/scaffolds/internal/templates/api`, as we need to introduce a go.mod file for the API folders. It gets a bit tricky here, as I am not sure how opinionated we want to be for multi-group APIs (should we make one go.mod or many).
- We need to solve a lifecycle challenge: `kubebuilder create api` means we have to change the kubebuilder flow to add another scaffolding hook into `pkg/plugins/golang/v3/scaffolds/internal/templates/gomod.go`, as we need to add new require/replace statements later on (in the create api flow) for importing the API types. Otherwise the setup will break, as the module is not imported in the controller module.
- We need to decide if we want to support workspaces for go 1.18 or above.
I will come up with a simple draft in kubebuilder next (if I can achieve it) or write here again if I find more issues.
Nevertheless, I think currently the only realistic way is to create a new folder, `pkg/plugins/golang/v4/`, so as not to break anything for v3.
EDIT:
Just a few updates: I'm working on the scaffolding and think we can support everything purely in kubebuilder. One additional note here is that if we do not support go.work but use regular replaces, we would have to default to a version; in this case we could use a placeholder like 0.0.0. I think workspaces are much cleaner overall, though, so I will try this in the draft I'm preparing.
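For reference, a rough sketch of what the replace-based alternative would mean for the root go.mod (the module path is assumed from the example earlier in the issue, not actual scaffold output):

```
require github.com/sample/operator/api v0.0.0

replace github.com/sample/operator/api => ./api
```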
Given the additional effort with replace statements, I decided to use go.work workspaces for now. These have the major advantage that I do not have to manually issue replace statements in the main module. The disadvantage is that, without releasing the api module properly, you will have to use `go work sync` instead of `go mod tidy`.
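A minimal sketch of the go.work file this approach would put at the project root (assuming the api module lives in ./api):

```
go 1.18

use (
	.
	./api
)
```

With this in place, `go work sync` pushes the workspace's build list back into each module's go.mod, which is why it stands in for `go mod tidy` while the api module is unreleased.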
I decided to try implementing this as a backwards-compatible, opt-in feature in v3 (since it already supports go 1.18) through a CLI flag, as that was the easiest to integrate. Ideally, though, I think we should do this already in the init step (and store that information in the PROJECT file). That way, people will already be able to decide their setup during initialization.
c/c @varshaprasad96
Learnings out of various PRs:
- We do not want to support mono-repository style setups. Reasons are listed in https://github.com/kubernetes-sigs/kubebuilder/pull/2901#issuecomment-1245000591
- Instead, it is encouraged to create multiple repositories if you want to manage API types and controllers separately, which also lets you release them independently.
- If you still want a mono-repo style setup, you will have considerable effort maintaining `replace` directives, the release process, and Makefile commands, so there is a good chance you will break with eventual updates from upstream kubebuilder and other plugins; you have to be careful there.
- We plan to introduce documentation for both aspects above and will write a reference on how to best split your kubebuilder project.
Hi @jakobmoellersap,
Just to share: what imho would help a lot to address requirements like this one would be to properly handle external APIs. See: https://github.com/kubernetes-sigs/kubebuilder/issues/1999
Updated the labels accordingly, since we agreed that this would be better addressed via the docs. c/c @jakobmoellersap
Just to get this right: you basically want documentation, in the form of a guide, on how to import external APIs from another repository via `create api`, @camilamacedo86?
Hi @jakobmoellersap,
I think we could document that. Ideally, so as not to hurt concepts like single responsibility, if a project would like to use an API that is defined/owned by another project, then the recommendation would be to work with these Kinds/APIs as external types, similar to how we can create a reconciliation for a core type.
Also, we can link:
- The doc: https://github.com/kubernetes-sigs/kubebuilder/blob/master/docs/using_an_external_type.md
- The issue where we would like to officially support this option via the tool, so that manual steps and tweaks are no longer required, see: https://github.com/kubernetes-sigs/kubebuilder/issues/1999
Then, we could also describe that in some specific scenarios, such as where a user would like to have an enterprise and a community version of the same solution (the motivation of this issue), authors have been looking for ways to import one project into another so that they do not have to re-implement the same API. For this scenario, it would be possible to use Go workspaces, and we could describe the steps/manual changes required to work with sub-modules as you did in the PR: https://github.com/kubernetes-sigs/kubebuilder/pull/2901#issuecomment-1245000591. We can provide code examples, etc.
We could also add a note to warn that these changes will make it harder to keep the project updated with the latest versions and changes provided in the Kubebuilder scaffold, since the process of re-scaffolding the project with the latest version and adding the code implementation on top would not be easily achievable. See: https://book.kubebuilder.io/migration/v2vsv3.html#project-customizations
I think it could be a new doc under the "Reference" section, such as "Sub-Module Layout".
On top of that, it would be great to have a new section under Reference with the content of https://github.com/kubernetes-sigs/kubebuilder/blob/master/docs/using_an_external_type.md (if possible, updated and with a note linking the issue where we would officially support that).
WDYT?
That sounds great! I'll get to it and link the PR here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We can let it be closed and track finishing the doc in one issue: https://github.com/kubernetes-sigs/kubebuilder/pull/3055, since that was the solution agreed upon so far.