autoscaler
ClusterAutoscaler: Put APIs in a separate go module
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
I put the APIs into a separate Go module.
Which issue(s) this PR fixes:
Fixes #6307
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/assign @x13n
Also, it might be worth creating a separate Go module in https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/provisioningrequest/client as well.
@x13n WDYT?
I don't think it's worth maintaining 3 packages. Maybe the client and APIs can be in the same package?
I don't think it's worth maintaining 3 packages. Maybe the client and APIs can be in the same package?
It makes sense.
Through some investigation, I found that the k8s.io/autoscaler/cluster-autoscaler/apis package can not have a vendor directory separate from the one in k8s.io/autoscaler/cluster-autoscaler, because of the error below. @x13n Do we need to have a vendor directory for k8s.io/autoscaler/cluster-autoscaler/apis?
In k8s.io/autoscaler/cluster-autoscaler/provisioningrequest/provreqwrapper:
```
Error: provisioningrequest/provreqwrapper/wrapper.go:64:9: cannot use pr.v1Beta1PR.CreationTimestamp (variable of type "k8s.io/autoscaler/cluster-autoscaler/apis/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Time) as "k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Time value in return statement
Error: provisioningrequest/provreqwrapper/wrapper.go:69:9: cannot use pr.v1Beta1PR (variable of type *v1beta1.ProvisioningRequest) as "k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime".Object value in return statement: *v1beta1.ProvisioningRequest does not implement "k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime".Object (wrong type for method DeepCopyObject)
	have DeepCopyObject() "k8s.io/autoscaler/cluster-autoscaler/apis/vendor/k8s.io/apimachinery/pkg/runtime".Object
	want DeepCopyObject() "k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime".Object
Error: provisioningrequest/provreqwrapper/wrapper.go:85:9: cannot use pr.v1Beta1PR.UID (variable of type "k8s.io/autoscaler/cluster-autoscaler/apis/vendor/k8s.io/apimachinery/pkg/types".UID) as "k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/types".UID value in return statement
Error: provisioningrequest/provreqwrapper/wrapper.go:90:9: cannot use pr.v1Beta1PR.Status.Conditions (variable of type []"k8s.io/autoscaler/cluster-autoscaler/apis/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Condition) as []"k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Condition value in return statement
Error: provisioningrequest/provreqwrapper/wrapper.go:95:35: cannot use conditions (variable of type []"k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Condition) as []"k8s.io/autoscaler/cluster-autoscaler/apis/vendor/k8s.io/apimachinery/pkg/apis/meta/v1".Condition value in assignment
```
https://github.com/kubernetes/autoscaler/actions/runs/7267376813/job/19801050209?pr=6315
Once the vendor directory is removed from the k8s.io/autoscaler/cluster-autoscaler/apis package, CI works fine: https://github.com/kubernetes/autoscaler/actions/runs/7267869755/job/19802584596?pr=6315
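To make the type mismatch concrete, here is a minimal, hypothetical Go sketch (not the actual provreqwrapper code; the import path is shown only for illustration) of why a second vendor directory under apis/ breaks the build: the apimachinery types seen by the apis module become distinct from the apimachinery types vendored under cluster-autoscaler/, so a plain return no longer type-checks.

```go
// Hypothetical sketch of the provreqwrapper pattern, not the real file.
package provreqwrapper

import (
	// When built from cluster-autoscaler, this resolves to
	// cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// If the apis module carried its own vendor/ directory, the types inside
	// v1beta1.ProvisioningRequest would instead be compiled against
	// apis/vendor/k8s.io/apimachinery/..., a different, incompatible copy.
	v1beta1 "k8s.io/autoscaler/cluster-autoscaler/apis/provisioningrequest/autoscaling.x-k8s.io/v1beta1"
)

// ProvisioningRequest wraps the versioned API object.
type ProvisioningRequest struct {
	v1Beta1PR *v1beta1.ProvisioningRequest
}

// CreationTimestamp compiles only when both packages agree on a single copy
// of k8s.io/apimachinery; with two vendor trees the two metav1.Time types are
// unrelated, and the return statement fails exactly as in the CI log above.
func (pr *ProvisioningRequest) CreationTimestamp() metav1.Time {
	return pr.v1Beta1PR.CreationTimestamp
}
```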
@x13n friendly ping :)
I resolved conflicts.
Apologies for the slow response.
/lgtm /approve
@kisieland FYI
/assign @mwielgus
(for hack/for-go-proj.sh)
I squashed old commits into one, and then I replaced alpha k8s libraries with the released ones.
@x13n @alculquicondor Please take another look.
go.mod LGTM, in the sense that it will be usable from Kueue. I'll leave the rest to the owners :)
/assign @x13n @mwielgus
Hm... This makes me think about a couple of things:
- Versioning should be consistent between main CA module and the new API module. Specifically, the k8s dependency should use the same tag.
- There's no binary that will be released for the new module, but we should keep the deps in sync going forwards. Do we need any changes to https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/hack/update-vendor.sh for this?
- Since 1.29 was already released, I think we should use v1.30.0-alpha.0 on the master branch.
Versioning should be consistent between main CA module and the new API module. Specifically, the k8s dependency should use the same tag. Since 1.29 was already released, I think we should use v1.30.0-alpha.0 on the master branch.
These two points are in conflict. Users of the CA APIs shouldn't have to depend on alpha k8s modules.
I think using 1.29.x in the APIs and 1.30.0-alpha.y would work in most scenarios. CA code would compile with 1.30, as that satisfies both dependencies.
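As a rough illustration of that point (the module paths are real, the version numbers are only examples), the two go.mod files could pin different apimachinery versions, and Go's minimal version selection would build CA against the higher one, satisfying both:

```
// Sketch of cluster-autoscaler/apis/go.mod (example versions only).
module k8s.io/autoscaler/cluster-autoscaler/apis

go 1.21

require k8s.io/apimachinery v0.29.0
```

```
// Sketch of cluster-autoscaler/go.mod (example versions only).
module k8s.io/autoscaler/cluster-autoscaler

go 1.21

require (
	k8s.io/apimachinery v0.30.0-alpha.0
	k8s.io/autoscaler/cluster-autoscaler/apis v0.0.0-00010101000000-000000000000
)

// While both modules live in the same repo, the in-repo module would
// typically be wired up with a replace directive.
replace k8s.io/autoscaler/cluster-autoscaler/apis => ./apis
```

Building cluster-autoscaler would then select k8s.io/apimachinery v0.30.0-alpha.0, which also satisfies the apis module's v0.29.0 requirement, while external users of the apis module alone stay on the released v0.29.0.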
There's no binary that will be released for the new module, but we should keep the deps in sync going forwards. Do we need any changes to https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/hack/update-vendor.sh for this?
Yes, we need an additional script or a modified hack/update-vendor.sh. I think a separate script would be better, since cluster-autoscaler/apis doesn't have a vendor directory.
Versioning should be consistent between main CA module and the new API module. Specifically, the k8s dependency should use the same tag. Since 1.29 was already released, I think we should use v1.30.0-alpha.0 on the master branch.
These two points are in conflict. Users of the CA APIs shouldn't have to depend on alpha k8s modules.
I think using 1.29.x in the APIs and 1.30.0-alpha.y would work in most scenarios. CA code would compile with 1.30, as that satisfies both dependencies.
That makes sense, but it will be inconsistent with tags unless we start releasing the API module independently from CA. Unless we follow something like this:
- master branch - API module uses latest k/k release deps, CA uses latest k/k alpha
- release branch 1.XX - both API module and CA use latest k/k patch: 1.XX.YY
Does that make sense to you? This should be documented in our release docs. CC @gjtempleton @MaciekPytel for thoughts.
Yes, we need an additional script or a modified hack/update-vendor.sh. I think a separate script would be better, since cluster-autoscaler/apis doesn't have a vendor directory.
Honestly, I'm not sure we really need CA to have vendor either; we could rename the script to update-deps.sh so it doesn't imply vendoring.
That makes sense, but it will be inconsistent with tags unless we start releasing the API module independently from CA. Unless we follow something like this:
master branch - API module uses latest k/k release deps, CA uses latest k/k alpha; release branch 1.XX - both API module and CA use latest k/k patch: 1.XX.YY. Does that make sense to you? This should be documented in our release docs. CC @gjtempleton @MaciekPytel for thoughts.
@x13n I imagined that this (autoscaler) repository would cut releases for cluster-autoscaler/apis separately from the cluster-autoscaler-1.x.y releases. If this repo doesn't cut dedicated releases, external projects can not import cluster-autoscaler/apis at a tagged version, due to Go's module restrictions.
cc: @alculquicondor
Yes, we need an additional script or a modified hack/update-vendor.sh. I think a separate script would be better, since cluster-autoscaler/apis doesn't have a vendor directory.
Honestly, I'm not sure we really need CA to have vendor either; we could rename the script to update-deps.sh so it doesn't imply vendoring.
I'm ok with either way. I'll try to modify the existing script.
If we need a separate process for cutting releases of the new module, we need to document it as well. I was thinking it could piggyback on the existing CA release tags, though. Why is this an issue? We have a 1:1 mapping between CA minor, k8s minor and the Go version used - does that render the API module unusable? I'd think that if it depends on a specific k/k version, it is ok to rely on the Go version used by that k/k as well.
I haven't studied this PR deeply yet, but I agree with the thinking that it would be nice if these API pieces could be included from other projects that might be implementing common autoscaling components. Thanks for the work here!
I was thinking it could piggyback on the existing CA release tags, though. Why is this an issue?
If we don't cut a release for cluster-autoscaler/apis, external projects can not import the k8s.io/autoscaler/cluster-autoscaler/apis package at a tagged version, due to the following Go modules limitation:
https://go.dev/wiki/Modules#publishing-a-release
It means that an external project can only import the module by pinning a commit hash (a pseudo-version), like this:
```
module foo/bar

go 1.21

require (
	k8s.io/autoscaler/cluster-autoscaler/apis v0.0.0-xxxxxxxxxxxxxxxx-xxxxxxxxxxxx
	...
)
```
So, I think that cutting dedicated cluster-autoscaler/apis tags would be better. @x13n WDYT?
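For comparison, a sketch of what an external go.mod could look like once dedicated tags exist (the version number is purely illustrative). For a module in a subdirectory, Go expects the git tag to be prefixed with the module's relative path, so the tag backing this require line would be cluster-autoscaler/apis/v1.29.0:

```
// Sketch of an external project's go.mod once cluster-autoscaler/apis tags
// are cut (illustrative version).
module foo/bar

go 1.21

require k8s.io/autoscaler/cluster-autoscaler/apis v1.29.0
```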
Yes, that makes sense to me, thanks. While reading about this a bit more, I came across https://go.dev/wiki/Modules#faqs--multi-module-repositories (see the "Is it possible to add a module to a multi-module repository?" section). I think this is exactly the case we have here? Won't this PR lead to an "ambiguous import" error when trying to use the new apis module?
The more I read about this, the more I wonder if it wouldn't be a better idea to put this module in a separate repo under kubernetes-sigs.
That has the disadvantage that PRs that need API updates will have to be split among 2 repositories. The best model is what we have for k/k with the staging folder, but that sounds like overkill for just one submodule.
We might be better off doing a submodule in the repo. It's just 1 or 2 extra steps when branching a new version.
Yeah, I agree trying to follow in k/k footsteps here would be overkill. A submodule is ok, we just need to follow these extra steps. I see that right now the CA repo depends on k8s.io/autoscaler/cluster-autoscaler/apis v0.0.0-00010101000000-000000000000. What are the next steps after this PR is merged?
Yeah, I agree trying to follow in k/k footsteps here would be overkill. A submodule is ok, we just need to follow these extra steps. I see that right now the CA repo depends on k8s.io/autoscaler/cluster-autoscaler/apis v0.0.0-00010101000000-000000000000. What are the next steps after this PR is merged?
@x13n We need to work on https://github.com/kubernetes/autoscaler/pull/6315#issuecomment-1884726098 in this PR. Please give me some time.
I don't think there are any further steps needed after this PR is merged. Any concerns?
I have rebased. I'm still working on scripts.
@x13n I modified update-deps.sh. PTAL, thanks.
@x13n friendly ping :)
I have rebased this PR to resolve conflicts.