autoscaler
Bump the go version to v1.23.2 for the GitHub workflows.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Bump the Go version under the GitHub workflows to 1.23.2, as the Dockerfile and go.mod already use Go 1.23.2.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
Hi @gjtempleton @MaciekPytel PTAL!
Question for a future improvement: Worth scripting something up to keep these versions in sync whenever we touch one of them? (Potentially an extension to the existing update-deps.sh?)
We also have one CI pipeline for all the projects under this repo, current drift at the head of the default branch is only one patch release, but we could be drifting further in the future - any risks here?
> Question for a future improvement: Worth scripting something up to keep these versions in sync whenever we touch one of them? (Potentially an extension to the existing update-deps.sh?)
@gjtempleton, that sounds good. We can script this part so that the Go version under ci.yaml is also updated when we update other dependencies.
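A minimal sketch of what such a sync step could look like (the function name, file paths, and the `go-version:` key are illustrative assumptions, not the actual contents of update-deps.sh):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: propagate the Go version declared in go.mod to a
# GitHub workflow file, so the two cannot drift apart.
sync_go_version() {
  local go_mod="$1" ci_yaml="$2"
  local go_version
  # Extract "1.23.2" from the "go 1.23.2" directive in go.mod.
  go_version="$(awk '/^go / {print $2; exit}' "$go_mod")"
  # Rewrite any "go-version: x.y.z" line in the workflow to match.
  sed -i.bak -E "s/(go-version:[[:space:]]*)[0-9]+(\.[0-9]+)*/\1${go_version}/" "$ci_yaml"
  rm -f "${ci_yaml}.bak"
}
```

Hooking a function like this into the existing update-deps.sh would make the workflow version a derived value rather than one edited by hand.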
> We also have one CI pipeline for all the projects under this repo, current drift at the head of the default branch is only one patch release, but we could be drifting further in the future - any risks here?

IMO, there is no risk, as the projects use the Go version corresponding to the latest k8s version.
VPA: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/go.mod
@gjtempleton your thoughts?
> We also have one CI pipeline for all the projects under this repo, current drift at the head of the default branch is only one patch release, but we could be drifting further in the future - any risks here?

> IMO, there is no risk, as the projects use the Go version corresponding to the latest k8s version.
Is this a requirement? Is there value in the projects in this repo matching the k/k go version?
> We also have one CI pipeline for all the projects under this repo, current drift at the head of the default branch is only one patch release, but we could be drifting further in the future - any risks here?

> IMO, there is no risk, as the projects use the Go version corresponding to the latest k8s version.

> Is this a requirement? Is there value in the projects in this repo matching the k/k go version?
Talking about CA: for every CA release, we update the corresponding upstream (k8s) dependencies, so CA versions and Kubernetes versions have a one-to-one correspondence. I think it is the same with VPA; there was some talk of improvement in this area in VPA (#5759).
> Talking about CA: for every CA release, we update the corresponding upstream (k8s) dependencies, so CA versions and Kubernetes versions have a one-to-one correspondence.
What is the reason for this? Is there a need to have a one-to-one correspondence?
Hi @adrianmoisey, CA imports a huge chunk of internal k8s code because it calls out to the scheduler implementation. Therefore, to avoid version incompatibilities, we keep the set of libraries used in CA in sync with those used by k8s.
cc @gjtempleton, can we merge this PR? We can add the sync script you suggested above later.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jackfrancis, Shubham82. Once this PR has been reviewed and has the lgtm label, please assign mwielgus for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
- Approvers can indicate their approval by writing `/approve` in a comment
- Approvers can cancel approval by writing `/approve cancel` in a comment
Hi @gjtempleton, could you approve this PR?
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
These files have since been synchronized to Go 1.24.0; see:
- https://github.com/kubernetes/autoscaler/pull/8192
- https://github.com/kubernetes/autoscaler/pull/8121