Update components to latest version
Hello folks,
I think some components should be upgraded to their latest available versions:
- coredns: v1.9.3
- ~~metallb: v0.13.4~~
- registry: v2.8.1
- local-path-provisioner: v0.0.22
- local-volume-provisioner: v2.5.0
- vsphere cloud controller: v1.24.0
- vsphere syncer: v2.6.0
- vsphere csi attacher: v3.5.0
- vsphere csi controller: v2.6.0
- vsphere csi liveness probe: v2.7.0
- vsphere csi provisioner: v3.2.1
- vsphere csi resizer: v1.5.0
Let me know if I can help in some way.
Thanks, Marco
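For reference, a minimal sketch of how such bumps are typically applied as cluster overrides, assuming kubespray's usual `*_version` variable convention. The exact variable names below are assumptions and should be checked against `roles/kubespray-defaults` before use:

```yaml
# Hypothetical override file, e.g. group_vars/k8s_cluster/k8s-cluster.yml.
# Variable names follow kubespray's *_version naming pattern; verify each
# one against roles/kubespray-defaults in the checked-out release.
coredns_version: "v1.9.3"
registry_version: "2.8.1"
local_path_provisioner_version: "v0.0.22"
local_volume_provisioner_version: "v2.5.0"
```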
Regarding coredns, I'd prefer to stay on 1.8.6 until the next kubespray version, which will support Kubernetes 1.25 (where coredns is upgraded to 1.9.3); we've already had a lot of issues when specifying a different coredns version than kubeadm's.
Metallb is underway in this PR: https://github.com/kubernetes-sigs/kubespray/pull/9120
For the other components, nothing to add; you're good to go if you want to :rocket:
Got it. Given your answer, I suggest updating coredns to the latest 1.8.x version instead: v1.8.7.
For the other components, I will prepare separate PRs (one per component) ASAP.
Do you agree?
Even if that seems like a minor bump, kubeadm bundles corefile-migration 1.0.14, which supports coredns only up to 1.8.6, so I'd recommend not going with 1.8.7 either.
You can at least group them by "vendor", but yes 👍
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.