Confusion about supported Kubernetes versions
/kind bug
1. What kops version are you running? The command kops version will display this information.
git-v1.33.0
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
v1.32.6
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops upgrade cluster --name XXXXX
5. What happened after the commands executed?
kops said:
W0826 12:16:37.623873 3025 upgrade_cluster.go:207] cluster version "1.32.6" is greater than the desired version "1.32.4"
No upgrade required
6. What did you expect to happen?
I expected kops to suggest upgrading to Kubernetes 1.33.4 (kops is v1.33, so it should support Kubernetes 1.33, right?), or at least to 1.32.8, the latest patch release of the Kubernetes minor version installed in the cluster. But why is 1.32.4 "the desired version", and by whom is it desired?
7. Please provide your cluster manifest.
Irrelevant
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
You see, I rely on https://kubernetes.io/releases/ to find out the latest Kubernetes versions, and so does our security team, so I am quite surprised that kOps has its own idea of "desired" versions. If kops takes its supported versions from https://github.com/kubernetes/kops/blob/master/channels/stable, then perhaps that file is simply outdated?
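If it helps, my reading of the channel format is that its kubernetesVersions section maps a version range to a recommendedVersion, and kops upgrade cluster treats the recommendedVersion of the matching range as "the desired version". A quick way to see what the channel currently recommends (field names taken from my reading of the file, so treat this as illustrative):

$ curl -sL https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable | grep -A 2 'range:'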
Could we discuss this during the next week's office hours?
Looks like with kops 1.33.1 the situation has improved a bit?
$ kops version
Client version: 1.33.1 (git-v1.33.1)
$ kops upgrade cluster --name XXX
ITEM      PROPERTY            OLD      NEW
Cluster   KubernetesVersion   1.32.6   1.33.4
Must specify --yes to perform upgrade
$
Could we discuss this during the next week's office hours?
For sure, we can discuss this during office hours. kOps takes the recommendations from the stable channel. That file has to be updated regularly, and it's not always our top priority, even though it should be. Ideally, people would contribute such small changes. Alternatively, one can use a private channels file, or ignore the recommendations entirely and handle K8s upgrades manually.
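As a rough sketch of those two alternatives (the cluster name and the channel location below are placeholders):

$ kops edit cluster --name XXXXX
# Option A: point spec.channel at your own channel file, e.g.
#   channel: s3://my-bucket/my-channel.yaml
# Option B: pin the version yourself and ignore the recommendation, e.g.
#   kubernetesVersion: 1.33.4
$ kops update cluster --name XXXXX --yes
$ kops rolling-update cluster --name XXXXX --yes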
Why has the situation improved with the kops 1.33.1 release? Has someone also updated the channel? I did not find this in the kops 1.33.1 release notes.
Channels are always updated on the master branch, see https://github.com/kubernetes/kops/pull/17596.
So does https://github.com/kubernetes/kops/pull/17596 basically close this issue?
Do you think I can be of any help in updating the channel more promptly? I usually rely on the kops upgrade cluster command to select the version to upgrade to. It is strange, though, that the channels are not somehow built automatically from https://kubernetes.io/releases/
So does #17596 basically close this issue?
I would say so.
Do you think I can be of any help in updating the channel more promptly? I usually rely on the kops upgrade cluster command to select the version to upgrade to. It is strange, though, that the channels are not somehow built automatically from https://kubernetes.io/releases/
Thanks, that would be extremely appreciated. We used to have more automation, but automated PRs have been disabled for security reasons. We are working on other changes that will eventually improve this.
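If someone wants a starting point for such a channel bump, a minimal sketch could compare the channel against the upstream endpoint that publishes the latest patch release per minor version (dl.k8s.io/release/stable-<minor>.txt):

$ MINOR=1.33
$ curl -sL "https://dl.k8s.io/release/stable-${MINOR}.txt"    # latest upstream patch, e.g. v1.33.4
$ curl -sL https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable | grep recommendedVersion    # compare against the channel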
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale