cluster-api-provider-vsphere
kubelet version is not respected by version set in Machine resource
/kind bug
What steps did you take and what happened:
Using the Ubuntu 18.04 Kubernetes v1.15.4 image, I wanted to quickly test Kubernetes v1.16.1, so I updated the `version` field in the Machine spec while still using the old VM template.
When I did this, the entire cluster was running v1.16.1 components except for the kubelet:
Machine:

```yaml
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: target-cluster01
    cluster.x-k8s.io/control-plane: "true"
  name: target-cluster01-controlplane-0
  namespace: default
spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: target-cluster01-controlplane-0
      namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereMachine
    name: target-cluster01-controlplane-0
    namespace: default
  version: 1.16.1
```
API server version:

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T16:51:36Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
```
Node version:

```
$ kubectl get no -o wide
NAME                              STATUS     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
target-cluster01-controlplane-0   NotReady   master   3m31s   v1.15.4   <none>        <none>        Ubuntu 18.04.3 LTS   4.15.0-65-generic   containerd://1.2.9
```
What did you expect to happen: The kubelet should also be running v1.16.1.
Anything else you would like to add:
Environment:
- Cluster-api-provider-vsphere version:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from /etc/os-release):
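The mismatch can be seen directly by comparing what the Machine requests with what the node's kubelet actually reports. This is a hedged sketch using the resource names from this report; the first command runs against the management cluster, the second against the workload cluster, so it cannot run outside those environments:

```shell
# Version requested in the Machine spec (management cluster context)
kubectl get machine target-cluster01-controlplane-0 \
  -o jsonpath='{.spec.version}{"\n"}'

# Version the kubelet actually reports (workload cluster context)
kubectl get node target-cluster01-controlplane-0 \
  -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'
```

In the situation described above, the first command prints 1.16.1 while the second prints v1.15.4.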
This isn’t a bug so much as it is by design. A kubelet can run multiple versions of K8s; there is no bug.
I agree downloading the machine images as OVAs and deploying from them would be nice, and I’ve had a branch for this forever, I just can’t find the time.
> A kubelet can run multiple versions of K8s, there is no bug.
I think you mean a kubelet is compatible with multiple versions of K8s, but a user specifying a specific version will expect the kubelet to be that version as well.
That’s just, like, your opinion man. But seriously, find the OVA issue and relate the two please. Or I will in the morning.
Also, I would call this an RFE, not a bug since nothing is broken.
Sounds good, thanks for clarifying @akutz
/assign
Reminder, update this and link it to the OVA issue.
Follow up with @fabriziopandini on clusterctl v2 many-to-one owner refs, because the idea of implementing support for on-demand OVA import would result in many-to-one relationships.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/lifecycle frozen /milestone next /priority important-longterm
Was there an OVA issue?
By design, the Kubernetes version for nodes in vSphere VMs created by CAPV depends on the template specified in the VSphereMachineTemplate used to create the Machine.
@yastij Is there anything we need to do about this? Unless there is a specific way to perform image lookup based on the k8s version in the Machine, we are stuck with the Kubernetes version defined in the template.
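To illustrate the by-design behavior described above: the kubelet binary is baked into the vSphere VM template the infrastructure object points at, so changing `Machine.spec.version` alone does nothing. A minimal sketch follows; the field layout approximates later CAPV APIs (v1alpha3-style `VSphereMachineTemplate`), and the datacenter and template names are hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereMachineTemplate
metadata:
  name: target-cluster01
  namespace: default
spec:
  template:
    spec:
      datacenter: dc0                      # hypothetical datacenter name
      # This VM template, not Machine.spec.version, determines which
      # kubelet binary the node runs. To get v1.16.1 kubelets, this must
      # reference an image built with v1.16.1 (name is hypothetical).
      template: ubuntu-1804-kube-v1.16.1
```

In other words, bumping the version in the Machine spec while pointing at a v1.15.4 image produces exactly the mismatch reported in this issue.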
/close as per @srm09 's comments.
@randomvariable: Closing this issue.
In response to this:
/close as per @srm09 's comments.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.