k8ssandra-operator
Upgrading the operator from 1.10.2 to 1.10.3 while a cluster is running throws a fatal error
What happened?
When upgrading the operator from 1.10.2 to 1.10.3, it starts a rolling upgrade of your Cassandra configuration, but this breaks the cluster because the operator now sets the cluster name (which was broken in 1.10.2), and you get this error: "Saved cluster name Test Cluster != configured name"
Did you expect to see something different? It should either resolve the name on the operator side or not update it on existing clusters.
How to reproduce it (as minimally and precisely as possible): Get a cluster up and running with 1.10.2, then upgrade the operator to 1.10.3.
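For context, a minimal K8ssandraCluster of the kind affected here might look like the sketch below. All names, the size, and the storage class are placeholder assumptions, and clusterName is left unset, which (judging from the error message above) is what leaves the running cluster with Cassandra's default "Test Cluster" name.

```yaml
# Hypothetical minimal K8ssandraCluster deployed with operator 1.10.2.
# Placeholder values throughout (metadata.name, datacenter name, size,
# storage class). spec.cassandra.clusterName is deliberately NOT set.
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test
spec:
  cassandra:
    serverVersion: "4.1.2"   # assumption: a 4.1.x cluster (see the reply below)
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: managed-csi   # placeholder AKS storage class
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
```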
Environment: Kubernetes 1.27.7 on AKS
- K8ssandra Operator version: 1.10.3
- Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:14:41Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.7", GitCommit:"07a61d861519c45ef5c89bc22dda289328f29343", GitTreeState:"clean", BuildDate:"2023-10-19T00:14:21Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind: AKS
- Manifests:
insert manifests relevant to the issue
- K8ssandra Operator Logs:
insert K8ssandra Operator logs relevant to the issue here
Anything else we need to know?:
Hi, is the cluster using Cassandra 4.1.x?
If so, there was indeed a bug in previous versions, and you need to set the "Test Cluster" cluster name explicitly in .spec.cassandra.clusterName
so that the operator doesn't try to update the name to what it should be, but instead plays nice with the former bug.
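A minimal sketch of that workaround, assuming a manifest like the one shown earlier, is to pin the name the running cluster already saved via the .spec.cassandra.clusterName field mentioned above:

```yaml
# Relevant excerpt of the K8ssandraCluster spec with the workaround applied:
# pin the cluster name the running cluster already saved ("Test Cluster",
# Cassandra's default) so the 1.10.3 operator does not try to change it.
spec:
  cassandra:
    clusterName: "Test Cluster"
    # ...rest of the existing cassandra spec unchanged...
```

With the name pinned, the rolling restart triggered by the 1.10.3 upgrade should no longer hit the "Saved cluster name Test Cluster != configured name" check.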