cockroach-operator
Allow k8s operator users to add labels to existing deployments
Adding a label to an existing operator deployment is not currently supported by our operator. The label update appears to be treated as a restricted operation by the operator, which is why it fails. Updating labels should not be considered a restricted operation, yet according to the logs it is forbidden:
spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
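For context, this restriction is enforced by the Kubernetes API server itself: on a StatefulSet, only the fields named in the error are mutable after creation. Pod labels, however, live under spec.template.metadata.labels, which is part of the mutable 'template' field, so an update scoped to the pod template should be accepted. A minimal sketch of the mutable portion (label key/value taken from this report):

```yaml
# StatefulSet: spec.template is one of the mutable fields, so labels
# placed on the pod template can be updated in place; labels anywhere
# else in the spec (e.g. selector) are rejected with the error above.
spec:
  template:
    metadata:
      labels:
        daniel: custom-test-lab-operator-deployment
```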
Steps to replicate:
The cluster below was deployed using the default operator and the example.yml file. The additionalLabels line was present when the cluster was first deployed, so the label is applied correctly. The snippet below shows the label definition:
..... snip ....
  image:
    name: cockroachdb/cockroach:v21.2.8
  # nodes refers to the number of crdb pods that are created
  # via the statefulset
  nodes: 3
  additionalLabels:
    crdb: test-operator-deployment
Pod details: the label is applied as expected and all pods are ready:
kubectl get pods -n cockroach-operator-system -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
cluster-init-ndl5z 1/1 Running 0 2m44s 10.244.2.11 lab-kub03 <none> <none> controller-uid=0636d470-26da-40b0-a2db-ac4a2c67d847,job-name=cluster-init
cockroach-operator-manager-df946bb6b-gfdzl 1/1 Running 0 3m 10.244.1.223 lab-kub02 <none> <none> app=cockroach-operator,pod-template-hash=df946bb6b
cockroachdb-0 1/1 Running 0 2m45s 10.244.2.13 lab-kub03 <none> <none> app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-84ccf97766,crdb=test-operator-deployment,statefulset.kubernetes.io/pod-name=cockroachdb-0
cockroachdb-1 1/1 Running 0 2m45s 10.244.1.225 lab-kub02 <none> <none> app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-84ccf97766,crdb=test-operator-deployment,statefulset.kubernetes.io/pod-name=cockroachdb-1
cockroachdb-2 1/1 Running 0 2m45s 10.244.2.14 lab-kub03 <none> <none> app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-84ccf97766,crdb=test-operator-deployment,statefulset.kubernetes.io/pod-name=cockroachdb-2
I then destroyed the cluster, commented out the section where the label is defined, and redeployed, so the pods only have the standard default labels:
kubectl get pods -n $ns --show-labels
NAME READY STATUS RESTARTS AGE LABELS
cluster-init-v56ph 1/1 Running 0 64s controller-uid=1ab53c79-18c2-4506-a0d7-10aa9fcaa918,job-name=cluster-init
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 80s app=cockroach-operator,pod-template-hash=df946bb6b
cockroachdb-0 1/1 Running 0 66s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-6478ffd568,statefulset.kubernetes.io/pod-name=cockroachdb-0
cockroachdb-1 1/1 Running 0 66s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-6478ffd568,statefulset.kubernetes.io/pod-name=cockroachdb-1
cockroachdb-2 1/1 Running 0 66s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-6478ffd568,statefulset.kubernetes.io/pod-name=cockroachdb-2
enterprise-license-qcwc2 0/1 Completed 0 64s controller-uid=9c705366-b3e9-4c46-a595-b38fad521565,job-name=enterprise-license
As you can see above, all pods are running and the default labels were applied as expected.
I then modified the CrdbCluster definition in example.yml to include a new label; a snippet of the relevant section is below:
  image:
    name: cockroachdb/cockroach:v22.1.0
  # nodes refers to the number of crdb pods that are created
  # via the statefulset
  maxUnavailable: 1
  minAvailable: 2
  nodes: 3
  additionalLabels:
    daniel: custom-test-lab-operator-deployment
I then applied the example.yml file:
kubectl apply -f example.yml
crdbcluster.crdb.cockroachlabs.com/cockroachdb configured
The pods are still running at this point:
kubectl get pods -n $ns -w
NAME READY STATUS RESTARTS AGE
cluster-init-v56ph 1/1 Running 0 4m12s
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 4m28s
cockroachdb-0 1/1 Running 0 4m14s
cockroachdb-1 1/1 Running 0 4m14s
cockroachdb-2 1/1 Running 0 4m14s
Now I do a rolling restart:
kubectl rollout restart statefulset cockroachdb -n $ns
statefulset.apps/cockroachdb restarted
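As background, `kubectl rollout restart` does not delete pods directly; it patches a timestamp annotation into the pod template, which is a legal StatefulSet update and triggers a rolling replacement. Roughly what it applies (timestamp illustrative):

```yaml
# kubectl rollout restart patches spec.template, which is mutable,
# so the StatefulSet controller starts replacing pods one by one.
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2022-06-07T12:00:00Z"  # illustrative timestamp
```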
and monitored the status. The third pod restarts, gets stuck as shown below, and the rollout does not proceed:
kubectl get pods -n $ns -w
NAME READY STATUS RESTARTS AGE
cluster-init-v56ph 1/1 Running 0 4m12s
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 4m28s
cockroachdb-0 1/1 Running 0 4m14s
cockroachdb-1 1/1 Running 0 4m14s
cockroachdb-2 1/1 Running 0 4m14s
cockroachdb-2 1/1 Terminating 0 5m55s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Pending 0 0s
cockroachdb-2 0/1 Pending 0 0s
cockroachdb-2 0/1 ContainerCreating 0 1s
cockroachdb-2 0/1 Running 0 15s
Note that the new label has not been applied and the pod status shows it is stuck:
kubectl get pods -n $ns --show-labels
NAME READY STATUS RESTARTS AGE LABELS
cluster-init-v56ph 1/1 Running 0 9m41s controller-uid=1ab53c79-18c2-4506-a0d7-10aa9fcaa918,job-name=cluster-init
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 9m57s app=cockroach-operator,pod-template-hash=df946bb6b
cockroachdb-0 1/1 Running 0 9m43s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-6478ffd568,statefulset.kubernetes.io/pod-name=cockroachdb-0
cockroachdb-1 1/1 Running 0 9m43s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-6478ffd568,statefulset.kubernetes.io/pod-name=cockroachdb-1
cockroachdb-2 0/1 Running 0 3m32s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-b87596cd9,statefulset.kubernetes.io/pod-name=cockroachdb-2
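One way to confirm the rollout is wedged (assuming access to the cluster; the jsonpath fields below are standard StatefulSet status fields): while the operator's spec update keeps being rejected, the StatefulSet's currentRevision and updateRevision stay split and the controller-revision-hash on the pods never converges. The operator logs should also show the Forbidden error quoted above.

```shell
# Hypothetical check: during a blocked rollout the two revisions differ
# and do not converge.
kubectl get statefulset cockroachdb -n $ns \
  -o jsonpath='{.status.currentRevision}{"\n"}{.status.updateRevision}{"\n"}'

# The operator's logs should contain the Forbidden error:
kubectl logs deploy/cockroach-operator-manager -n $ns | grep -i forbidden
```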
I then commented out the lines in example.yml that configure the label; snippet below:
  image:
    name: cockroachdb/cockroach:v22.1.0
  # nodes refers to the number of crdb pods that are created
  # via the statefulset
  maxUnavailable: 1
  minAvailable: 2
  nodes: 3
  # additionalLabels:
  #   daniel: custom-test-lab-operator-deployment
I then applied the changed file:
kubectl apply -f example.yml
crdbcluster.crdb.cockroachlabs.com/cockroachdb configured
A few seconds after the changed file was applied, the pods proceeded to restart successfully. Full output below:
kubectl get pods -n $ns -w
NAME READY STATUS RESTARTS AGE
cluster-init-v56ph 1/1 Running 0 4m12s
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 4m28s
cockroachdb-0 1/1 Running 0 4m14s
cockroachdb-1 1/1 Running 0 4m14s
cockroachdb-2 1/1 Running 0 4m14s
cockroachdb-2 1/1 Terminating 0 5m55s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Terminating 0 6m11s
cockroachdb-2 0/1 Pending 0 0s
cockroachdb-2 0/1 Pending 0 0s
cockroachdb-2 0/1 ContainerCreating 0 1s
cockroachdb-2 0/1 Running 0 15s
cockroachdb-2 1/1 Running 0 6m26s
cockroachdb-1 1/1 Terminating 0 12m
cockroachdb-1 0/1 Terminating 0 12m
cockroachdb-1 0/1 Terminating 0 12m
cockroachdb-1 0/1 Terminating 0 12m
cockroachdb-1 0/1 Pending 0 0s
cockroachdb-1 0/1 Pending 0 0s
cockroachdb-1 0/1 ContainerCreating 0 0s
cockroachdb-1 0/1 Running 0 3s
cockroachdb-1 1/1 Running 0 45s
cockroachdb-0 1/1 Terminating 0 13m
cockroachdb-0 0/1 Terminating 0 13m
cockroachdb-0 0/1 Terminating 0 13m
cockroachdb-0 0/1 Terminating 0 13m
cockroachdb-0 0/1 Pending 0 0s
cockroachdb-0 0/1 Pending 0 0s
cockroachdb-0 0/1 ContainerCreating 0 0s
cockroachdb-0 0/1 Running 0 3s
cockroachdb-0 1/1 Running 0 30s
And the final output: no custom label is applied, since the line with the new label has been commented out, but all pods are running:
kubectl get pods -n $ns --show-labels
NAME READY STATUS RESTARTS AGE LABELS
cluster-init-v56ph 1/1 Running 0 14m controller-uid=1ab53c79-18c2-4506-a0d7-10aa9fcaa918,job-name=cluster-init
cockroach-operator-manager-df946bb6b-vvwhd 1/1 Running 0 15m app=cockroach-operator,pod-template-hash=df946bb6b
cockroachdb-0 1/1 Running 0 68s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-b87596cd9,statefulset.kubernetes.io/pod-name=cockroachdb-0
cockroachdb-1 1/1 Running 0 2m6s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-b87596cd9,statefulset.kubernetes.io/pod-name=cockroachdb-1
cockroachdb-2 1/1 Running 0 8m49s app.kubernetes.io/component=database,app.kubernetes.io/instance=cockroachdb,app.kubernetes.io/name=cockroachdb,controller-revision-hash=cockroachdb-b87596cd9,statefulset.kubernetes.io/pod-name=cockroachdb-2