Design and Implement a Cluster Scale Up/Down Mechanism
We need to design a mechanism for scaling a cluster up and down.
When a user modifies `spec.replicas`, the cluster should scale to the requested number of replicas. Currently we are using a StatefulSet, but we understand that we might have to move away from it in favor of a custom pod controller.
Scaling up should work out of the box, but scaling down might be more complex due to several considerations:
- The need to check the quorum state
- The process of removing a replica from the cluster
- Whether we should disallow scaling down etcd altogether
We're open to suggestions on how to address these challenges and implement an efficient and reliable scaling mechanism.
Another case to consider is a user manually recreating a replica (by deleting the pod and the PVC). In such cases we need to verify within the cluster that the old replica is no longer a member.
Cluster rescaling proposal
The etcd operator should be able to scale the cluster up and down and react to pod or PVC deletion.
Scaling procedure
There should be `status.replicas` and `status.instanceNames` fields in order to understand which instances are members, which of them should become members, and which of them should be removed.
We should introduce a new status condition, `Rescaling`, that will be `False` when everything is fine and `True` while the cluster is rescaling or being repaired, for example when a pod (in the emptyDir case) or a PVC is deleted.
The cluster state ConfigMap should build `ETCD_INITIAL_CLUSTER` only from the list in `status.instanceNames`, as those are the healthy cluster members.
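A minimal Go sketch of the proposed status fields and of rendering `ETCD_INITIAL_CLUSTER` from them; the type name, the helper, the headless Service naming scheme, and the peer port are assumptions for illustration, not a final API:

```go
package v1alpha1

import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EtcdClusterStatus is a hypothetical shape for the fields proposed above.
type EtcdClusterStatus struct {
	// Replicas is the number of instances that are confirmed cluster members.
	Replicas int32 `json:"replicas,omitempty"`
	// InstanceNames lists the pods that are healthy cluster members.
	InstanceNames []string `json:"instanceNames,omitempty"`
	// Conditions carries the Rescaling condition, among others.
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// BuildInitialCluster renders ETCD_INITIAL_CLUSTER from status.instanceNames
// only, so pods that are not yet members are never listed.
func BuildInitialCluster(instanceNames []string, headlessSvc, namespace string) string {
	parts := make([]string, 0, len(instanceNames))
	for _, name := range instanceNames {
		// Assumed peer URL layout: <pod>.<headless-svc>.<ns>.svc:2380.
		parts = append(parts, fmt.Sprintf("%s=https://%s.%s.%s.svc:2380", name, name, headlessSvc, namespace))
	}
	return strings.Join(parts, ",")
}
```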
Status reconciliation
The `status.replicas` field should be filled on reconciliation from the current number of ready replicas when the cluster is not in the rescaling state. It is first populated when the cluster is bootstrapped.
The `status.instanceNames` field should likewise be filled on reconciliation from the current ready replicas when the cluster is not in the rescaling state.
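A minimal sketch of this status reconciliation, assuming a controller-runtime reconciler; the `etcdv1alpha1` API package path and the way ready pods are passed in are hypothetical:

```go
package controller

import (
	"context"
	"sort"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	"sigs.k8s.io/controller-runtime/pkg/client"

	etcdv1alpha1 "example.com/etcd-operator/api/v1alpha1" // hypothetical API package
)

type EtcdClusterReconciler struct {
	client.Client
}

// reconcileStatus records the observed membership, but never while a rescale
// is in flight: the Rescaling condition guards status.replicas and
// status.instanceNames from being overwritten mid-operation.
func (r *EtcdClusterReconciler) reconcileStatus(ctx context.Context, cluster *etcdv1alpha1.EtcdCluster, readyPods []corev1.Pod) error {
	if meta.IsStatusConditionTrue(cluster.Status.Conditions, "Rescaling") {
		return nil
	}
	names := make([]string, 0, len(readyPods))
	for _, p := range readyPods {
		names = append(names, p.Name)
	}
	sort.Strings(names)
	cluster.Status.Replicas = int32(len(names))
	cluster.Status.InstanceNames = names
	return r.Status().Update(ctx, cluster)
}
```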
Scaling up
When `spec.replicas > status.replicas`, the operator should scale the cluster up.
The process is as follows:
- Check that the cluster currently has quorum. If not, exit the reconciliation loop and wait until it becomes healthy
- Provided that the cluster has quorum, it is safe to perform scaling up.
- Update the StatefulSet in accordance with `spec.replicas`
- Update the EtcdCluster status condition, set `Rescaling` to `True` with `Reason: ScalingClusterUp`
- Execute `etcdctl member add` for each new member (see the sketch below)
- Wait until the StatefulSet becomes Ready
- Update `status.replicas` and `status.instanceNames` in accordance with `spec.replicas` and the current pod names
- Update the EtcdCluster status condition, set `Rescaling` to `False` with `Reason: ReplicasMatchSpec`
- Then update the cluster state ConfigMap's `ETCD_INITIAL_CLUSTER` according to `status.instanceNames`
In case of errors, the EtcdCluster will remain stuck in the `Rescaling` stage without damaging the cluster.
If the user cancels (by reverting the EtcdCluster's `spec.replicas` to the old value), the StatefulSet's `spec.replicas` should be reverted and the `Rescaling` status condition should be set to `False`.
If the user sets `spec.replicas < status.replicas` to both cancel scaling up and perform scaling down, we should update the StatefulSet's `spec.replicas` to the CR's `status.replicas`, set `Rescaling` to `False`, and schedule a new reconciliation.
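A minimal sketch of the quorum check and the member-add step from the procedure above, using `go.etcd.io/etcd/client/v3` (the programmatic equivalent of `etcdctl member add`); the majority heuristic and endpoint handling are assumptions:

```go
package controller

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// hasQuorum probes every known endpoint and treats the cluster as healthy
// when a majority responds and reports a leader. This is a heuristic
// stand-in for the "cluster currently has quorum" check above.
func hasQuorum(ctx context.Context, cli *clientv3.Client, endpoints []string) bool {
	healthy := 0
	for _, ep := range endpoints {
		probeCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
		resp, err := cli.Status(probeCtx, ep)
		cancel()
		if err == nil && resp.Leader != 0 {
			healthy++
		}
	}
	return healthy > len(endpoints)/2
}

// addMember registers one new member by its peer URL. Unlike etcdctl, the
// clientv3 API takes no member name: the name is claimed when the new pod
// starts with ETCD_INITIAL_CLUSTER_STATE=existing.
func addMember(ctx context.Context, cli *clientv3.Client, peerURL string) error {
	if _, err := cli.MemberAdd(ctx, []string{peerURL}); err != nil {
		return fmt.Errorf("member add %s: %w", peerURL, err)
	}
	return nil
}
```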
Scaling down
When `spec.replicas < status.replicas`, the operator should scale the cluster down.
The process is as follows:
- Check that the cluster currently has quorum. If not, exit the reconciliation loop and wait until it becomes healthy. Scaling down is not possible, as changes to the member list must be agreed upon by the quorum.
- Provided that the cluster has quorum, it is safe to perform scaling down.
- The operation should proceed on a per-pod basis. Only one pod can be safely deleted at a time.
- Calculate the last pod name as `idx = status.replicas - 1` -> `crdName-$(idx)`
- Update the EtcdCluster status condition to `Rescaling`, status `True` and `Reason: ScalingClusterDown`
- Update the StatefulSet's `spec.replicas` to `spec.replicas - 1`
- Connect to the etcd cluster using the `Service` as root and run a command like `etcdctl member remove crdName-$(idx)` (see the sketch below). Running this command with an alive pod should be safe, as the pod should already have been sent the `SIGTERM` signal by the kubelet.
- Update the EtcdCluster status condition, set `Rescaling` to `False` with `Reason: ReplicasMatchSpec`
- If `spec.replicas < status.replicas`, reschedule the reconcile to run this algorithm from the beginning
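A minimal sketch of the removal step with `go.etcd.io/etcd/client/v3` (the programmatic equivalent of `etcdctl member remove`), looking the member up by pod name so that a repeated reconcile is an idempotent no-op; the function name is illustrative:

```go
package controller

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// removeMemberByName removes the member whose name matches the pod being
// scaled away (e.g. crdName-2). If no such member exists, it was already
// removed on a previous attempt and the call is a no-op.
func removeMemberByName(ctx context.Context, cli *clientv3.Client, podName string) error {
	list, err := cli.MemberList(ctx)
	if err != nil {
		return fmt.Errorf("listing members: %w", err)
	}
	for _, m := range list.Members {
		if m.Name == podName {
			if _, err := cli.MemberRemove(ctx, m.ID); err != nil {
				return fmt.Errorf("removing member %s: %w", podName, err)
			}
			return nil
		}
	}
	return nil
}
```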