Handle new nodes scaled up and down with new names

Open allensiho opened this issue 1 year ago • 1 comment

It seems DiskPool requires you to know the node names beforehand:

```
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-node-3
  namespace: mayastor
spec:
  node: node3
  disks: ["/dev/sdc"]
EOF
```

This is problematic if you do not have this information in advance: on Azure Kubernetes Service, nodes are added and removed on demand by the cluster autoscaler and come up with new node names.

I think it would be better to target disk pools by a common node label, if possible.

That way, any new node spun up with this common label would automatically have a DiskPool associated with it.
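
Purely as an illustrative sketch of that idea (the current DiskPool CRD only accepts a fixed node name and has no label-selector field), the spec might look something like this; the `nodeSelector` field and label name are hypothetical:

```
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-storage-nodes
  namespace: mayastor
spec:
  # Hypothetical field, not part of the current DiskPool CRD: match any node
  # carrying this label instead of naming a single node up front.
  nodeSelector:
    openebs.io/storage-node: "true"
  disks: ["/dev/sdc"]
EOF
```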

https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli

allensiho avatar Jan 04 '24 08:01 allensiho

This is an interesting problem, and we might be able to solve it in a few different ways:

  1. A k8s/AKS-specific component could detect pool disks being moved to another node and update the control plane with the new node (see the sketch after this list for what the k8s side of such a component might do for newly added nodes).
  2. The data plane could detect new disks and check with the control plane whether any disks have moved.
  3. The control plane itself could probe node disks and check whether any have moved (this could probably work together with 2).
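
As a rough sketch of the k8s-side piece in option 1 for the simpler new-node case: a per-node pod (for example, part of a DaemonSet with RBAC to create DiskPools) could read its own node name from the downward API and create a pool for that node. The pool name and disk path below are assumptions carried over from the example above, not an existing Mayastor component.

```
# NODE_NAME would be injected into the pod via the downward API
# (env valueFrom: fieldRef: fieldPath: spec.nodeName).
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-${NODE_NAME}
  namespace: mayastor
spec:
  node: ${NODE_NAME}
  disks: ["/dev/sdc"]
EOF
```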

tiagolobocastro avatar Jan 20 '24 19:01 tiagolobocastro