
Volume provisioning with multiple ceph clusters

Open lechugaletal opened this issue 1 year ago • 4 comments


I'm trying to find out whether there is a way to provision PVs from multiple Ceph clusters for a StatefulSet that has a single StorageClass defined in its volumeClaimTemplates section.

Given a StatefulSet whose spec.volumeClaimTemplates.storageClassName points to a single StorageClass (test-sc), and a StorageClass with a hypothetical spec like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
parameters:
  clusters:
    - clusterID: cluster1
      csi.storage.k8s.io/controller-expand-secret-name: cluster1
      csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
      csi.storage.k8s.io/fstype: ext4
      imageFeatures: layering
      pool: rbd
    - clusterID: cluster2
      csi.storage.k8s.io/controller-expand-secret-name: cluster2
      csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
      csi.storage.k8s.io/fstype: ext4
      imageFeatures: layering
      pool: rbd
provisioner: rbd.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate

Would it be possible to provision PVs from cluster1 or cluster2 depending on the zone or region in which the pod is scheduled?
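
For comparison, the StorageClass shape that seems to be supported today takes a single clusterID per class, so each cluster would need its own StorageClass, roughly like this (a sketch with placeholder names, mirroring the parameters above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc-cluster1
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: cluster1
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/controller-expand-secret-name: cluster1
  csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
reclaimPolicy: Retain
volumeBindingMode: Immediate

With a StatefulSet, though, volumeClaimTemplates can only name one of these classes, which is the limitation this question is about.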

Thank you very much for your help!

lechugaletal · Sep 23 '24 10:09

We had the same requirement in https://github.com/ceph/ceph-csi/issues/4611. It is currently not a priority for us, but we always welcome community contributions for it.

Madhu-1 · Sep 23 '24 13:09

I understand 🤔. I've been reading through the Helm values documentation, and this parameter seems related:

  # topologyConstrainedPools: |
  #   [{"poolName":"pool0",
  #     "dataPool":"ec-pool0" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"east"},
  #       {"domainLabel":"zone","value":"zone1"}]},
  #    {"poolName":"pool1",
  #     "dataPool":"ec-pool1" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"east"},
  #       {"domainLabel":"zone","value":"zone2"}]},
  #    {"poolName":"pool2",
  #     "dataPool":"ec-pool2" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"west"},
  #       {"domainLabel":"zone","value":"zone1"}]}
  #   ]

As far as I understand, I can create an RBD pool in Ceph and then use labels on pods/nodes to create some sort of data affinity. Is the domainSegments property configured on the Ceph cluster side, or does it only relate to Kubernetes labels on resources?
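
For illustration, a StorageClass wired up this way might look roughly like the following (a sketch based on the values above; the pool names, region/zone values, and secret names are placeholders, and delayed binding via WaitForFirstConsumer is assumed so the node is known before the volume is provisioned):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-sc              # placeholder name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: cluster1
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/controller-expand-secret-name: cluster1
  csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
  # the pool is selected per matching topology segment instead of a
  # single "pool" parameter
  topologyConstrainedPools: |
    [{"poolName":"pool0",
      "domainSegments":[
        {"domainLabel":"region","value":"east"},
        {"domainLabel":"zone","value":"zone1"}]},
     {"poolName":"pool1",
      "domainSegments":[
        {"domainLabel":"region","value":"east"},
        {"domainLabel":"zone","value":"zone2"}]}]
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

As far as I can tell this still targets a single clusterID, so it steers which pool within one cluster is used based on topology rather than choosing between Ceph clusters.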

Thank you @Madhu-1 for your help!!

lechugaletal · Sep 24 '24 07:09

https://rook.github.io/docs/rook/v1.14/CRDs/Cluster/external-cluster/topology-for-external-mode/#ceph-cluster contains the same documentation and gives a better idea of this feature.
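
To tie this back to the question above: the domain segments are matched against Kubernetes node labels that the ceph-csi nodeplugin is configured to read (via its --domainlabels option), rather than against something configured on the Ceph cluster itself; on the Ceph side the pools just need to exist and, typically, be constrained to the matching failure domains through CRUSH rules. As a rough sketch, assuming the well-known topology label keys and a placeholder node name, a node for the first segment above would carry labels like:

apiVersion: v1
kind: Node
metadata:
  name: worker-1                 # placeholder node name
  labels:
    # assumed label keys; they must match the labels listed in the
    # nodeplugin's --domainlabels setting and correspond to the
    # "region"/"zone" domainLabel entries used in the StorageClass
    topology.kubernetes.io/region: east
    topology.kubernetes.io/zone: zone1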

Madhu-1 · Sep 24 '24 11:09

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] · Oct 24 '24 21:10

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

github-actions[bot] · Nov 01 '24 21:11