
Disk and Diskless nodes setup

Open AndreiPaulau opened this issue 4 years ago • 3 comments

Hello,

I've read a bunch of docs and am a bit confused. May I kindly ask for your help with the following questions? I have a cluster of 5 worker nodes. 3 of the 5 nodes (the worker node count is constantly growing) have a 1 TB SSD which I'm going to use for dynamic provisioning. What would be the proper settings to use only some of these nodes for storage provisioning, while the remaining nodes stay diskless (consumers only)? From my understanding, tolerations and affinity on the satellites would pin them to the nodes with the 1 TB disk, but then they wouldn't run on the nodes that are meant to consume storage over the network (diskless) - am I right?

The current setup (without affinity & tolerations) shows the following:

╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node        ┊ Driver   ┊ PoolName                  ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ calc2       ┊ DISKLESS ┊                           ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ k8s-worker0 ┊ DISKLESS ┊                           ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ k8s-worker1 ┊ DISKLESS ┊                           ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ k8s-worker2 ┊ DISKLESS ┊                           ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ k8s-worker3 ┊ DISKLESS ┊                           ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ lvm-thin             ┊ calc2       ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊        0 KiB ┊         0 KiB ┊ True         ┊ Error ┊            ┊
┊ lvm-thin             ┊ k8s-worker0 ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊    99.79 GiB ┊     99.80 GiB ┊ True         ┊ Ok    ┊            ┊
┊ lvm-thin             ┊ k8s-worker1 ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊    99.79 GiB ┊     99.80 GiB ┊ True         ┊ Ok    ┊            ┊
┊ lvm-thin             ┊ k8s-worker2 ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊    99.80 GiB ┊     99.80 GiB ┊ True         ┊ Ok    ┊            ┊
┊ lvm-thin             ┊ k8s-worker3 ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊        0 KiB ┊         0 KiB ┊ True         ┊ Error ┊            ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

I don't really like errors in the output =), so I removed the errored storage pools, but they were discovered automatically. What would be the right config options to achieve my goal?

Could you please give a simple example of the StorageClass (SC) property disklessOnRemaining: "true"?

And one more thing, what does autoPlace: "2" mean:

  1. One PV + 2 replicas
  2. One PV + 1 replica

Many thanks!

AndreiPaulau · Oct 08 '21 14:10

Hello!

What you want to do is have 2 different linstorsatelliteset resources (the helm chart creates one of those for you, as that is the most common setup), each with the right affinity set.

For your diskful nodes:

apiVersion: piraeus.linbit.com/v1
kind: LinstorSatelliteSet
metadata:
  name: piraeus-ns-diskful
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - k8s-worker0
            - k8s-worker1
            ...
  storagePools:
    lvmThinPools:
    - name: lvm-thin
      ....

and for your diskless nodes:

apiVersion: piraeus.linbit.com/v1
kind: LinstorSatelliteSet
metadata:
  name: piraeus-ns-diskless
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - calc2
            ...
  storagePools: {}

And one more thing, what does autoPlace: "2" mean:

That means for 1 PV you have the data stored on 2 diskful nodes, so that if one diskful node fails you can still access the data via the second replica.
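
To tie this back to the StorageClass questions above: a minimal sketch combining autoPlace with disklessOnRemaining could look like the following. The class name is a placeholder and the parameters are the linstor-csi StorageClass parameters as I understand them, so treat this as an illustration rather than a verified config:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-lvm-thin-r2
provisioner: linstor.csi.linbit.com
parameters:
  # store the data on 2 diskful nodes (1 PV = 2 replicas)
  autoPlace: "2"
  # after placing the replicas, create diskless resources on all remaining nodes
  disklessOnRemaining: "true"
  storagePool: lvm-thin
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

As far as I understand, disklessOnRemaining is optional for this use case: even without it, a diskless attachment is created on demand when a pod is scheduled on a node that holds no replica.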

WanzenBug · Oct 11 '21 08:10

Many thanks, looks pretty logical and straightforward =) Will try to accomplish such a setup!

AndreiPaulau · Oct 11 '21 08:10

Just in case it would be handy for someone else: create a default values.yml for the initial setup.

csi-snapshotter:
  enabled: false
stork:
  enabled: false
haController:
  enabled: false
operator:
  satelliteSet:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - worker0
              - worker1
              - worker2
    kernelModuleInjectionImage: quay.io/piraeusdatastore/drbd9-focal
    storagePools:
      lvmThinPools:
      - name: lvm-thin
        thinVolume: thinpool
        volumeGroup: ""
        devicePaths:
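
With that values.yml in place, the install is the usual Helm invocation. A rough sketch only; the repository alias and URL are assumptions, so adjust them to wherever you pulled the chart from:

# add the chart repository (URL assumed)
helm repo add piraeus-charts https://piraeus.io/helm-charts/
# install the operator into the piraeus namespace with the custom values
helm install piraeus-op piraeus-charts/piraeus --namespace piraeus --create-namespace -f values.yml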

Then, after a successful setup, create an additional 'LinstorSatelliteSet' resource using 'NotIn':

apiVersion: piraeus.linbit.com/v1
kind: LinstorSatelliteSet
metadata:
  annotations:
    meta.helm.sh/release-name: piraeus-op
    meta.helm.sh/release-namespace: piraeus
  finalizers:
    - finalizer.linstor-node.linbit.com
  generation: 9
  labels:
    app.kubernetes.io/managed-by: Helm
  name: piraeus-ns-diskless
  namespace: piraeus

spec:
  additionalEnv: null
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - worker0
            - worker1
            - worker2
  automaticStorageType: None
  controllerEndpoint: 'http://piraeus-op-cs.piraeus.svc:3370'
  drbdRepoCred: ''
  imagePullPolicy: IfNotPresent
  kernelModuleInjectionImage: quay.io/piraeusdatastore/drbd9-focal
  kernelModuleInjectionMode: Compile
  kernelModuleInjectionResources: {}
  linstorHttpsClientSecret: ''
  monitoringImage: 'quay.io/piraeusdatastore/drbd-reactor:v0.3.0'
  priorityClassName: ''
  resources: {}
  satelliteImage: 'quay.io/piraeusdatastore/piraeus-server:v1.13.0'
  serviceAccountName: ''
  sslSecret: null
  storagePools: {}

Just wondering, is there a possibility to pass 'name' within values.yml for kind: LinstorSatelliteSet? Again, many thanks, feel free to close the issue =)

AndreiPaulau · Nov 08 '21 15:11

This now works much more naturally: https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/reference/linstorsatelliteconfiguration.md
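
For completeness, a minimal v2 sketch along the lines of that reference; the resource name, label key and pool names below are placeholders, not taken from this thread. Only satellites on nodes matching the selector get the LVM thin pool, and every other satellite simply stays diskless:

apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: diskful-storage-pools
spec:
  # only nodes carrying this (hypothetical) label receive the storage pool
  nodeSelector:
    example.com/diskful: "true"
  storagePools:
    - name: lvm-thin
      lvmThinPool:
        volumeGroup: linstor_thinpool
        thinPool: thinpool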

WanzenBug · Feb 28 '24 08:02