local-path-provisioner
Does local-path support multiple StorageClasses for disks with different performance?
I have an HDD and an SSD installed in the host server. I want to create two Kubernetes StorageClasses, one for the HDD and one for the SSD, to cover different performance requirements. Does local-path support this, and how do I set it up?
I would say that you can create two or more installations of the Local Path Provisioner, each configured with different disks and with a different name for each StorageClass. That way you can reference a specific disk type through its StorageClass.
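A workload would then select the disk type purely by `storageClassName`. For example, a claim against the SSD installation might look like this (the class name `local-path-ssd` is illustrative, assuming one installation per disk type):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-ssd
spec:
  accessModes:
    - ReadWriteOnce
  # Illustrative class name served by the SSD installation
  storageClassName: local-path-ssd
  resources:
    requests:
      storage: 10Gi
```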
Thanks for your reply. I couldn't find any documentation about what you described, for example how to define a provisioner myself. I'll do more research and post an update if I make any progress.
This is what I ended up with for the SSD disk, as a second installation in its own namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage-ssd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage-ssd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role-ssd
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "persistentvolumes", "pods"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind-ssd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role-ssd
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage-ssd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner-ssd
  namespace: local-path-storage-ssd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.22
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # Unique provisioner name for this installation; the
            # StorageClass below must reference the same value.
            - name: PROVISIONER_NAME
              value: rancher.io/local-path-ssd
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ssd
provisioner: rancher.io/local-path-ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
# /raid1 is the SSD-backed mount on node "k3s" in this setup
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage-ssd
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "k3s",
          "paths": ["/raid1"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
```
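Since the class uses WaitForFirstConsumer, nothing is provisioned until a pod actually mounts a claim. A minimal smoke test might look like the sketch below (the pod name and the `data-ssd` claim from the earlier example are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      # Hypothetical claim with storageClassName: local-path-ssd
      persistentVolumeClaim:
        claimName: data-ssd
```

Once the pod is scheduled, a PV backed by a directory under /raid1 on the node should be created and bound to the claim.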
The key per-installation setting is the PROVISIONER_NAME environment variable in the Deployment, which has to match the `provisioner` field of the corresponding StorageClass:

```yaml
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: PROVISIONER_NAME
    value: rancher.io/local-path-ssd
```
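For the HDD disk, the whole set of manifests is duplicated again under different names. For instance, the matching StorageClass could look like this (`local-path-hdd` and `rancher.io/local-path-hdd` are illustrative names, not from the thread):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-hdd                  # hypothetical name for the HDD installation
provisioner: rancher.io/local-path-hdd # must match that deployment's PROVISIONER_NAME
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```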
I'm currently hitting the same thing; tbh this feels semantically wrong. The point of a storage class is exactly to define classes of storage (e.g. fast = SSD, slower = HDD) that may differ in parameters/options.
I believe the provisioner should take the paths from the StorageClass definition rather than from the centralized ConfigMap. That way, paths/sharedFileSystemPath could be defined per StorageClass and you wouldn't need multiple provisioner pods running inside the same cluster (in my case it's five installations of the provisioner, which seems rather cumbersome...).
A backward-compatible way to implement this would be to let the user define these fields in both places and have the StorageClass act as an override of the default values found in the ConfigMap.
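To make that concrete, here is a sketch of what such a StorageClass could look like. The `parameters` shown are hypothetical and not supported by the provisioner; this only illustrates the proposed override:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ssd
provisioner: rancher.io/local-path       # one shared provisioner instance
volumeBindingMode: WaitForFirstConsumer
parameters:
  # Hypothetical field, not implemented: a per-class path that would
  # override the defaults otherwise taken from the shared ConfigMap.
  nodePath: /mnt/ssd
```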