linstor-csi

failed to provision volume with StorageClass

Open sd2020bs opened this issue 3 years ago • 4 comments

I have LINSTOR with 2 nodes with lvm-thin:

```
linstor storage-pool list
┊ data ┊ linstor-ctrl   ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 19.96 GiB ┊ 19.96 GiB ┊ True ┊ Ok ┊
┊ data ┊ linstor-satel1 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 19.96 GiB ┊ 19.96 GiB ┊ True ┊ Ok ┊
```

and a resource group named linstor-basic-storage. I deployed the LINSTOR CSI driver in my k8s cluster, created an sc and a pvc, and got an error from `kubectl describe pvc my-first-linstor-volume`:

```
Warning  ProvisioningFailed  13m (x16 over 51m)  linstor.csi.linbit.com_linstor-csi-controller-0_b3c98016-4e61-4650-ab86-2c1167f5d047  failed to provision volume with StorageClass "linstor-basic-storage": error generating accessibility requirements: no available topology found
```

My sc:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-basic-storage
provisioner: linstor.csi.linbit.com
parameters:
  placementCount: "2"
  storagePool: "data"
  resourceGroup: "linstor-basic-storage"
```

My pvc:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-first-linstor-volume
spec:
  storageClassName: linstor-basic-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

How can I solve this problem?

sd2020bs avatar May 04 '21 13:05 sd2020bs

I have found the reason: the LINSTOR CSI node pods are down.

```
linstor-csi-node-bvtkh   1/2   CrashLoopBackOff   35   15h
linstor-csi-node-rp9bf   1/2   CrashLoopBackOff   31   15h
linstor-csi-node-zrdh7   1/2   CrashLoopBackOff   37   15h
```

Output of `kubectl logs linstor-csi-node-zrdh7 -n kube-system csi-node-driver-registrar`:

```
time="2021-05-05T03:18:04Z" level=debug msg="curl -X 'GET' -H 'Accept: application/json' 'http://192.168.1.150:3370/v1/nodes/node1/storage-pools'"
time="2021-05-05T03:18:04Z" level=debug msg="Status code not within 200 to 400, but 404 (Not Found)\n"
time="2021-05-05T03:18:04Z" level=error msg="method failed" func="github.com/sirupsen/logrus.(*Entry).Error" file="/go/pkg/mod/github.com/sirupsen/[email protected]/entry.go:297" error="failed to retrieve node topology: failed to get storage pools for node: 404 Not Found" linstorCSIComponent=driver method=/csi.v1.Node/NodeGetInfo nodeID=node1 provisioner=linstor.csi.linbit.com req= resp="" version=v0.12.1
time="2021-05-05T03:18:17Z" level=debug msg="method called" func="github.com/sirupsen/logrus.(*Entry).Debug" file="/go/pkg/mod/github.com/sirupsen/[email protected]/entry.go:277" linstorCSIComponent=driver method=/csi.v1.Identity/GetPluginInfo nodeID=node1 provisioner=linstor.csi.linbit.com req= resp="name:\"linstor.csi.linbit.com\" vendor_version:\"v0.12.1\"" version=v0.12.1
```

But I don't understand why the variable $(KUBE_NODE_NAME) is used for looking up storage pools. My storage pools are on different VMs.
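For context on where that variable comes from: in typical linstor-csi deployments the csi-node container gets `KUBE_NODE_NAME` via the Kubernetes Downward API, and the plugin then asks the LINSTOR controller for that node's storage pools (the `GET /v1/nodes/node1/storage-pools` call in the logs above). The snippet below is a sketch of that standard pattern, not the exact manifest from any particular deployment; if the LINSTOR node name doesn't match the Kubernetes node name (or the node isn't registered in LINSTOR at all), that lookup returns the 404 seen in the logs.

```yaml
# Sketch of the usual Downward API wiring for a CSI node plugin container;
# the surrounding DaemonSet spec is omitted. The injected value is the
# Kubernetes node name, which the plugin uses as the LINSTOR node name.
env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```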

sd2020bs avatar May 05 '21 03:05 sd2020bs

Hello!

You don't need a storage pool, but you do need at least a satellite running on the host. This can run directly on the host or as a DaemonSet (like the one configured by the piraeus-operator). The LINSTOR satellite is responsible for creating the DRBD diskless resource that the node attaches to; the CSI node pod just prepares/mounts the device created by the satellite.
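To illustrate the DaemonSet option mentioned above, here is a heavily simplified sketch of running a satellite on every node. The image name, namespace, and security settings are illustrative assumptions; in practice the piraeus-operator generates a much more complete manifest (host paths for DRBD/LVM, configuration for reaching the controller, etc.), so treat this only as a shape of the idea.

```yaml
# Hedged sketch: one LINSTOR satellite per Kubernetes node via a DaemonSet.
# Image and settings are placeholders, not a tested deployment.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: linstor-satellite
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: linstor-satellite
  template:
    metadata:
      labels:
        app: linstor-satellite
    spec:
      hostNetwork: true              # controller must be able to reach the satellite
      containers:
        - name: satellite
          image: quay.io/piraeusdatastore/piraeus-server:latest  # illustrative
          securityContext:
            privileged: true         # needed to manage DRBD/LVM on the host
```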

WanzenBug avatar May 05 '21 08:05 WanzenBug

Does this storage work like OpenEBS with block devices? I mean, must every k8s node have the same disks? Can this storage work in a k8s cluster with external pools (on non-clustered VMs)?

sd2020bs avatar May 05 '21 17:05 sd2020bs

> I mean, every k8-node must have the same disks

No, you can have completely different disks and storage pools on every node.

> Can this storage work in k8-cluster with external pools

Yes. The only requirement is that your Kubernetes cluster nodes are part of the overall LINSTOR cluster (i.e. they have to have a satellite configured). The Kubernetes nodes don't need a disk configured.

WanzenBug avatar May 06 '21 11:05 WanzenBug