piraeus-operator
Question about Helm install
Hi there,
Small question: I see that automaticStorageType is deprecated. Do you have an example of how to set up the storage pools and volumes with Helm, using command-line options?
Thanks!
Take a look at: https://github.com/piraeusdatastore/piraeus-operator/blob/master/doc/storage.md
I don't think you can add these "complex" objects directly via the command line, so you have to use a yaml file plus the -f option.
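(Strictly speaking, Helm's --set does accept list indices, so something like the untested sketch below could work in principle, but it gets unreadable quickly compared to a values file. The pool, volume group, and device names here are placeholders:)

helm upgrade --install piraeus ./charts/piraeus \
  --set 'operator.satelliteSet.storagePools.lvmThinPools[0].name=my-pool' \
  --set 'operator.satelliteSet.storagePools.lvmThinPools[0].thinVolume=thinpool' \
  --set 'operator.satelliteSet.storagePools.lvmThinPools[0].volumeGroup=my-vg' \
  --set 'operator.satelliteSet.storagePools.lvmThinPools[0].devicePaths[0]=/dev/sdX'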
Ok, thanks. I am using this now; can you please confirm it or give some advice on how to set it up properly? I have stork disabled and 3 replicas; this is for testing later. Of course I can move all the command-line options into the yaml file as well, but that's also for later.
helm upgrade --install piraeus ./charts/piraeus -f storagepool.yaml \
--namespace piraeus \
--create-namespace \
--set csi.controllerReplicas=3 \
--set etcd.replicas=3 \
--set operator.replicas=3 \
--set operator.controller.replicas=3 \
--set stork.enabled=false \
--set stork.replicas=3 \
--set operator.satelliteSet.kernelModuleInjectionImage=quay.io/piraeusdatastore/drbd9-focal
The file storagepool.yaml contains (sdc is my empty disk):
operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
        - name: lvm-thin
          thinVolume: thinpool
          volumeGroup: linstor_thinpool
          devicePaths:
            - /dev/sbc
Thanks!
You have a typo: /dev/sbc should be /dev/sdc.
Thanks! However, I don't have a clue what's wrong; I think I forgot a step:
+----------------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|==============================================================================================================================================|
| DfltDisklessStorPool | k8s-worker1 | DISKLESS | | | | False | Ok | |
| DfltDisklessStorPool | k8s-worker2 | DISKLESS | | | | False | Ok | |
| DfltDisklessStorPool | k8s-worker3 | DISKLESS | | | | False | Ok | |
| lvm-thin | k8s-worker1 | LVM_THIN | linstor_thinpool/thinpool | 0 KiB | 0 KiB | True | Error | |
| lvm-thin | k8s-worker2 | LVM_THIN | linstor_thinpool/thinpool | 0 KiB | 0 KiB | True | Error | |
| lvm-thin | k8s-worker3 | LVM_THIN | linstor_thinpool/thinpool | 0 KiB | 0 KiB | True | Error | |
+----------------------------------------------------------------------------------------------------------------------------------------------+
ERROR:
Description:
Node: 'k8s-worker1', storage pool: 'lvm-thin' - Failed to query free space from storage pool
Cause:
Volume group 'linstor_thinpool' not found
ERROR:
Description:
Node: 'k8s-worker2', storage pool: 'lvm-thin' - Failed to query free space from storage pool
Cause:
Volume group 'linstor_thinpool' not found
ERROR:
Description:
Node: 'k8s-worker3', storage pool: 'lvm-thin' - Failed to query free space from storage pool
Cause:
Volume group 'linstor_thinpool' not found
❯ kubectl exec deployment/piraeus-cs-controller -n piraeus -- linstor physical-storage list
+--------------------------------------------------+
| Size | Rotational | Nodes |
|==================================================|
| 26843545600 | True | k8s-worker3[/dev/sdc] |
| | | k8s-worker1[/dev/sdc] |
| | | k8s-worker2[/dev/sdc] |
+--------------------------------------------------+
Can you help me?
I have also tested it without a name for the volumeGroup and with "" as the name of the volumeGroup, but then piraeus won't start:
operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
        - name: lvm-thin
          thinVolume: thinpool
          volumeGroup: ""
          devicePaths:
            - /dev/sdc
and:

operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
        - name: lvm-thin
          thinVolume: thinpool
          volumeGroup:
          devicePaths:
            - /dev/sdc
Take a look at the error reports (linstor error-report list / linstor error-report show <id>); maybe the device wasn't detected as empty, in which case you would need to clean it up manually and create the thin pool yourself.
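For example, a manual cleanup and thin pool creation could look like the sketch below (assuming /dev/sdc and the names from your values file; wipefs -a is destructive, so double-check the device first):

linstor error-report list
linstor error-report show <id>

# then, on each worker node:
wipefs -a /dev/sdc
pvcreate /dev/sdc
vgcreate linstor_thinpool /dev/sdc
lvcreate -l 100%FREE -T linstor_thinpool/thinpool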
When I set a name for the volumeGroup, all pods are running, but there is a problem with the disks. When I test as above (without a volume group name), not all pods are running, i.e. the piraeus-cs deployment is not visible at all. The csi-node pods then fail:
2021-09-30T17:43:31.579577563+02:00 I0930 15:43:31.579470 1 main.go:164] Version: v2.3.0
2021-09-30T17:43:31.579692804+02:00 I0930 15:43:31.579637 1 main.go:165] Running node-driver-registrar in mode=registration
2021-09-30T17:43:31.580890336+02:00 I0930 15:43:31.580840 1 main.go:189] Attempting to open a gRPC connection with: "/csi/csi.sock"
2021-09-30T17:43:31.580973838+02:00 I0930 15:43:31.580940 1 connection.go:154] Connecting to unix:///csi/csi.sock
2021-09-30T17:43:31.581498596+02:00 I0930 15:43:31.581466 1 main.go:196] Calling CSI driver to discover driver name
2021-09-30T17:43:31.581559331+02:00 I0930 15:43:31.581532 1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
2021-09-30T17:43:31.582685522+02:00 I0930 15:43:31.581581 1 connection.go:184] GRPC request: {}
2021-09-30T17:43:31.583642707+02:00 I0930 15:43:31.583574 1 connection.go:186] GRPC response: {"name":"linstor.csi.linbit.com"}
2021-09-30T17:43:31.583699538+02:00 I0930 15:43:31.583673 1 connection.go:187] GRPC error: <nil>
2021-09-30T17:43:31.583741774+02:00 I0930 15:43:31.583719 1 main.go:206] CSI driver name: "linstor.csi.linbit.com"
2021-09-30T17:43:31.583858340+02:00 I0930 15:43:31.583831 1 node_register.go:52] Starting Registration Server at: /registration/linstor.csi.linbit.com-reg.sock
2021-09-30T17:43:31.584086261+02:00 I0930 15:43:31.584057 1 node_register.go:61] Registration Server started at: /registration/linstor.csi.linbit.com-reg.sock
2021-09-30T17:43:31.584252321+02:00 I0930 15:43:31.584223 1 node_register.go:91] Skipping healthz server because HTTP endpoint is set to: ""
2021-09-30T17:43:33.225805930+02:00 I0930 15:43:33.225566 1 main.go:100] Received GetInfo call: &InfoRequest{}
2021-09-30T17:43:33.226714140+02:00 I0930 15:43:33.226566 1 main.go:107] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/linstor.csi.linbit.com/registration"
2021-09-30T17:43:33.285809629+02:00 I0930 15:43:33.285713 1 main.go:118] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unknown desc = failed to retrieve node topology: failed to get storage pools for node: Get "http://piraeus-cs.piraeus.svc:3370/v1/nodes/k8s-worker2/storage-pools": dial tcp: lookup piraeus-cs.piraeus.svc on 10.96.0.10:53: no such host,}
2021-09-30T17:43:33.285880649+02:00 E0930 15:43:33.285853 1 main.go:120] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unknown desc = failed to retrieve node topology: failed to get storage pools for node: Get "http://piraeus-cs.piraeus.svc:3370/v1/nodes/k8s-worker2/storage-pools": dial tcp: lookup piraeus-cs.piraeus.svc on 10.96.0.10:53: no such host, restarting registration container.
As far as I understand, I do need that deployment to run the linstor CLI.
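(A quick way to confirm whether the piraeus-cs Deployment and its Service exist at all, using the names from the log above:)

kubectl get deployment,service -n piraeus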
Docs have been completely rewritten: https://github.com/piraeusdatastore/piraeus-operator/tree/v2/docs
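For reference, under the v2 operator a storage pool like the one above is declared through a LinstorSatelliteConfiguration resource rather than Helm values. A rough sketch; check the linked docs for the exact schema:

apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-pools
spec:
  storagePools:
    - name: lvm-thin
      lvmThinPool:
        volumeGroup: linstor_thinpool
        thinPool: thinpool
      source:
        hostDevices:
          - /dev/sdc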