
Consider supporting volume groups in the controller

leanenkaa opened this issue on Apr 08 '20 · 15 comments

Hi, as I can see here https://github.com/metal-stack/csi-lvm/blob/master/cmd/controller/controller.go#L229, the provisioner pod always tries to create the logical volume in the csi-lvm volume group. I think it makes sense to add an ENV variable for the volume group to the controller, giving it the ability to create logical volumes in that volume group.

leanenkaa · Apr 08 '20 07:04
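A minimal sketch of the proposed change, assuming a hypothetical CSI_LVM_VOLUME_GROUP variable name (both the variable name and the helper are placeholders for illustration, not necessarily what the project adopted):

```go
// Minimal sketch: make the volume group configurable via the
// environment, falling back to the previously hardcoded "csi-lvm".
// CSI_LVM_VOLUME_GROUP is a placeholder name for this example.
package main

import (
	"fmt"
	"os"
)

func volumeGroupName() string {
	if vg := os.Getenv("CSI_LVM_VOLUME_GROUP"); vg != "" {
		return vg
	}
	return "csi-lvm" // previous hardcoded default
}

func main() {
	fmt.Printf("creating logical volumes in volume group %q\n", volumeGroupName())
}
```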

Sounds like a valid improvement; however, what should be done if this variable gets changed over time? If you would like to propose an implementation via pull request, we will review it.

majst01 · Apr 08 '20 07:04

My opinion is that we can run one controller per volume group if we need this, by providing an ENV variable with the volume group name; the same can be done for the provisioner. If this sounds like a good idea, I can try to propose a PR.

leanenkaa · Apr 08 '20 08:04
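To sketch the one-controller-per-volume-group idea: each instance would need its own identity, for example in the provisioner name it registers under, so that instances do not compete for the same leader-election lock. The naming scheme below is illustrative, not the project's actual one:

```go
// Illustrative: derive a per-volume-group provisioner identity so
// that one controller per VG can run alongside the others. The
// name format is an assumption made for this sketch.
package main

import "fmt"

func provisionerName(vg string) string {
	return fmt.Sprintf("metal-stack.io/csi-lvm-%s", vg)
}

func main() {
	fmt.Println(provisionerName("test-vg")) // metal-stack.io/csi-lvm-test-vg
}
```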

OK, but then you have to somehow guarantee that modifying the volume group of an existing controller is not possible.

majst01 · Apr 08 '20 08:04

If the user changes this variable, there will be a new controller that manages another volume group. Yes, I agree that this can break something, but if I create a provisioner and controller for some volume group and specify device patterns for that volume group, I know what I am doing. If for some reason the controller and provisioner pods are destroyed or recreated, they should verify that the volume group specified in the env variable exists on the node. Maybe I don't fully understand what you mean and which issues we may face if we add volume group support; I'd appreciate it if we could discuss this.

leanenkaa · Apr 08 '20 08:04
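The startup check described above could look roughly like the following sketch, which shells out to the standard LVM2 vgs tool; this is not the project's actual code:

```go
// Sketch of the startup check described above: verify that the
// volume group named in the env variable actually exists on the
// node before provisioning anything. "vgs <name>" is a standard
// LVM2 command and exits non-zero for an unknown volume group.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func volumeGroupExists(vg string) error {
	if out, err := exec.Command("vgs", vg).CombinedOutput(); err != nil {
		return fmt.Errorf("volume group %q not found: %v: %s", vg, err, out)
	}
	return nil
}

func main() {
	if err := volumeGroupExists("test-vg"); err != nil {
		log.Fatal(err)
	}
}
```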

The problem is that you must store somewhere, I don't know where, which controller is initially responsible for which volume group, because changing it would mean all PVs are lost.

majst01 · Apr 08 '20 08:04
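One conceivable way to persist that association, sketched with an invented annotation key: stamp each PV with the volume group it was created in, and have the controller skip PVs whose stamp does not match its configuration:

```go
// Sketch: record the volume group on each PV at creation time and
// ignore PVs created for a different VG. The annotation key is
// made up for this example.
package main

import "fmt"

const vgAnnotation = "csi-lvm.metal-stack.io/volume-group" // illustrative key

func ownsPV(annotations map[string]string, configuredVG string) bool {
	return annotations[vgAnnotation] == configuredVG
}

func main() {
	pv := map[string]string{vgAnnotation: "test-vg"}
	fmt.Println(ownsPV(pv, "test-vg"))     // true
	fmt.Println(ownsPV(pv, "new-test-vg")) // false
}
```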

So I've added the env variable with the LVM volume group for the controller and tried to reproduce the issue you mentioned above. What I did:

  1. Created a controller deployment with the test-vg variable for the volume group.
  2. Created some PVCs, then some pods that attach those PVCs and create some files in the mount paths in the pods.
  3. Tried to run a new controller together with my old one with the same test-vg variable: nothing happened, the new controller started and waited for leader election; then I killed my old controller deployment, the new one became the leader, nothing changed on the PVC, PV, or logical volume side, data was in place.
  4. Turned back to my initial setup and edited my controller deployment with kubectl edit deployment, changing the volume group variable from test-vg to new-test-vg: the old controller pod was terminated, a new controller pod started, nothing changed on the PVC, PV, volume group, or logical volume side, data was in place.
  5. Tried to completely delete my controller deployments: nothing changed.

Please suggest which other cases I can test.

leanenkaa · Apr 08 '20 12:04

Fine, but what happens if you have one controller with the ENV variable set to test-vg, create some PVCs and PVs, then change the deployment of the controller and set the ENV variable to csi-lvm, for example? In this case the existing controller will not find the existing PVs and will therefore create new ones, and the data which existed on the former volume group will be lost.

majst01 · Apr 08 '20 14:04

I did the same in step 2; only the env variable name was different, and the new controller did nothing with the existing PVC and PV. I will try with the env variable csi-lvm, but I think nothing will change.

leanenkaa · Apr 08 '20 14:04

You are talking about a different scenario than I did. With only one controller, changing the volume group between deployments will destroy your PVs and PVCs by creating new PVCs/PVs on the new volume group, and therefore the existing data is lost. I'm not talking about 2 or more controllers.

majst01 · Apr 08 '20 15:04

That is what I did in step 4: "Turned back to my initial setup and edited my controller deployment with kubectl edit deployment, changing the volume group variable from test-vg to new-test-vg: the old controller pod was terminated, a new controller pod started, nothing changed on the PVC, PV, volume group, or logical volume side, data was in place." Is that what you mean? The initial setup is one controller and one provisioner, like in the example.

leanenkaa · Apr 08 '20 15:04

Impossible, but if you don't mind sending a PR to make the VG configurable for the controller as well, I will check.

majst01 · Apr 08 '20 15:04

Please check my PR https://github.com/metal-stack/csi-lvm/pull/30; maybe I did something wrong when I tested it. Let's discuss and find where I was wrong.

leanenkaa · Apr 08 '20 15:04

@majst01 did you have time to check?

leanenkaa · Apr 09 '20 09:04

Already merged.

majst01 · Apr 09 '20 09:04

I mean the case where we edit the VG name in the controller deployment and the controller should remove the PV. Did you check this? If yes, can you share the results, please?

leanenkaa · Apr 09 '20 09:04