volumeID is not enough as a parameter for functions such as `DeleteVolume` and `NodeUnpublishVolume`
For example, when deleting a volume we want to know where the storage is (the storage's URL, plus the username and password to log in to it), so I think additional parameters need to be passed, the same as for CreateVolume and NodePublishVolume. We don't want the volumeID or snapshotID to be assembled from other keys (for example URL, username, password, and so on).
@saad-ali https://github.com/container-storage-interface/spec/issues/387
The URL can be provided as a driver configuration parameter when the driver is deployed, and the user/password info can be retrieved from the secret.
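For illustration, a minimal sketch of that split, assuming a Go driver built on the CSI spec bindings (the struct, field, and secret key names here are illustrative, not from any specific driver): the backend URL is fixed per deployment, while credentials arrive per call via the CSI secrets field.

```go
package driver

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// ControllerServer is a sketch; StorageURL is set once from a deploy-time
// flag or environment variable for this driver instance.
type ControllerServer struct {
	StorageURL string
}

func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	// Secrets are populated by the CO from the Secret referenced by the
	// StorageClass' provisioner-secret parameters; key names are assumptions.
	secrets := req.GetSecrets()
	user, pass := secrets["username"], secrets["password"]
	if user == "" || pass == "" {
		return nil, fmt.Errorf("missing backend credentials in CSI secrets")
	}
	// Log in to the backend at cs.StorageURL with user/pass and create the volume there.
	// ...
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{VolumeId: req.GetName()},
	}, nil
}
```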
If we deploy the driver with a URL, then one Kubernetes cluster only supports one storage cluster. @venkatsc
To deploy multiple instances, deploy each with its own endpoint URL and provisioner name, and use the appropriate provisioner name in the created StorageClass. You could refer to our Quobyte driver: we deployed two instances of the driver, each operating with its own storage cluster.
As you can encode the volumeID, you can make use of a ConfigMap with some unique key to identify the URL; we are currently doing the same in ceph-csi to fetch the monitor configuration.
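A minimal sketch of that ConfigMap lookup, loosely modeled on the ceph-csi clusterID idea (the file layout, ID format, and function names are assumptions): the volume ID carries only a short cluster key, and the actual endpoint is resolved at call time from a config file mounted from the ConfigMap.

```go
package driver

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// clusters.json (mounted from a ConfigMap), e.g.:
//   {"cluster-a": {"url": "https://san-a.example.com"}, "cluster-b": {"url": "https://san-b.example.com"}}
type clusterInfo struct {
	URL string `json:"url"`
}

// resolveBackend maps a volume ID of the assumed form "<clusterKey>:<backendVolumeName>"
// to the management URL of the backend it lives on.
func resolveBackend(volumeID, configPath string) (string, error) {
	parts := strings.SplitN(volumeID, ":", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("malformed volume ID %q", volumeID)
	}
	raw, err := os.ReadFile(configPath)
	if err != nil {
		return "", err
	}
	clusters := map[string]clusterInfo{}
	if err := json.Unmarshal(raw, &clusters); err != nil {
		return "", err
	}
	info, ok := clusters[parts[0]]
	if !ok {
		return "", fmt.Errorf("unknown cluster key %q", parts[0])
	}
	return info.URL, nil
}
```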
> To deploy multiple instances, deploy each with its own endpoint URL and provisioner name, and use the appropriate provisioner name in the created StorageClass. You could refer to our Quobyte driver: we deployed two instances of the driver, each operating with its own storage cluster.
If there are many storage clusters, you will have to deploy many drivers in the k8s cluster; I don't think this is good.
> As you can encode the volumeID, you can make use of a ConfigMap with some unique key to identify the URL; we are currently doing the same in ceph-csi to fetch the monitor configuration.
But it's difficult for static provisioning; we would have to create ConfigMaps for the static provisioner.
> If there are many storage clusters, you will have to deploy many drivers in the k8s cluster; I don't think this is good.
As explained here, both approaches have their own advantages and disadvantages. For now, the only (clean) alternative is to deploy a driver per storage cluster.
> As you can encode the volumeID, you can make use of a ConfigMap with some unique key to identify the URL...
Yes, that is another alternative. But appending the URL to the volumeID can make things a bit murky if the URL changes, whereas changing a ConfigMap does automatically update already provisioned volumes. Either way, this leaves configuration data scattered all over the place. Passing StorageClass parameters (at least some prefix-based keys, if not all parameters) during all the CSI calls to the driver would make things a little more flexible.
Is there a better way to solve this problem? When deleting the pod we need to log in to the backend storage to do more things, but all we implement is the `NodeUnpublishVolume` function, and there are no more params with it. @saad-ali @venkatsc @Madhu-1
@zhucan I also filed a similar feature request here: https://github.com/container-storage-interface/spec/issues/507; we need the PVC namespace in `DeleteVolume`. If there were a `VolumeContext` in `DeleteVolumeRequest`, we could inject `volume.Spec.ClaimRef.Namespace` in the external-provisioner by providing a flag `--extra-delete-metadata`, similar to `--extra-create-metadata`: https://github.com/kubernetes-csi/external-provisioner#recommended-optional-arguments.
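For context, here is a sketch of what `--extra-create-metadata` already gives the driver on the create path (the `MetaServer` type, in-memory map, and ID scheme are assumptions; a real driver would persist this durably): the external-provisioner injects the PVC/PV names into `CreateVolume` parameters, and the driver has to remember them itself, keyed by volume ID, because `DeleteVolume` only carries the ID.

```go
package driver

import (
	"context"
	"sync"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// MetaServer keeps per-volume metadata keyed by volume ID. A real driver
// would store this on the backend or on durable storage, not in memory.
type MetaServer struct {
	mu   sync.Mutex
	meta map[string]map[string]string
}

func (s *MetaServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	params := req.GetParameters()
	volID := "vol-" + req.GetName() // illustrative ID scheme

	s.mu.Lock()
	if s.meta == nil {
		s.meta = map[string]map[string]string{}
	}
	s.meta[volID] = map[string]string{
		// Keys injected by external-provisioner when --extra-create-metadata is set.
		"pvc/namespace": params["csi.storage.k8s.io/pvc/namespace"],
		"pvc/name":      params["csi.storage.k8s.io/pvc/name"],
		"pv/name":       params["csi.storage.k8s.io/pv/name"],
	}
	s.mu.Unlock()

	return &csi.CreateVolumeResponse{Volume: &csi.Volume{VolumeId: volID}}, nil
}

func (s *MetaServer) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	s.mu.Lock()
	meta := s.meta[req.GetVolumeId()] // only the volume ID is available here today
	delete(s.meta, req.GetVolumeId())
	s.mu.Unlock()

	_ = meta // e.g. use meta["pvc/namespace"] for backend cleanup or accounting
	return &csi.DeleteVolumeResponse{}, nil
}
```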
`VolumeContext` should also be in `DeleteVolumeRequest`. Currently we have to encode all the info into the volumeID, so the volumeID gets longer and longer; that's a really ugly design.
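To make the complaint concrete, a sketch of the "encode everything into the volume ID" workaround (the field names and encoding are illustrative, not from any particular driver): backend coordinates are packed into the ID at create time and unpacked again during delete/unpublish, and the ID grows with every extra field it has to carry.

```go
package driver

import (
	"encoding/base64"
	"encoding/json"
)

// volumeHandle holds everything the driver needs to find the volume again
// later; every new field makes the encoded ID longer.
type volumeHandle struct {
	ClusterKey string `json:"c"`           // short key resolved to a URL elsewhere
	Pool       string `json:"p,omitempty"` // backend pool / tenant, if any
	Name       string `json:"n"`           // name of the volume on the backend
}

func encodeVolumeID(h volumeHandle) (string, error) {
	raw, err := json.Marshal(h)
	if err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(raw), nil
}

func decodeVolumeID(id string) (volumeHandle, error) {
	var h volumeHandle
	raw, err := base64.RawURLEncoding.DecodeString(id)
	if err != nil {
		return h, err
	}
	err = json.Unmarshal(raw, &h)
	return h, err
}
```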
i'm trying to understand the use case being presented here. it sounds like there's a desire for a single k8s csi driver installation to integrate w/ multiple storage backends (of the same vendor), and at the same time the desire to avoid coding a URL into the volume ID because .. it feels messy.
IMO, if you're managing vols across multiple backends via a single driver instance then you need the backend coordinates somewhere .. so why not the vol ID? if there is concern about coordinates changing (e.g. URLs changing) then it sounds like a layer of indirection is missing.
it almost sounds like you want some kind of gateway intermediate that implements CSI, which proxies access to actual storage backends, and gives you more flexibility w/ respect to stable volume coordinates.
csi proxies have been discussed for years now. dell/emc ships at least one (and it's OSS). i'm sure that others do as well. it seems like a solvable problem outside of changes made to the specification.
WDYT?
@jdef the point is that in the current CSI spec there is already a `VolumeContext` in `CreateVolume` and `NodePublishVolume`, so why not provide the same `VolumeContext` in `DeleteVolume` and `NodeUnpublishVolume`? Without `VolumeContext` in `DeleteVolume`, we have to encode every field from `CreateVolumeRequest.VolumeContext` into the volumeID.
The problem is that when the CO is calling `DeleteVolume` or `NodeUnpublishVolume`, it might not have the `VolumeContext`. It might only have the volume ID. There are scenarios where the CO has experienced a crash or some kind of data loss and it's just trying to clean up. Under these circumstances we require the SP to remember everything it needs to also be able to perform the node cleanup or the volume deletion.
I know this is inconvenient for CSI driver authors, but it was a tradeoff that was consciously made, and it's too late to change it now. I've written a few CSI drivers, and I typically deal with this challenge by making sure the volume ID is truly a unique key that I can use to get the information I need from some other data store -- either a data store on the storage device itself, or a local persistent file on the node, keyed by volume ID. You are free to pick a volume ID that is meaningful to your driver, as long as it doesn't exceed the maximum length limits.
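One way to apply that advice on the node side, as a sketch (the state directory and field names are assumptions): persist whatever `NodeUnpublishVolume` will need into a small per-volume file at publish time, keyed by the volume ID, so that cleanup only ever requires the ID.

```go
package driver

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// nodeVolumeState captures what cleanup needs; fields are illustrative.
type nodeVolumeState struct {
	StorageURL string `json:"storageURL"`
	TargetPath string `json:"targetPath"`
}

const stateDir = "/var/lib/example-csi/volumes" // assumed host path owned by the driver

// saveState is called during NodeStageVolume/NodePublishVolume, while the
// VolumeContext is still available in the request.
func saveState(volumeID string, s nodeVolumeState) error {
	if err := os.MkdirAll(stateDir, 0o700); err != nil {
		return err
	}
	raw, err := json.Marshal(s)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(stateDir, volumeID+".json"), raw, 0o600)
}

// loadState is called from NodeUnpublishVolume, which only carries the volume ID.
func loadState(volumeID string) (nodeVolumeState, error) {
	var s nodeVolumeState
	raw, err := os.ReadFile(filepath.Join(stateDir, volumeID+".json"))
	if err != nil {
		return s, err
	}
	err = json.Unmarshal(raw, &s)
	return s, err
}
```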