cloud-provider-openstack
[manila-csi-plugin] Does the Manila CSI plugin support Access Mode ReadWriteOnce?
Hi, in our environment we create a Manila PV with access mode ReadWriteOnce as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: nfs.manila.csi.openstack.org
    volume.kubernetes.io/provisioner-deletion-secret-name: manila-csi-plugin
    volume.kubernetes.io/provisioner-deletion-secret-namespace: kube-system
  finalizers:
  - kubernetes.io/pv-protection
  name: XXXXX
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
and we don't get any error message. But per our testing, ReadWriteOnce doesn't really take effect: while Pod A on Node M is still in Terminating status, another Pod B on Node N can already mount the volume and run.
We checked the code: VolumeCapability_AccessMode_SINGLE_NODE_WRITER (ReadWriteOnce) is mentioned, but there seems to be no real logic enforcing it in the manila-csi-plugin driver.
We also noticed a document from OpenShift stating that OpenStack Manila actually only supports ReadWriteOncePod and ReadWriteMany.
Could you please help check whether the Manila CSI plugin supports access mode ReadWriteOnce? If not, is there any other way we can achieve the same behavior? Thanks!
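For reference, a minimal sketch of what the ReadWriteOncePod route mentioned above could look like, assuming that access mode is available in your Kubernetes version and actually enforced end-to-end in your cluster; the claim name and StorageClass below are placeholders, not names from this issue:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-manila-claim           # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOncePod                   # at most one Pod in the cluster may use this claim
  storageClassName: example-manila-nfs # hypothetical StorageClass name
  resources:
    requests:
      storage: 20Gi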
There are some issues opened for this before, but I am guessing ReadWriteOnce is not supported. I am not sure whether the OpenShift downstream made improvements directly instead of upstream.
@gman0 @zetaab correct me if I am wrong
We are not using Manila, so it is difficult to answer this issue. But in general Manila is ReadWriteMany.
Hello @syy6, indeed the driver never supported this, and the mode validation is very relaxed for historical reasons.
We've never really seen convincing enough use cases to get this implemented, but in any case it shouldn't be too hard if you only need attachment tracking within the cluster. See https://github.com/kubernetes-csi/external-attacher
Note that the Manila service itself does not support attachments at the moment, and there would be nothing stopping other clients from accessing the share.
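To make the external-attacher suggestion a bit more concrete, here is a rough sketch of the cluster-side piece, assuming the driver implemented ControllerPublishVolume/ControllerUnpublishVolume and the external-attacher sidecar were added to the controller plugin; on its own this manifest does not add any enforcement:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.manila.csi.openstack.org
spec:
  attachRequired: true  # makes Kubernetes create VolumeAttachment objects
                        # for the external-attacher sidecar to act on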
Thanks @jichenjc @zetaab @gman0, we have a tricky issue here. We are using a ReplicaSet (with replicas = 1) for our service, with podAntiAffinity on the ReplicaSet; combined with the ReadWriteOnce access mode, we hoped to prevent simultaneous access to the PVC. But we see that multiple mounts of the Manila share can happen when an old Pod of the ReplicaSet is still in Terminating status while the new Pod has already started. In that case, the old and new Pods would write to the same file at the same time and the file might get corrupted.
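One mitigation that does not rely on driver-side enforcement is to let the workload controller serialize the Pods: a Deployment with the Recreate strategy terminates the old Pod before creating the new one during a rollout (this is not available on a bare ReplicaSet, and it does not cover Pods replaced outside of a rollout, e.g. after eviction). A minimal sketch with placeholder names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service              # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate                   # old Pod is fully terminated before the new one starts
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: app
        image: registry.example.com/example-service:latest  # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: example-manila-claim                   # hypothetical claim name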
What I'm curious about is whether we need to implement the changes listed here?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.