
[Feature] Ability to provision RWM volumes from VMFS datastores

Open BenB196 opened this issue 3 years ago • 18 comments

FEATURE REQUEST:

/kind feature

I would like the ability to provision RWM (ReadWriteMany) volumes from VMFS datastores.

For a vCenter configuration where vCenter uses an external SAN to back VMFS datastores (with no vSAN configured): currently, only RWO volumes are supported when using a VMFS datastore. It would be nice to also be able to provision RWM volumes from a VMFS datastore.
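A minimal sketch of what is being requested, assuming the existing `csi.vsphere.vmware.com` provisioner and its `datastoreurl` StorageClass parameter; the object names and the datastore URL are placeholders, and with today's driver the claim below only works with `ReadWriteOnce`:

```yaml
# Hypothetical illustration of the request: a StorageClass pinned to a
# VMFS datastore, plus a claim asking for ReadWriteMany. The datastore
# URL and object names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vmfs-rwx
provisioner: csi.vsphere.vmware.com
parameters:
  datastoreurl: "ds:///vmfs/volumes/<vmfs-datastore-uuid>/"   # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany              # the capability this issue asks for on VMFS
  storageClassName: vmfs-rwx
  resources:
    requests:
      storage: 10Gi
```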

BenB196 avatar Jul 22 '22 20:07 BenB196

If this is possible at all I would also like to have this feature.

thomasrootdv avatar Aug 05 '22 11:08 thomasrootdv

I mean, in theory VMFS supports multi-writer access, doesn't it? As long as the filesystem you put on the block volume supports concurrent access, of course. So I was already wondering why this seems to be such a big deal. Maybe someone from the CSI devs can elaborate on the subject?
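Purely as an illustration of what that could look like from the Kubernetes side (not something the driver is documented to support on VMFS today): a raw block claim requesting ReadWriteMany, where coordinating concurrent access, e.g. via a cluster-aware filesystem in the guests, stays with the workload. The names reuse the hypothetical StorageClass from the earlier sketch:

```yaml
# Illustrative only: a raw block ReadWriteMany claim. Whether the driver
# could back this with a multi-writer vmdk on a VMFS datastore is exactly
# what this issue asks about; volumeMode: Block leaves filesystem
# coordination to the consuming pods.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-block           # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  storageClassName: vmfs-rwx   # hypothetical StorageClass from the sketch above
  resources:
    requests:
      storage: 20Gi
```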

omniproc avatar Sep 06 '22 07:09 omniproc

@BenB196 @thomasrootdv - What's the use case you're targeting? Do you want volumes backed by vmdk to be attached to multiple node VMs?

gohilankit avatar Nov 03 '22 21:11 gohilankit

The use case on my end: we have web applications that we want to be highly available, but they require a shared backend storage layer so that all instances can read and write the same files. While something like NFS would generally work here, we'd prefer not to have to stand up a separate way of supplying NFS, and also not to have to use vSAN if possible.
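As a minimal sketch of that pattern, assuming an RWX claim such as the hypothetical `shared-data` from the earlier sketch already existed (the Deployment name and image are stand-ins):

```yaml
# Minimal sketch of the use case: two web server replicas mounting the
# same ReadWriteMany claim. This only works once such a claim can be
# provisioned; all names here are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:stable           # stand-in for the web application
          volumeMounts:
            - name: shared-content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: shared-content
          persistentVolumeClaim:
            claimName: shared-data      # the RWX claim shared by both replicas
```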

BenB196 avatar Nov 04 '22 00:11 BenB196

> The use case on my end: we have web applications that we want to be highly available, but they require a shared backend storage layer so that all instances can read and write the same files. While something like NFS would generally work here, we'd prefer not to have to stand up a separate way of supplying NFS, and also not to have to use vSAN if possible.

Same here!

thomasrootdv avatar Nov 04 '22 05:11 thomasrootdv

> > The use case on my end: we have web applications that we want to be highly available, but they require a shared backend storage layer so that all instances can read and write the same files. While something like NFS would generally work here, we'd prefer not to have to stand up a separate way of supplying NFS, and also not to have to use vSAN if possible.
>
> Same here!

My org has a similar request and vSAN has been ruled out as an option by our VMware team.

tgelter avatar Nov 09 '22 18:11 tgelter

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 07 '23 18:02 k8s-triage-robot

/remove-lifecycle stale

tgelter avatar Feb 08 '23 00:02 tgelter

I also wonder why this is not an option with CSI for vSphere. CSI supports multiple writers as long as the filesystem is capable. Is it not up to the user and/or guest to determine whether or not a shared filesystem is in use? Why prevent someone from doing this?

akutz avatar Feb 18 '23 18:02 akutz

/remove-lifecycle stale

divyenpatel avatar Apr 07 '23 19:04 divyenpatel

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 06 '23 19:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 19 '24 13:01 k8s-triage-robot

/remove-lifecycle rotten

jbartyze avatar Jan 19 '24 18:01 jbartyze

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 18 '24 18:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 18 '24 18:05 k8s-triage-robot