
Add `subDir` parameter support in `VolumeSnapshotClass`

bells17 opened this issue 9 months ago • 3 comments

Is your feature request related to a problem? / Why is this needed

Currently, the subDir parameter is supported in StorageClass, allowing PVCs to be created inside a specific folder. This is particularly useful for organizing and separating data by directories in the NFS CSI driver.
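
For context, the existing StorageClass support looks roughly like this (a minimal sketch; the class name, server, and share values are placeholders, and the parameter names and template variables are assumed to match the csi-driver-nfs StorageClass documentation):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-subdir               # placeholder name
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com     # placeholder NFS server
  share: /export                     # placeholder export path
  subDir: "${pvc.metadata.name}"     # each PVC's data lands in its own folder under the share
reclaimPolicy: Delete
volumeBindingMode: Immediate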

However, when creating snapshots via VolumeSnapshot, there is no option to specify subDir. As a result, snapshots are always stored in the root directory of the NFS server, with names like snapshot-<snapshotUID>. https://github.com/kubernetes-csi/external-snapshotter/blob/v8.2.0/pkg/sidecar-controller/csi_handler.go#L87
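
For example, a snapshot is requested like this today (names are placeholders); there is no place in this request to specify a subDir for the resulting snapshot-<snapshotUID> archive:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot                            # placeholder name
spec:
  volumeSnapshotClassName: csi-nfs-snapclass   # placeholder class name
  source:
    persistentVolumeClaimName: my-nfs-pvc      # placeholder PVC name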

Describe the solution you'd like in detail

VolumeSnapshotClass should support the subDir parameter, just like StorageClass. This would allow snapshots to be stored in the same subdirectory as their corresponding PVCs when using the NFS CSI driver.

Because VolumeSnapshotClass already has a parameters field, this change would only require modifications to the CSI driver; it would not impact any Kubernetes core components or external-snapshotter behavior.

Proposed Parameter Handling:

The subDir parameter should be processed within the CSI driver to determine the directory where snapshots are stored. Below is an example configuration:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshot-class
driver: nfs.csi.k8s.io
deletionPolicy: Delete
parameters:
  subDir: "snapshots"
  • If subDir is specified, the snapshot will be stored inside the given subdirectory.
  • If not specified, the default behavior (storing in the root directory) remains unchanged.

Describe alternatives you've considered

  • Manually moving snapshots to the desired subdirectory after creation. This is not ideal because it defeats automation and complicates access control. Additionally, VolumeSnapshotContent.status.snapshotHandle would also need to be updated manually, which is not supported by Kubernetes and can lead to inconsistencies.

Additional context

This feature would be particularly beneficial for this CSI driver, where organizing snapshots into specific directories improves manageability. Allowing users to specify subDir in VolumeSnapshotClass also ensures consistent snapshot placement alongside the corresponding PVC data.

Additionally, modifying subDir does not affect existing snapshots. Since parameters in VolumeSnapshotClass cannot be modified after creation, users must create a new VolumeSnapshotClass with the desired subDir. Moreover, the snapshotHandle stored in VolumeSnapshotContent.status includes folder information, ensuring that restoring snapshots remains unaffected by subDir changes.
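
For illustration, restoring remains a standard dataSource reference (placeholder names below); the restore source is resolved from the stored snapshotHandle, not from the current subDir value of the class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                 # placeholder name
spec:
  storageClassName: nfs-csi-subdir   # placeholder StorageClass from the example above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: my-snapshot                # placeholder VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io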

Expected Directory Structure Change

Current Behavior:

/nfs-root/snapshot-<snapshotUID>

Proposed Behavior with subDir: snapshots:

/nfs-root/snapshots/snapshot-<snapshotUID>

bells17 · Feb 22 '25 17:02

/assign

bells17 · Mar 03 '25 07:03

The VolumeSnapshotClass already supports a share parameter (/ by default), which means you could create a dedicated subdirectory under the root directory to store snapshots. However, the original snapshot volume ID does not contain subDir, so supporting subDir in VolumeSnapshotClass could be a breaking change: we would also need to add subDir to the snapshot volume ID.
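
Such a workaround might look roughly like this (a sketch only; the name and share value are placeholders, and the exact parameter set accepted by the snapshot class should be checked against the driver docs):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-nfs-snapclass-subdir     # placeholder name
driver: nfs.csi.k8s.io
deletionPolicy: Delete
parameters:
  share: /export/snapshots           # placeholder: a dedicated subdirectory created beforehand on the export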

andyzhangx · Mar 17 '25 09:03

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jun 15 '25 09:06

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Jul 15 '25 10:07

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Aug 14 '25 11:08

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Aug 14 '25 11:08

The VolumeSnapshotClass already supports a share parameter (/ by default), which means you could create a dedicated subdirectory under the root directory to store snapshots. However, the original snapshot volume ID does not contain subDir, so supporting subDir in VolumeSnapshotClass could be a breaking change: we would also need to add subDir to the snapshot volume ID.

Hi,

I've tried this option, but unfortunately I get this error:

Message: Failed to check and update snapshot content: failed to take snapshot of the volume nfs.server.fqdn#share#subdir/pvc-nfs-dynamic/pvc-db037c17-42c0-4a33-af6e-fdc8e9cdb753#pvc-db037c17-42c0-4a33-af6e-fdc8e9cdb753#: "rpc error: code = Internal desc = failed to create archive for snapshot: walking source directory: lstat /tmp/pvc-db037c17-42c0-4a33-af6e-fdc8e9cdb753/subdir/pvc-nfs-dynamic/pvc-db037c17-42c0-4a33-af6e-fdc8e9cdb753: no such file or directory"

There is no issue without the share option.

CSI driver version: 4.11.0, Kubernetes version: 1.32.5

rudy-l · Sep 03 '25 14:09