[Feature request] Add support for Volume Snapshots
Is your feature request related to a problem?/Why is this needed
It would be nice to have support for the VolumeSnapshots API in this project, for small offices where snapshots could be automated with tools like SnapScheduler or similar.
Describe the solution you'd like in detail
Simply add support for the API.
Describe alternatives you've considered
n/a
Additional context
Small offices usually use TrueNAS as a NAS solution, but democratic-csi requires installing packages in the nodes' OS. That is not a bad approach, but not the best one. This project can do the same and even works on a Kind cluster for testing, so it is more user friendly for end users.
hi @achetronic do you have more details about VolumeSnapshots API? Thanks.
Hello @andyzhangx, I am talking about the API that is used to take snapshots through the CSI driver. It needs a couple of things in Kubernetes: first, CSI support for the csi-snapshotter, and second, deploying the external-snapshotter.
These snapshots can be automated to be triggered periodically with some small operators, and PVCs can be created from them.
What do you think?
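For what it's worth, the driver-side half of that first point is mostly about advertising the snapshot capability so the csi-snapshotter sidecar knows it may call CreateSnapshot/DeleteSnapshot. A rough sketch of what that could look like with the CSI spec Go types (illustrative only, not the actual csi-driver-nfs code; the function name is made up):

```go
package nfs

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerCapabilities sketches the capability list a CSI controller could
// return from ControllerGetCapabilities. Advertising CREATE_DELETE_SNAPSHOT is
// what lets the external csi-snapshotter sidecar call CreateSnapshot and
// DeleteSnapshot on the driver.
func controllerCapabilities() []*csi.ControllerServiceCapability {
	types := []csi.ControllerServiceCapability_RPC_Type{
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
		// Hypothetical addition for VolumeSnapshot support:
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
	}
	caps := make([]*csi.ControllerServiceCapability, 0, len(types))
	for _, t := range types {
		caps = append(caps, &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{Type: t},
			},
		})
	}
	return caps
}
```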
@achetronic that's not enough, do you know how to take a snapshot on an NFS server? This requires support for the underlying snapshot functionality.
One implementation is using the tar command to take a "snapshot" of the current volume directory; you could refer to
https://github.com/kubernetes-csi/csi-driver-host-path/blob/f5fd42e78f3884ed6b780d23c1c43798a0d29d35/pkg/hostpath/controllerserver.go#L555
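Roughly, that tar-based approach boils down to something like the sketch below. It is only an illustration, assuming the controller has the NFS share mounted locally; baseDir, volumeID and snapshotID are made-up parameters, not the driver's real ones:

```go
package nfs

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// createTarSnapshot illustrates the hostpath-style idea: archive the source
// volume's subdirectory on the (locally mounted) NFS share into a .tar.gz
// stored under a hidden .snapshots subpath. Sketch only, not the real driver code.
func createTarSnapshot(baseDir, volumeID, snapshotID string) (string, error) {
	srcDir := filepath.Join(baseDir, volumeID)
	dstFile := filepath.Join(baseDir, ".snapshots", snapshotID+".tar.gz")

	if err := os.MkdirAll(filepath.Dir(dstFile), 0o777); err != nil {
		return "", err
	}
	// tar -czf <archive> -C <srcDir> .  -> archive the volume directory contents.
	out, err := exec.Command("tar", "-czf", dstFile, "-C", srcDir, ".").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("snapshot of volume %s failed: %v: %s", volumeID, err, out)
	}
	return dstFile, nil
}
```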
I have been testing different solutions for this, such as the one provided by the democratic-csi project for TrueNAS, and I don't like the possible security risks involved in their approach.
I reviewed your idea more carefully, and I think that having a controller for snapshots, with the snapshots stored on the same NFS server in a different subpath as .tar files, could be a great solution. This would provide the expected behavior for the VolumeSnapshot API and it would be simple. What do you think?
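To make the idea concrete, the restore side (a PVC created with a VolumeSnapshot dataSource) would then just be the reverse: when CreateVolume sees a snapshot content source, it extracts the archive into the new volume's subdirectory. Again only a hedged sketch with assumed names, not the actual implementation:

```go
package nfs

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// restoreFromTarSnapshot sketches the restore path for the same layout as
// above: the snapshot archive is unpacked into the freshly created volume
// directory. Names are assumptions for the example.
func restoreFromTarSnapshot(baseDir, newVolumeID, snapshotID string) error {
	srcFile := filepath.Join(baseDir, ".snapshots", snapshotID+".tar.gz")
	dstDir := filepath.Join(baseDir, newVolumeID)

	if err := os.MkdirAll(dstDir, 0o777); err != nil {
		return err
	}
	// tar -xzf <archive> -C <dstDir>  -> unpack into the new volume directory.
	out, err := exec.Command("tar", "-xzf", srcFile, "-C", dstDir).CombinedOutput()
	if err != nil {
		return fmt.Errorf("restore of snapshot %s failed: %v: %s", snapshotID, err, out)
	}
	return nil
}
```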
Yes please add this functionality. I would really benefit from it.
I also would love to have this possibility. K10 and Trilio are backup solutions that depend on CSI drivers that can take snapshots. At the moment, when using the NFS CSI driver, backups are not possible because the driver does not support snapshots. Is anyone already working on this?
Cheers, Alex
Hello @ahachmann, I have no time to work on this feature, so I proposed it to the maintainers, but I have no idea about its current status.
Could refer to the hostpath CSI driver snapshot implementation: https://github.com/kubernetes-csi/csi-driver-host-path/blob/9be5dd74a7fc2436c4334820156056b74821998e/pkg/hostpath/controllerserver.go#L497-L585 Any volunteers?
Seems it's not that complex an implementation; let me try implementing it, stay tuned.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@andyzhangx: Cancelled? Did you run into trouble you could share?
I think this is because of the lack of developers on the k8s projects. Could we join forces and code it ourselves to share it?
sorry I don't have the bandwidth now, pls feel free to contribute and I could also review the PR, thanks.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looks like the feature request existed earlier in https://github.com/kubernetes-csi/csi-driver-nfs/issues/31, proposing to close this as a duplicate. Might be easier to track this under a single ticket.
I agree :)