AWS Data Lifecycle Manager Support
**Is your feature request related to a problem? Please describe.**
There doesn't seem to be an easy way to do automated backups with dynamically provisioned volumes. Specifically, there is no way to attach a DLM lifecycle policy to the volume, as the volume handle is not known ahead of time.

**Describe the solution you'd like in detail**
Some sort of metadata/spec fields that I can add to my PVC to configure the DLM policy.

**Describe alternatives you've considered**
Jury-rigging something with Kubernetes CronJobs.
/kind feature
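The cron-job alternative mentioned above could be sketched roughly as follows. This is a hypothetical manifest, not a recommendation: the name, image, and schedule are assumptions, and the hard-coded volume ID illustrates exactly the problem described, since the handle of a dynamically provisioned volume isn't known until after provisioning.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ebs-nightly-snapshot
spec:
  schedule: "0 3 * * *"          # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: snapshot
            image: amazon/aws-cli
            args:
            - ec2
            - create-snapshot
            - --volume-id
            - vol-0123456789abcdef0   # placeholder; must be looked up per PV after provisioning
            - --description
            - nightly backup
```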
@SimonBerens You can define a policy using the AWS console or CLI that creates snapshots of volumes that have a specific tag; then all EBS persistent volumes (existing and new) created with that tag in the same account and region will automatically be backed up by the policy, so you won't have to worry about volume IDs ahead of time. We've seen this is a common workflow for many customers.
Would this work for your use case? If not, can you share more insight into your use case? Thanks.
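For anyone following along, such a tag-targeted policy can be created with `aws dlm create-lifecycle-policy`. A minimal `--policy-details` document might look like the sketch below; the tag key/value, schedule, and retention count are illustrative assumptions, not values mandated by the driver:

```json
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [
    {"Key": "dlm_policy", "Value": "prod"}
  ],
  "Schedules": [
    {
      "Name": "DailySnapshots",
      "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
      "RetainRule": {"Count": 7},
      "CopyTags": true
    }
  ]
}
```

Any EBS volume carrying the `dlm_policy=prod` tag in that account and region is then snapshotted daily, regardless of when or how it was provisioned.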
This is a good workflow when defining tags at the StorageClass level, but it requires a dedicated StorageClass per tag to match the DLM policy, and K8s users usually don't have permission to define a StorageClass. So I could see use cases where different users, even from different teams, want different DLM policies with the same StorageClass and want to add their tags at, for example, the PVC level (where they do have permission to make changes) and have those tags propagate down to the PV and the EBS volume.
@ksliu58 Thanks, that should work. Then I should apply the tags via the StorageClass as documented in the Tagging docs, right?
@SimonBerens Yes, you can follow the tagging doc you referenced if you want to associate a DLM policy with all volumes created by a StorageClass in a specific namespace.
For example, if you have 3 DLM policies (one each for prod, beta, and testing) and want to apply the respective DLM policy to all volumes created by a single StorageClass in the appropriate namespaces:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  tagSpecification_1: 'dlm_policy={{ if .PVCNamespace | contains "prod" }}prod{{ else if .PVCNamespace | contains "beta" }}beta{{ else }}testing{{ end }}'
```
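With that StorageClass in place, a claim created in a namespace whose name contains "prod" would get the `dlm_policy=prod` tag on its EBS volume automatically. A sketch of such a claim, with a hypothetical namespace and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: prod-payments   # namespace name contains "prod", so dlm_policy=prod is applied
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
```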
Please do let us know if this solved your use case, or if other aspects of DLM support are needed in the driver.
Yes this solves our case, thank you!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close
Closing this out as solved.
@ConnorJC3: Closing this issue.
In response to this:
/close
Closing this out as solved.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.