cluster-api-provider-cloudstack
Potential data loss when used in combination with CloudStack CSI driver
/kind bug
What steps did you take and what happened:
When CAPC is used in combination with the CloudStack CSI driver, there is a potential for data loss when a CloudStackMachine is destroyed (e.g. in a rollout or scale-down scenario).
CAPC can create an additional data volume when a CloudStackMachine is created, by setting spec.diskOffering.id or spec.diskOffering.name. This is analogous to the behavior of deployVirtualMachine, where a data volume is created when a diskofferingid and size are passed.
These volumes also need to be cleaned up again, so DestroyVMInstance calls listVolumes to get the list of volumes associated with the VM and passes their IDs via SetVolumeids to the DestroyVirtualMachine call.
When the CloudStack CSI driver is used, PVCs are CloudStack volumes attached to the VM, and since no distinction is made as to which volumes get deleted, any PVC still attached at the time the CloudStackMachine is deleted is instantly destroyed and expunged as well, leading to data loss.
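A minimal sketch of that flow (not CAPC's actual code; the destroyVM helper is hypothetical), assuming the cloudstack-go SDK, to show how CSI-provisioned volumes get swept up along with the VM:

```go
package sketch

import (
	"github.com/apache/cloudstack-go/v2/cloudstack"
)

// destroyVM is a hypothetical reduction of what DestroyVMInstance does:
// list every volume on the VM and hand all of their IDs to
// destroyVirtualMachine for expunging.
func destroyVM(cs *cloudstack.CloudStackClient, vmID string) error {
	// List all volumes associated with the VM: the root disk, the data
	// disk created by deployVirtualMachine, and any CSI-attached PVC volumes.
	lp := cs.Volume.NewListVolumesParams()
	lp.SetVirtualmachineid(vmID)
	vols, err := cs.Volume.ListVolumes(lp)
	if err != nil {
		return err
	}

	// No distinction is made here: every attached volume ID is collected,
	// including volumes backing Kubernetes PVCs.
	ids := make([]string, 0, vols.Count)
	for _, v := range vols.Volumes {
		ids = append(ids, v.Id)
	}

	// Destroy the VM and expunge all of the listed volumes with it.
	dp := cs.VirtualMachine.NewDestroyVirtualMachineParams(vmID)
	dp.SetExpunge(true)
	dp.SetVolumeids(ids)
	_, err = cs.VirtualMachine.DestroyVirtualMachine(dp)
	return err
}
```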
What did you expect to happen:
Some way to distinguish the data volumes automatically created during deployVirtualMachine from data disks attached to the VM at a later stage. Tagging could be an option. Right now the only way I have found to tell them apart is that the volume automatically created by deployVirtualMachine is always named DATA-<someID>, as can be seen in CloudStack.
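A hedged sketch of what that name-based distinction could look like (the expungeableVolumeIDs helper and the exact filter are assumptions, not an agreed design): only pass data disks matching the DATA-<someID> naming convention to DestroyVirtualMachine, so CSI-managed volumes survive the VM deletion:

```go
package sketch

import (
	"strings"

	"github.com/apache/cloudstack-go/v2/cloudstack"
)

// expungeableVolumeIDs is a hypothetical filter: of all volumes attached to
// a VM, return only the IDs of data disks that CloudStack created as part of
// deployVirtualMachine, relying on their DATA-<someID> naming convention.
// Anything else (e.g. a CSI-provisioned PVC volume) is skipped, so it is not
// expunged together with the VM.
func expungeableVolumeIDs(vols []*cloudstack.Volume) []string {
	var ids []string
	for _, v := range vols {
		// The ROOT disk is removed with the VM anyway; only data disks
		// matching the auto-created naming pattern are expunged explicitly.
		if v.Type == "DATADISK" && strings.HasPrefix(v.Name, "DATA-") {
			ids = append(ids, v.Id)
		}
	}
	return ids
}
```

In the destroy flow above, dp.SetVolumeids(expungeableVolumeIDs(vols.Volumes)) would then replace the unfiltered collection, leaving PVC-backed volumes to be detached through the CSI driver's normal path. Tagging the auto-created volume at creation time would be a more robust variant of the same idea.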
Anything else you would like to add:
Environment:
- Cluster-api-provider-cloudstack version: latest
- Kubernetes version (use kubectl version): 1.29
- CSI driver version: 0.6.0
- OS (e.g. from /etc/os-release):
Fair to say it’s an unsupported combination, so this is probably not a CAPC bug, but making it work with the CSI driver would be an improvement. cc @Pearl1594 @vishesh92 @weizhouapache
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.