cluster-api-provider-cloudstack

Potential data loss when used in combination with CloudStack CSI driver

Open • hrak opened this issue 1 year ago • 1 comment

/kind bug

What steps did you take and what happened:

When CAPC is used in combination with the CloudStack CSI driver, there is potential for data loss when a CloudStackMachine is destroyed (e.g. in a rollout or scale-down scenario).

CAPC offers the possibility to create an additional data volume upon creation of a CloudStackMachine by passing spec.diskOffering.id or spec.diskOffering.name. This is analogous to the behavior of deployVirtualMachine, where a data volume is created when a diskofferingid and size are passed.
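
For illustration, here is a minimal Go sketch of that deployVirtualMachine behavior using the apache/cloudstack-go SDK. The endpoint, credentials, and all IDs are placeholders and the size is arbitrary; this only mirrors the API pattern described above, not CAPC's actual code.

```go
package main

import (
	"log"

	"github.com/apache/cloudstack-go/v2/cloudstack"
)

func main() {
	// Placeholder endpoint and credentials -- substitute real values.
	cs := cloudstack.NewAsyncClient("https://cloud.example/client/api", "apiKey", "secretKey", true)

	p := cs.VirtualMachine.NewDeployVirtualMachineParams("serviceOfferingID", "templateID", "zoneID")
	// Passing a disk offering id (plus a size, for custom offerings) makes
	// CloudStack create an extra DATA volume alongside the ROOT volume.
	// This is what CAPC's spec.diskOffering maps to.
	p.SetDiskofferingid("diskOfferingID")
	p.SetSize(50) // GB

	if _, err := cs.VirtualMachine.DeployVirtualMachine(p); err != nil {
		log.Fatal(err)
	}
}
```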

These volumes also need to be cleaned up again, so DestroyVMInstance calls listVolumes to get the list of volumes associated with the VM and passes their IDs via SetVolumeids to the DestroyVirtualMachine call.
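
In SDK terms, that deletion path looks roughly like the sketch below (assuming the apache/cloudstack-go SDK; the function name and error handling are simplifications, not the actual DestroyVMInstance code). Note that listVolumes returns every volume attached to the VM, with no filtering.

```go
package sketch

import "github.com/apache/cloudstack-go/v2/cloudstack"

// destroyWithVolumes mirrors the problematic pattern: every volume
// attached to the VM is passed to DestroyVirtualMachine for deletion.
func destroyWithVolumes(cs *cloudstack.CloudStackClient, vmID string) error {
	// Lists ROOT, the CAPC-created DATA volume, and any
	// CSI-attached PVC volumes alike.
	lp := cs.Volume.NewListVolumesParams()
	lp.SetVirtualmachineid(vmID)
	vols, err := cs.Volume.ListVolumes(lp)
	if err != nil {
		return err
	}

	ids := make([]string, 0, len(vols.Volumes))
	for _, v := range vols.Volumes {
		ids = append(ids, v.Id)
	}

	// All collected volume IDs are destroyed and expunged together with
	// the VM -- including volumes that back live PVCs.
	dp := cs.VirtualMachine.NewDestroyVirtualMachineParams(vmID)
	dp.SetExpunge(true)
	dp.SetVolumeids(ids)
	_, err = cs.VirtualMachine.DestroyVirtualMachine(dp)
	return err
}
```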

When using the CloudStack CSI driver, PVCs are CloudStack volumes attached to the VM. Since no distinction is made about which volumes get deleted, any PVC attached at the time the CloudStackMachine is deleted is instantly destroyed and expunged, leading to data loss.

What did you expect to happen:

Some form of distinction between data volumes created automatically during deployVirtualMachine and data disks attached to the VM at a later stage. Tagging could be an option. Right now the only way I have found to make the distinction is that the volume created automatically on deployVirtualMachine is always named DATA-<someID>, as can be seen in CloudStack.
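
As a sketch of that naming heuristic (a hypothetical helper; the DATA- prefix reflects observed CloudStack behavior, not a documented guarantee, and a tag-based check would be the more robust variant):

```go
package sketch

import (
	"strings"

	"github.com/apache/cloudstack-go/v2/cloudstack"
)

// volumeIDsToDestroy keeps only the data volume CloudStack created
// automatically during deployVirtualMachine, sparing CSI-managed
// PVC volumes that were attached later.
func volumeIDsToDestroy(vols []*cloudstack.Volume) []string {
	var ids []string
	for _, v := range vols {
		// The auto-created volume is observed to be named "DATA-<id>".
		// Tagging the volume at creation time and filtering on that tag
		// here would avoid relying on the name.
		if v.Type == "DATADISK" && strings.HasPrefix(v.Name, "DATA-") {
			ids = append(ids, v.Id)
		}
	}
	return ids
}
```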


Environment:

  • Cluster-api-provider-cloudstack version: latest
  • Kubernetes version (use `kubectl version`): 1.29
  • CloudStack CSI driver version: 0.6.0

hrak avatar Jul 08 '24 13:07 hrak

Fair to say this is an unsupported combination, so it is probably not a CAPC bug but rather an improvement to make it work with the CSI driver. cc @Pearl1594 @vishesh92 @weizhouapache

rohityadavcloud avatar Jul 08 '24 13:07 rohityadavcloud

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 06 '24 13:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 05 '24 14:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 05 '24 15:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Dec 05 '24 15:12 k8s-ci-robot