cloud-provider-openstack
[csi-cinder-plugin] Support volume basic encryption
/kind feature
What happened:
Currently, volumes can be encrypted if the required features are configured in OpenStack (and the volume type set on the StorageClass is correct), but from within Kubernetes the user of a csi-cinder-plugin deployment cannot tell whether a PersistentVolume is actually encrypted.
What you expected to happen:
This is most likely the first of two feature requests: this one asks for a StorageClass parameter that validates whether created volumes are flagged as encrypted in the API response.
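As a sketch, the proposed parameter might look like this on a StorageClass (the `encrypted` parameter name and value here are assumptions based on this request, not an existing csi-cinder-plugin option):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encryptedvolume
provisioner: cinder.csi.openstack.org
parameters:
  # Existing parameter: selects a Cinder volume type that has an
  # encryption provider (e.g. LUKS) configured in OpenStack.
  type: LUKS
  # Proposed (hypothetical) parameter: fail provisioning if the
  # created volume is not flagged as encrypted in the Cinder API.
  encrypted: "true"
```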
How to reproduce it:
Create a PVC for a StorageClass named encryptedvolume without the correct volume type (default LUKS) set. The volume will not be encrypted, but it will be handled correctly by the CSI driver. With the encrypted parameter set (if the PR is accepted), an error will be shown stating that the volume should be encrypted but is not at the block storage layer.
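The check described above could be sketched in Go roughly as follows. This is a simplified illustration, not the driver's actual code: `Volume` and `validateEncryption` are hypothetical names, and the real driver would read the encrypted flag from the Cinder API create-volume response.

```go
package main

import "fmt"

// Volume mirrors the relevant part of a Cinder API create-volume
// response (simplified, hypothetical stand-in for the driver's type).
type Volume struct {
	ID        string
	Encrypted bool
}

// validateEncryption sketches the proposed check: when the StorageClass
// sets the (proposed) "encrypted" parameter to "true", reject volumes
// that the block storage layer does not report as encrypted.
func validateEncryption(params map[string]string, vol Volume) error {
	if params["encrypted"] != "true" {
		// Parameter not set: keep today's behavior, no validation.
		return nil
	}
	if !vol.Encrypted {
		return fmt.Errorf("volume %s should be encrypted but is not at the block storage layer", vol.ID)
	}
	return nil
}

func main() {
	params := map[string]string{"encrypted": "true"}
	// A volume created with the wrong volume type comes back unencrypted.
	err := validateEncryption(params, Volume{ID: "vol-1", Encrypted: false})
	fmt.Println(err)
}
```

With the parameter unset, provisioning would behave exactly as it does today; only an explicit `encrypted: "true"` would turn the missing encryption into a provisioning error.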
Anything else we need to know?: Another issue will be created requesting support for a "bring your own key" approach in both OpenStack and the CSI driver. This is part of an effort to enhance encryption support in OpenStack and Kubernetes as part of the Sovereign Cloud Stack. I'll reach out to OpenStack first and create another PR once support is implemented.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hello,
Would a pull request be welcome for this issue?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.