cloud-provider-openstack
CSI for OpenStack Does Not Support Volumes Encrypted by Barbican (LUKS Type)
Hello - I have confirmed that attach will not succeed for LUKS-type volumes. There are two issues:
- provisioner: the volume is created successfully, but the provisioner then tries to confirm a known block type on it and fails at that confirmation step.
- attacher: fails to mount the volume, as it also cannot find a known / expected type.
Request: on attach, how do we dynamically pull the relevant secret from Barbican and store it in a Kubernetes secret so it can be used to encrypt / decrypt the volume?
At the moment I had to drop my OpenStack default of encrypted volumes to work around this, which is bad; users should not have to disable defaults on the whole OpenStack cluster. Alternatively, we should be able to state the TYPE of volume we want to request in the driver (see the StorageClass sketch below) - a workaround rather than a solution, but a permanent option that gives flexibility / choice.
I understand that we can encrypt volumes ourselves in K8s, but that is not the point of this change / goal.
Please advise
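For reference, here is a minimal sketch of the interim "state the volume type" workaround mentioned above, assuming the cinder-csi-plugin's StorageClass `type` parameter (which selects a Cinder volume type). The volume type name below is a placeholder and would need to match an unencrypted type defined in the cloud:

```yaml
# Hypothetical StorageClass that pins PVCs to a specific (unencrypted) Cinder volume type,
# so the OpenStack-wide default of encrypted volumes can stay in place.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cinder-unencrypted
provisioner: cinder.csi.openstack.org
parameters:
  type: unencrypted-volume-type   # placeholder: a Cinder volume type without an encryption spec
reclaimPolicy: Delete
allowVolumeExpansion: true
```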
I think it's a functional gap. I know we have Barbican usage in the ingress controller and in KMS, but CSI does not seem to have this at all.
I am not an expert on this, so could you provide the API/CLI calls or anything else that would help clarify the use case above? For example, an outline of the flow would help a lot.
I agree it's a functional gap, but I am happy that at least for now I can pass the volume type. People should be able to use an external decryptor - it's likely the safest approach versus storing the keys in K8s, in case of a breach. Here is the procedure for what to do (the API calls are not included, but that part is the easiest):
https://serverfault.com/questions/1052979/luks-encryption-for-mounted-disk-how-to-decrypt-cinder-volume
Here is the Barbican API:
https://docs.openstack.org/barbican/latest/api/reference/secrets.html
The Cinder side would really be the same, except you read the key field, decrypt, and return OK afterwards.
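To make the flow above a bit more concrete, here is a rough, hypothetical sketch of what a node-side decryptor could do. It is not part of the driver; it assumes gophercloud's `openstack/keymanager/v1/secrets` package for the Barbican call, shells out to cryptsetup for the LUKS open, hex-encodes the key bytes as the passphrase (mirroring what os-brick appears to do), and uses made-up environment variables (`BARBICAN_SECRET_ID`, `DEVICE_PATH`) as inputs:

```go
// Hypothetical sketch only: fetch a Barbican secret payload and use it to open a
// LUKS-encrypted Cinder volume. Error handling and cleanup are minimal on purpose.
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
	"os"
	"os/exec"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/keymanager/v1/secrets"
)

func main() {
	// Standard OS_* environment variables (OS_AUTH_URL, OS_USERNAME, ...) drive auth.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatalf("auth options: %v", err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		log.Fatalf("authenticate: %v", err)
	}
	keyManager, err := openstack.NewKeyManagerV1(provider, gophercloud.EndpointOpts{
		Region: os.Getenv("OS_REGION_NAME"),
	})
	if err != nil {
		log.Fatalf("key manager client: %v", err)
	}

	// The Cinder volume's encryption key id points at a Barbican secret; read its payload.
	secretID := os.Getenv("BARBICAN_SECRET_ID") // made-up input for this sketch
	keyBytes, err := secrets.GetPayload(keyManager, secretID, nil).Extract()
	if err != nil {
		log.Fatalf("get secret payload: %v", err)
	}

	// os-brick appears to derive the LUKS passphrase by hex-encoding the raw key bytes
	// (assumption); do the same here.
	passphrase := hex.EncodeToString(keyBytes)

	device := os.Getenv("DEVICE_PATH") // e.g. /dev/vdb; made-up input for this sketch
	mapped := "decrypted-volume"       // device-mapper name for the opened volume

	// Open the LUKS container, feeding the passphrase on stdin.
	cmd := exec.Command("cryptsetup", "luksOpen", device, mapped, "--key-file=-")
	cmd.Stdin = bytes.NewReader([]byte(passphrase))
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("cryptsetup luksOpen failed: %v: %s", err, out)
	}
	fmt.Printf("volume available at /dev/mapper/%s\n", mapped)
}
```

After the luksOpen, mounting /dev/mapper/&lt;name&gt; is the same as for any other block device, so the attacher would only need this extra step before formatting / mounting.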
Yes, I have now seen the history in https://github.com/kubernetes/cloud-provider-openstack/issues/1864 and will keep reading through it. If there is any gap or doc change needed, I will discuss it and propose a PR. Thanks for all the detailed info~
Not a problem - thank you for seeing the issue through with me. As for the use case: in any enterprise (not where I am using this, but I do work for a large one) we would never ship unencrypted drives back to a vendor after a failure, so encryption is a default for many large orgs. For my personal use case, I have an interest in cyber security products, so encrypted drives are a must for me as well.
I see two ways to achieve this goal on the K8s side anyway:
- Allow an external KMS (Barbican is the most common with OpenStack) <-- this is what I use
- Encrypt volumes internally to K8s (also a good option) using various KMSs, either internal (a native plug-in) or external (Barbican in this case)
Thanks for the detailed info, so maybe you can assign yourself to this, @nashford77, as you seem to already have a solution :)
Hello - sorry, I have been busy. I will give it a try shortly. It would be a great win feature-wise if it works.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.