cluster-api-provider-azure
Secure sensitive bootstrap data
/kind feature
Describe the solution you'd like:
CAPI generates sensitive cluster data (such as private keys) for the apiserver, etcd, etc. These are stored as secrets in Kubernetes. The kubeadm bootstrapper copies the contents of the secrets into bootstrap data in the KubeadmConfig resource, which is then copied into the Machine resource. (From https://github.com/kubernetes-sigs/cluster-api/issues/1739.)
CAPZ uses this bootstrap data as the user data for the VM/VMSS. If a user has read-only access to the VM via Azure API, this could grant them access to the user data, and therefore access to the sensitive data.
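To make the exposure path concrete, here is a minimal Go sketch (assuming the armcompute SDK; the machine name and username below are hypothetical) of how bootstrap data ends up in OSProfile.CustomData, where it is only base64-encoded, not encrypted:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute"
)

// buildOSProfile illustrates the exposure path: whatever is placed in
// OSProfile.CustomData travels with the VM/VMSS model, so (per this
// issue) a principal with read-only access to the resource via the
// Azure API may be able to read the kubeadm bootstrap data back.
func buildOSProfile(bootstrapData []byte) *armcompute.OSProfile {
	return &armcompute.OSProfile{
		ComputerName:  to.Ptr("example-machine"), // hypothetical name
		AdminUsername: to.Ptr("capi"),            // hypothetical user
		// Custom data is base64-encoded, not encrypted, so any
		// reader can trivially decode it.
		CustomData: to.Ptr(base64.StdEncoding.EncodeToString(bootstrapData)),
	}
}

func main() {
	profile := buildOSProfile([]byte("cloud-config with certificates and tokens"))
	decoded, _ := base64.StdEncoding.DecodeString(*profile.CustomData)
	fmt.Println(string(decoded)) // secrets recovered with no special access
}
```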
Azure recommends not placing any sensitive values in custom data https://docs.microsoft.com/en-us/azure/virtual-machines/custom-data#can-i-place-sensitive-values-in-custom-data.
We should secure the bootstrap data, for example by using Azure Key Vault to store it such that only the VM has access to that data, but not a user with read access to the VM resource via the Azure API.
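As a rough illustration of that direction, here is a minimal Go sketch of what the node side could look like: the VM fetches its bootstrap data from Key Vault at boot using a managed identity, so nothing sensitive rides in custom data. The vault URL, secret name, and the use of a system-assigned managed identity are all hypothetical, not part of any agreed design:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets"
)

func main() {
	// The VM authenticates with its managed identity; no credentials
	// need to appear in custom data at all.
	cred, err := azidentity.NewManagedIdentityCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical vault dedicated to this cluster's bootstrap data.
	client, err := azsecrets.NewClient("https://example-vault.vault.azure.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the bootstrap data; an empty version selects the latest.
	resp, err := client.GetSecret(context.Background(), "bootstrap-data", "", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*resp.Value) // hand off to the kubeadm bootstrap flow
}
```

The exact mechanism (access policy vs. RBAC on the vault, secret naming, and deleting the secret once bootstrap completes) would still need to be designed as part of this issue.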
/priority important-longterm
/milestone next
Anything else you would like to add:
Environment:
- cluster-api-provider-azure version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):
> for example by using Azure keyvault storage to store the data such that only the VM has access to that data, but not a user that has access to the VM.
By "user that has access to the VM", do you mean access to the physical VM or the azure resource via the ARM api? I believe the files would still need to have to live on the VM it's self for kubeadm to do it's job?
By "user that has access to the VM", do you mean access to the physical VM or the azure resource via the ARM api? I believe the files would still need to have to live on the VM it's self for kubeadm to do it's job?
The latter. Updated that sentence for clarity, thanks!
/assign
/milestone next
/assign
@shysank before starting work on this, I would recommend reaching out to @randomvariable because some of this work might overlap with https://github.com/kubernetes-sigs/cluster-api/issues/3761
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Handled as part of https://github.com/kubernetes-sigs/cluster-api/pull/4219
/assign sonasingh46
/milestone v1.6
@sonasingh46 - where are we at with this one?