secrets-store-csi-driver
Get all k/v pairs from endpoint
Motivation
Some applications require a significant amount of sensitive configuration. Declaring each key individually becomes extremely tedious and adds redundancy and toil that could be reduced with the same functionality that envFrom provides in Kubernetes core, as well as with established solutions like external-secrets.
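For reference, the envFrom mechanism mentioned above lets a pod consume every key of a Secret in one declaration, without listing keys individually. A sketch of a pod-spec fragment (`my-app` and the image name are placeholders):

```yaml
# Pod spec fragment: every key in the Secret "my-app" is injected
# as an environment variable, with no per-key declarations.
containers:
  - name: app
    image: my-app:latest   # placeholder image
    envFrom:
      - secretRef:
          name: my-app     # placeholder Secret name
```

This issue asks for the analogous "take everything from the source" behavior on the SecretProviderClass side.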
Describe the solution you'd like
Two separate requests:
- Make secretKey optional. If the user has no intention of renaming the secret key, nor of issuing a GET request solely for that key, these declarations are redundant and add up.
- Allow all keys to be ingested from an endpoint.
A practical example with only 5 k/v pairs currently looks like:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objects: |
      - objectName: PG_DB_PASSWORD
        secretKey: PG_DB_PASSWORD
        secretPath: kv-v2/data/my-app
      - objectName: APP_TOKEN
        secretKey: APP_TOKEN
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_CLIENT_ID
        secretKey: OAUTH_CLIENT_ID
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_SECRET
        secretKey: OAUTH_SECRET
        secretPath: kv-v2/data/my-app
      - objectName: SMTP_PASSWORD
        secretKey: SMTP_PASSWORD
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      data:
        - key: PG_DB_PASSWORD
          objectName: PG_DB_PASSWORD
        - key: APP_TOKEN
          objectName: APP_TOKEN
        - key: OAUTH_CLIENT_ID
          objectName: OAUTH_CLIENT_ID
        - key: OAUTH_SECRET
          objectName: OAUTH_SECRET
        - key: SMTP_PASSWORD
          objectName: SMTP_PASSWORD
```
When all you should really need is:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objectListFrom:
      - name: my-app-keys
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      dataFrom:
        - objectList: my-app-keys
```
Everything within parameters on a SecretProviderClass is opaque to the driver and interpreted by the individual providers. I think it may be good for providers to evaluate the feasibility of this with their respective APIs. I think I see similar issues open for AWS and Azure at this point:
- https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/9
- https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/357
If there are commonalities between the solutions then we can investigate feasibility of a shared feature of the driver itself.
@tam7t I completely understand that perspective. The secretObjects are part of the driver itself though, correct? If an option like "accept an array/list as an object", passed along from the providers, is not available in the driver, then providers will not implement it, as it would "go nowhere", right? Thoughts?
Ah yes, secretObjects is defined by the driver and used for the K8s sync feature to map relative file paths to secret key/values - so changes to allow mapping multiple file paths to keys would require driver changes.
/assign
+1
Looking forward to the detailed design doc so we can discuss some of the nuances this feature may bring.
Specifically, today, with secretObjects as a static list in the SPC CR, this controller loops through secretObjects and creates secrets from the content in the mounted volume that the provider retrieved from the external secrets store. There is an assumption that the object (file) exists in the mount; otherwise it returns an error and the reconciler retries. If that assumption is no longer valid, how does the controller know whether a missing file is intentional or whether it should retry the reconcile loop?
With this feature request/proposal, the list of objects is maintained outside of the SPC CR via objectListFrom, which makes it harder for the provider to validate whether the right object actually exists in the external source, or whether it has the right permission to access that object. As a result, error handling becomes harder: the mount would succeed and the application may fail silently. I think there are things we can add to make sure this is addressed. Let's discuss this more in the design doc.
Perhaps not directly related, but I wonder if inspiration could be drawn from the Vault Injector Sidecar's options, such as in the use of template files to extract and format a vault secret into a format that most applications can use natively (such as a .env file or a bash script with exported env vars).
https://www.vaultproject.io/docs/platform/k8s/injector#secret-templates
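To illustrate the injector approach referenced above: the Vault Agent Injector can render all key/value pairs of a secret with a template, without naming each key. A sketch of pod annotations, assuming a KV-v2 secret at the placeholder path kv-v2/data/my-app:

```yaml
# Pod annotations for the Vault Agent Injector. The template iterates
# over every k/v pair in the secret and renders an env-style file,
# so no per-key declarations are needed.
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app"                       # placeholder Vault role
    vault.hashicorp.com/agent-inject-secret-config: "kv-v2/data/my-app"
    vault.hashicorp.com/agent-inject-template-config: |
      {{- with secret "kv-v2/data/my-app" -}}
      {{- range $k, $v := .Data.data }}
      export {{ $k }}="{{ $v }}"
      {{- end }}
      {{- end }}
```

The "iterate over .Data.data" pattern is essentially the behavior this issue asks the CSI driver and providers to support declaratively.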
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What is the status of this issue? Would really love this feature.
Is there any update on this issue?
This feature would massively reduce security concerns and would be widely celebrated!
This really shouldn’t be a stale issue.
/remove-lifecycle stale
We're seeing strong traction for this request in the AWS provider (issue)
Has it been decided if this is entirely up to the providers to implement, or if changes are needed in this repo first?
Provider feature requests:
- AWS
- https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/9
- https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/46
- Azure
- https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/357
- GCP
- https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/issues/181
- Vault
- https://github.com/hashicorp/vault-csi-provider/issues/114
- https://github.com/hashicorp/vault-csi-provider/issues/192
Hi, is there any update regarding this suggested feature?
Is there any update on this issue? It would really help for deployments made with reusable code that creates common K8s entities (namespaces, secret stores, etc.).
The workaround is currently a bit hacky (at least using AWS driver)
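As far as I can tell, the hacky AWS workaround alluded to here is the provider's jmesPath support: all k/v pairs can be stored in a single JSON secret, but every key still has to be enumerated by hand. A sketch of the parameters fragment (the secret name and keys are placeholders):

```yaml
# SecretProviderClass parameters fragment for the AWS provider.
# One Secrets Manager JSON secret holds all k/v pairs, but each key
# must still be extracted explicitly -- the enumeration toil remains.
objects: |
  - objectName: "my-app-secret"        # placeholder Secrets Manager secret
    objectType: "secretsmanager"
    jmesPath:
      - path: PG_DB_PASSWORD
        objectAlias: PG_DB_PASSWORD
      - path: APP_TOKEN
        objectAlias: APP_TOKEN
```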
Would like to bump this issue as it would be a really great feature to have