secrets-store-csi-driver

Get all k/v pairs from endpoint

Open dirtycajunrice opened this issue 4 years ago • 46 comments

Motivation: Some applications require a significant amount of sensitive configuration. Declaring each value individually becomes extremely tedious and adds redundancy and toil, where it could be reduced using the same functionality that envFrom provides in Kubernetes core, as well as established solutions like external-secrets.

Describe the solution you'd like: two separate requests.

  1. Make secretKey optional. If the user has no intention of renaming the secret key, or of issuing a GET request solely for that key, the repeated declarations are redundant and add up.
  2. Allow all keys to be ingested from an endpoint.
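A sketch of what request 1 alone could look like, assuming (hypothetically) that the provider defaults secretKey to the objectName when it is omitted:

```yaml
# Hypothetical: secretKey omitted and assumed to default to objectName
objects: |
  - objectName: PG_DB_PASSWORD
    secretPath: kv-v2/data/my-app
  - objectName: APP_TOKEN
    secretPath: kv-v2/data/my-app
```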

A practical example with only 5 k/v pairs currently looks like:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objects: |
      - objectName: PG_DB_PASSWORD
        secretKey: PG_DB_PASSWORD
        secretPath: kv-v2/data/my-app
      - objectName: APP_TOKEN
        secretKey: APP_TOKEN
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_CLIENT_ID
        secretKey: OAUTH_CLIENT_ID
        secretPath: kv-v2/data/my-app
      - objectName: OAUTH_SECRET
        secretKey: OAUTH_SECRET
        secretPath: kv-v2/data/my-app
      - objectName: SMTP_PASSWORD
        secretKey: SMTP_PASSWORD
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      data:
        - key: PG_DB_PASSWORD
          objectName: PG_DB_PASSWORD
        - key: APP_TOKEN
          objectName: APP_TOKEN
        - key: OAUTH_CLIENT_ID
          objectName: OAUTH_CLIENT_ID
        - key: OAUTH_SECRET
          objectName: OAUTH_SECRET
        - key: SMTP_PASSWORD
          objectName: SMTP_PASSWORD

When all you should really need is:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secret
  namespace: default
spec:
  provider: vault
  parameters:
    roleName: "csi-secrets-store"
    vaultAddress: https://vault.company.tld
    vaultKubernetesMountPath: kubernetes/eks-use1-sre-prod
    objectListFrom:
      - name: my-app-keys
        secretPath: kv-v2/data/my-app
  secretObjects:
    - type: Opaque
      secretName: my-app
      dataFrom:
        - objectList: my-app-keys
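For context, the synced Secret produced by either variant can then be consumed in bulk with core envFrom. A sketch with a hypothetical pod spec (container name, image, and mount path are illustrative; the CSI volume mount is still required for the driver to sync the Secret):

```yaml
# Sketch: pod consuming the synced Secret via envFrom; the secrets-store
# CSI volume must be mounted for the sync feature to create the Secret.
containers:
  - name: my-app
    image: my-app:latest          # hypothetical image
    envFrom:
      - secretRef:
          name: my-app            # the secretName from secretObjects above
    volumeMounts:
      - name: secrets-store
        mountPath: /mnt/secrets-store
        readOnly: true
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-secret
```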

dirtycajunrice avatar May 06 '21 08:05 dirtycajunrice

Everything within parameters on a SecretProviderClass is opaque to the driver and interpreted by the individual providers. I think it may be good for providers to evaluate the feasibility of this with their respective APIs. I think I see similar issues open for AWS and Azure at this point:

  • https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/9
  • https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/357

If there are commonalities between the solutions then we can investigate feasibility of a shared feature of the driver itself.

tam7t avatar May 11 '21 18:05 tam7t

@tam7t I completely understand that perspective. The secretObjects are part of the driver itself though, correct? If an option like "accept an array/list as an object" is not available from the driver, then the providers will not implement it, as anything they pass along would "go nowhere", right? Thoughts?

dirtycajunrice avatar May 12 '21 01:05 dirtycajunrice

Ah yes, secretObjects is defined by the driver and used for the K8s sync feature to map relative file paths to secret key/values - so changes to allow mapping multiple file paths to keys would require driver changes.

tam7t avatar May 12 '21 17:05 tam7t

/assign

manedurphy avatar Jun 04 '21 19:06 manedurphy

+1

Looking forward to the detailed design doc so we can discuss some of the nuances this feature may bring.

Specifically, today with secretObjects as a static list in the SPC CR, the controller loops through secretObjects and creates secrets from the content in the mounted volume, which the provider retrieved from the external secrets store. There is an assumption that the object (file) exists in the mount; otherwise the controller returns an error and the reconciler retries. If that assumption is no longer valid, how does the controller know whether a missing file is intentional or whether it should retry the reconcile loop?

With this feature request/proposal, the list of objects is maintained outside the SPC CR via objectListFrom, which makes it harder for the provider to validate whether the right object actually exists in the external source, or whether it has the right permission to access that object. As a result, error handling becomes harder: the mount would succeed, and the application may fail silently. I think there are things we can add to make sure this is addressed. Let's discuss this more in the design doc.

ritazh avatar Jun 10 '21 17:06 ritazh

Perhaps not directly related, but I wonder if inspiration could be drawn from the Vault Injector Sidecar's options, such as in the use of template files to extract and format a vault secret into a format that most applications can use natively (such as a .env file or a bash script with exported env vars).

https://www.vaultproject.io/docs/platform/k8s/injector#secret-templates
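For reference, the injector approach renders every pair from a secret via a Go template declared in pod annotations. A hedged sketch based on the linked docs (the "config" file-name suffix, the role name, and the exported-vars format are illustrative choices, not prescribed):

```yaml
# Sketch: Vault Agent Injector annotations rendering all k/v pairs from one
# secret into a sourceable env file ("config" is an arbitrary file name).
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "my-app"
vault.hashicorp.com/agent-inject-secret-config: "kv-v2/data/my-app"
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "kv-v2/data/my-app" -}}
  {{- range $k, $v := .Data.data }}
  export {{ $k }}="{{ $v }}"
  {{- end }}
  {{- end }}
```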

jacobbeasley avatar Sep 22 '21 16:09 jacobbeasley

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 21 '21 17:12 k8s-triage-robot

/remove-lifecycle stale

aramase avatar Jan 03 '22 17:01 aramase

/lifecycle stale

k8s-triage-robot avatar Apr 03 '22 17:04 k8s-triage-robot

/remove-lifecycle stale

aramase avatar Apr 04 '22 06:04 aramase

/lifecycle stale

k8s-triage-robot avatar Jul 03 '22 06:07 k8s-triage-robot

/remove-lifecycle stale

nilekhc avatar Jul 05 '22 20:07 nilekhc

/lifecycle stale

k8s-triage-robot avatar Oct 03 '22 20:10 k8s-triage-robot

/remove-lifecycle stale

aramase avatar Oct 03 '22 20:10 aramase

What is the status of this issue? Would really love this feature.

patstrom avatar Nov 10 '22 09:11 patstrom

Is there any update on this issue?

Bowser1704 avatar Nov 25 '22 07:11 Bowser1704

This feature would reduce security concerns massively and will be widely celebrated!

agates4 avatar Jan 03 '23 20:01 agates4

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 03 '23 20:04 k8s-triage-robot

This really shouldn’t be a stale issue.

agates4 avatar Apr 03 '23 20:04 agates4

/remove-lifecycle stale

simonmarty avatar Apr 10 '23 18:04 simonmarty

We're seeing strong traction for this request in the AWS provider (issue)

simonmarty avatar Apr 10 '23 18:04 simonmarty

/lifecycle stale

k8s-triage-robot avatar Jul 09 '23 19:07 k8s-triage-robot

/remove-lifecycle stale

agates4 avatar Jul 09 '23 19:07 agates4

Has it been decided if this is entirely up to the providers to implement, or if changes are needed in this repo first?

Provider feature requests:

  • AWS
    • https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/9
    • https://github.com/aws/secrets-store-csi-driver-provider-aws/issues/46
  • Azure
    • https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/357
  • GCP
    • https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/issues/181
  • Vault
    • https://github.com/hashicorp/vault-csi-provider/issues/114
    • https://github.com/hashicorp/vault-csi-provider/issues/192

joebowbeer avatar Aug 04 '23 00:08 joebowbeer

Hi, is there any update regarding this suggested feature?

msitworld avatar Aug 09 '23 13:08 msitworld

Is there any update on this issue? It would really help for deployments made using reusable code to create common K8s entities (namespaces, secret stores, etc.).

The workaround is currently a bit hacky (at least with the AWS provider).

odarriba avatar Oct 05 '23 09:10 odarriba

Would like to bump this issue as it would be a really great feature to have

zarcen avatar Nov 10 '23 21:11 zarcen

/lifecycle stale

k8s-triage-robot avatar Feb 08 '24 22:02 k8s-triage-robot

/remove-lifecycle stale

pierluigilenoci avatar Feb 12 '24 13:02 pierluigilenoci