secrets-store-csi-driver

To mount a secret as an environment variable I need to mount the whole volume of secrets to the pod (why? is there a better way?)

Open • ovidiubuligan opened this issue 4 years ago • 11 comments

Describe the solution you'd like

The standard way, per the documentation, is to mount secrets as a volume: https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/main/test/bats/tests/vault/pod-vault-inline-volume-secretproviderclass.yaml

This is quite verbose, and it also mounts every secret declared in the SecretProviderClass, even though the pod needs only one of them (see the sketch below).
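
For reference, the linked manifest boils down to roughly the following; the class name vault-foo and the mount path are illustrative, not taken from this thread:

kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store-inline
spec:
  containers:
  - image: k8s.gcr.io/e2e-test-images/busybox:1.29
    name: busybox
    command:
    - "/bin/sleep"
    - "10000"
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-foo"

Every file under /mnt/secrets-store corresponds to one object declared in the class, so the container sees all of them.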

Using the secrets as environment variables is nicer, since we don't bring in all the secrets the pod doesn't need:

spec:
  containers:
  - image: k8s.gcr.io/e2e-test-images/busybox:1.29
    name: busybox
    command:
    - "/bin/sleep"
    - "10000"
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: foosecret
          key: username

The pod now sees only the one secret it needs and isn't exposed to the rest that it doesn't use.

The problem with this is that it doesn't trigger syncing when a new secret is added. The pod still needs the whole secrets-store-inline volume mount in addition to the env var secret ref, and by mounting secrets-store-inline the pod gets access to all of the SecretProviderClass's secrets, which is not good (illustrated below).
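
As a minimal sketch of the combined pattern described above (mount path and names are assumptions): the env var reads the mirrored foosecret, but the volume mount that keeps it synced exposes every object in the class as a file.

spec:
  containers:
  - image: k8s.gcr.io/e2e-test-images/busybox:1.29
    name: busybox
    command:
    - "/bin/sleep"
    - "10000"
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: foosecret
          key: username
    volumeMounts:
    # Needed only to trigger the mount and keep the mirrored Secret in sync,
    # yet it exposes all secrets in the class to the container.
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-foo"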

How can we solve this without creating a separate SecretProviderClass for every combination of secrets each pod requires?

Environment:

  • Secrets Store CSI Driver version: 1.0.0
  • Kubernetes version: 1.21

ovidiubuligan avatar Nov 24 '21 14:11 ovidiubuligan

Hi @ovidiubuligan, thanks for opening the issue. The Secrets Store CSI driver works by mounting a CSI volume. The SecretProviderClass (SPC) is how you tell the driver which secrets the pod needs, so you can list in the SPC only those secrets that are actually required.

The CSI volume mount is the primary function of this project. Sync as Kubernetes Secrets is provided as an additional feature on top of the volume mount: only secrets mounted on the volume can be mirrored as Kubernetes Secrets and then referenced as env vars.
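
As a hedged sketch of both points, using the Vault provider from the linked test (the role name, address, and paths are assumptions): the SPC lists a single object and, via secretObjects, mirrors it into the foosecret Kubernetes Secret referenced by the env var example above.

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-foo
spec:
  provider: vault
  secretObjects:              # optional sync: mirror mounted objects into a k8s Secret
  - secretName: foosecret
    type: Opaque
    data:
    - objectName: username    # must match an object mounted by the volume below
      key: username
  parameters:
    roleName: "csi"                         # assumed Vault Kubernetes-auth role
    vaultAddress: "http://vault.default:8200"  # assumed Vault address
    objects: |
      - objectName: "username"
        secretPath: "secret/data/foo"
        secretKey: "username"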

nilekhc avatar Nov 24 '21 18:11 nilekhc

@nilekhc thank you for your answer and for giving all the context.

Exposing configuration through environment variables is a recommended pattern (12-factor app), so being forced to mount secrets as files is awkward in many cases. Do you think it would be possible to make this functionality work without mounting a CSI volume?

txomon avatar Dec 08 '21 14:12 txomon

Hi, I found this a bit verbose too: mounting a volume to get a secret in order to get my environment variable set.

But I think this is probably the best way for this to work. The main issue I see with providing secrets directly is that (certainly in the case of AWS) you would lose the ability to use the pod's service account to scope IAM permissions for the secret. My understanding (and this may be wrong) is that by mounting the volume in the container, the secret fetch executes under the context of that service account.

barrydobson avatar Jan 13 '22 08:01 barrydobson

Hello @txomon, there is a similar ask from other users, and we are assessing this feature request with the community. We are reviewing a proposal with SIG Auth. Please keep an eye on this.

nilekhc avatar Jan 13 '22 11:01 nilekhc

Hi @barrydobson, I am not that familiar with AWS IAM permissions, but the methods for retrieving secrets vary from provider to provider (Azure, GCP, AWS, etc.). In any case, the driver uses the access methods specified in the SecretProviderClass.

nilekhc avatar Jan 13 '22 11:01 nilekhc

@barrydobson The driver uses the pod identity to access the secret, so for AWS it would use the workload pod's service account token, which has the IAM permissions for the secret.
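
To make that concrete, a minimal sketch under assumed names (the role ARN, service account, and Secrets Manager entry are hypothetical): with IAM Roles for Service Accounts (IRSA), the driver fetches the object using the workload pod's identity.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  annotations:
    # IRSA: secret fetches run with this role's IAM permissions (hypothetical ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-secrets-role
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/app/credentials"   # hypothetical Secrets Manager entry
        objectType: "secretsmanager"

A pod that sets serviceAccountName: app-sa and mounts this class then retrieves the secret under that role.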

aramase avatar Jan 13 '22 17:01 aramase

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 13 '22 18:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 13 '22 18:05 k8s-triage-robot

/remove-lifecycle rotten

txomon avatar May 13 '22 19:05 txomon

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 11 '22 20:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 10 '22 20:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 10 '22 21:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 10 '22 21:10 k8s-ci-robot