secrets-store-csi-driver
Secret Auto Rotation not working for succeeded and failed pods
What steps did you take and what happened:
- Installed the secrets store CSI driver with secret auto rotation enabled
- Created a SecretProviderClass for secrets from AWS Secrets Manager, with Kubernetes Secret sync enabled
- Used the SecretProviderClass in a CronJob
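A setup like the one described above might look roughly like the following SecretProviderClass. This is an illustrative sketch, not the reporter's actual manifest: the resource name, the Secrets Manager object name, and the synced Secret name are all placeholder assumptions.

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets                      # placeholder name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app-secret"      # placeholder AWS Secrets Manager secret name
        objectType: "secretsmanager"
  secretObjects:                         # enables sync into a Kubernetes Secret
    - secretName: my-app-secret-synced   # placeholder synced Secret name
      type: Opaque
      data:
        - objectName: "my-app-secret"
          key: secret-value
```

The synced Secret (`my-app-secret-synced` here) is what the CronJob's container then consumes via environment variables.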
The first time the CronJob is triggered, everything works as expected. A Kubernetes Secret is created with the secret value from AWS Secrets Manager, and the value can be used in the container's environment variables.
If the secret value is then changed in AWS Secrets Manager, the next time the CronJob is triggered the environment variable in the container is still set to the old value, because the value in the Kubernetes Secret was not updated.
What did you expect to happen:
Ideally, auto rotation would have updated the Kubernetes Secret to the current value from AWS Secrets Manager, so that the container always has the latest secret value available.
Anything else you would like to add:
A workaround for this issue is to set successfulJobsHistoryLimit and failedJobsHistoryLimit in the CronJob spec to 0. That way, after a Job finishes, no succeeded or failed Pods belonging to the Job will remain in the cluster, which allows the secrets store CSI driver to delete the Kubernetes Secret and recreate it the next time the CronJob is triggered.
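The workaround described above could look like the following in the CronJob spec. The names, schedule, image, and SecretProviderClass reference are placeholder assumptions for illustration:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob                 # placeholder name
spec:
  schedule: "0 * * * *"
  successfulJobsHistoryLimit: 0    # do not keep succeeded Pods around
  failedJobsHistoryLimit: 0        # do not keep failed Pods around
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: app
              image: my-image:latest   # placeholder image
              volumeMounts:
                - name: secrets
                  mountPath: /mnt/secrets
                  readOnly: true
          volumes:
            - name: secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: aws-secrets   # placeholder reference
```

With both history limits at 0, no terminated Pods referencing the SecretProviderClass remain after a Job finishes, so the driver can delete and recreate the synced Secret on the next run.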
Looking at the code, this behaviour appears to be intentional. It is not clear why auto rotation is skipped for succeeded and failed Pods, but for the use case described above it causes real problems.
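The skip behaviour described above can be sketched as follows. This is a simplified illustration of the kind of phase check involved, not the driver's actual source: the function name is hypothetical, and Pod phases are modelled as plain strings rather than the Kubernetes API types.

```go
package main

import "fmt"

// shouldSkipRotation is a hypothetical, simplified version of the check:
// the rotation reconciler only refreshes secrets for Pods that are still
// running, so Pods in the Succeeded or Failed phase (e.g. completed Job
// Pods kept by the CronJob history limits) are skipped.
func shouldSkipRotation(podPhase string) bool {
	return podPhase == "Succeeded" || podPhase == "Failed"
}

func main() {
	for _, phase := range []string{"Running", "Succeeded", "Failed"} {
		fmt.Printf("phase=%s skip=%v\n", phase, shouldSkipRotation(phase))
	}
}
```

As long as a terminated Pod referencing the SecretProviderClass exists, the synced Secret it owns is neither rotated nor deleted, which is exactly the situation the CronJob history limits create.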
Which provider are you using: AWS
Environment:
- Secrets Store CSI Driver version: (use the image tag): v1.3.3
- Kubernetes version (use kubectl version): v1.26.5
We are running into the same problem with our setup: AWS EKS and CronJobs that access secrets that are rotated every N days. Currently, our only solution is to delete the synced Kubernetes Secrets once the AWS secrets have rotated, to force regeneration.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen