                        ConfigMap / Secret Orchestration
This change adds a KEP for ConfigMap / Secret Orchestration support.
Hi @kfox1111. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: kfox1111. To fully approve this pull request, please assign additional approvers. We suggest the following additional approver: mattfarina.
If they are not already assigned, you can assign the PR to them by writing /assign @mattfarina in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/ok-to-test
@JoelSpeed
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Yeah, everyone should be notified that nobody has cared about this issue for the last 90d.
@Bessonov We need a broken heart reaction.
:broken_heart:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
I don't know why, but GitHub is letting me respond to some comments and not others... responding to the rest here:
Why does watch require snapshot to be true? Also, wouldn't it make more sense to opt into this at the workload level (Deployment, DS, StatefulSet), because if immutability is wanted, we already have the immutable configmap/secret feature?
The way watch works in this context, it depends on snapshot: in the described implementation, without a snapshot there would be no new configmap name for the watch to push into the new replicaset, so the watch would have nothing to trigger off of.
It cannot be made to work on pods; pod volumes are immutable. I would also argue it should not be done on pods, because that doesn't allow any control in case the new revision causes issues.
I think more or less the same is true for Jobs: you should use immutable configmaps/secrets for them to get deterministic results.
I agree on pods and jobs. That's why it was stated it only works on deployments, statefulsets and daemonsets. Only those three objects have a concept of a "version" of the podtemplate that you can roll forwards and backwards. It's these "versions" that need immutable configmaps/secrets during the life of that "version". The simplest thing for the user that I can think of is to version the configmap at the same time the podtemplate is versioned and keep the lifecycles the same: the ReplicaSet and configmap get created at the same time, and get deleted at the same time.
Maybe an example will help. The user uploads configmap foo and then deployment foo, with it set as watched and snapshotted. When the deployment is created, configmap foo is copied to immutable configmap foo-1 and replicaset foo-1 is created pointing at configmap foo-1 in the podtemplate.
The user then edits configmap foo; the deployment notices it, copies configmap foo to immutable configmap foo-2, and creates replicaset foo-2 pointing at configmap foo-2.
All the pods in replicaset foo-1 are then always consistent, and all the pods in replicaset foo-2 are always consistent. You can roll forwards and back between foo-1 and foo-2 and it always works consistently. This doesn't work consistently today unless you are very careful and add a lot of manual, error-prone steps.
Then, if the user deletes replicaset foo-1 because they are done with it (or the system does it for them), configmap foo-1 gets garbage-collected too. If the user deletes the deployment, all the snapshotted configmaps associated with the deployment go away as well.
So, for the user, the cognitive burden is just one configmap with their config and one deployment orchestrating the app. Rolling forward/back just works. That is how they assume it works when they first come to Kubernetes, only to find out it is much more complicated today than that.
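To make the flow above concrete, here is a rough sketch of what the objects might look like. The opt-in annotations (`example.k8s.io/configmap-watch`, `example.k8s.io/configmap-snapshot`) and the `foo-1` naming are placeholders for illustration, not anything settled by the KEP; the ownerReference on the snapshot is one plausible way to get the "created together, deleted together" behavior from the existing garbage collector.

```yaml
# Sketch only: annotation keys and the foo-1 naming are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  annotations:
    example.k8s.io/configmap-watch: "foo"     # hypothetical: re-roll when configmap foo changes
    example.k8s.io/configmap-snapshot: "true" # hypothetical: snapshot foo to foo-N per revision
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/foo
      volumes:
      - name: config
        configMap:
          name: foo   # rewritten to foo-1, foo-2, ... in each replicaset's pod template
---
# What the controller would create for revision 1: an immutable copy of configmap foo,
# owned by replicaset foo-1 so it is garbage-collected along with it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-1
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: foo-1
    uid: "<uid of replicaset foo-1>"
immutable: true
data:
  app.conf: |
    # contents copied verbatim from configmap foo at snapshot time
```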
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
For posterity, one workaround for "I want my pod to rotate when config changes" is to put the config in a pod annotation and then use the downward API to get the content of that annotation into an env var or a file. I am very certain this was never intended to be used like that, but it's the least bad workaround I am aware of.
Wouldn’t it then be simpler to put the config into an environment variable?
Yeah, but many applications do not support reading their config from an env var; the downward API also allows getting it into a file.
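For reference, a minimal sketch of that workaround. The annotation key, env var name, and file path are arbitrary choices for illustration: because the config lives in the pod template's annotations, editing it changes the template and triggers a normal rollout, and the downward API exposes the same content as an env var or as a file.

```yaml
# Sketch of the annotation + downward API workaround; the annotation key,
# env var name, and file path are arbitrary illustrations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
      annotations:
        # Editing this annotation changes the pod template,
        # so the Deployment rolls out new pods the normal way.
        config.example.com/app-conf: |
          loglevel=debug
          listen=:8080
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        env:
        - name: APP_CONF              # same content exposed as an env var
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['config.example.com/app-conf']
        volumeMounts:
        - name: config
          mountPath: /etc/foo
      volumes:
      - name: config
        downwardAPI:                  # ...or as the file /etc/foo/app.conf
          items:
          - path: app.conf
            fieldRef:
              fieldPath: metadata.annotations['config.example.com/app-conf']
```

As noted above, this leans on the downward API in a way it was not really designed for; the env var form only helps applications that can read config from the environment, while the volume form gets the same content into a file.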
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Still a problem.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Still a problem.
/remove-lifecycle rotten
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Still a problem.
/remove-lifecycle rotten
/cc
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale