Support triggering rollout when referenced ConfigMaps and/or Secrets change
Summary
Support for optionally triggering a rollout when one or more referenced ConfigMaps and/or Secrets change.
Use Cases
Deployments often consist of configuration changes only. It would be useful if a rollout could optionally be triggered when the underlying ConfigMaps and/or Secrets used by the Deployment (as volume mounts or environment variables) change.
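To make the gap concrete, here is a minimal sketch (all names are hypothetical) of a workload that consumes a ConfigMap both ways; editing `app-config` alone does not change the pod template, so nothing rolls today:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical workload
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config # hypothetical ConfigMap, consumed as an env var
                  key: logLevel
          volumeMounts:
            - name: config
              mountPath: /etc/app  # ...and as a volume mount
      volumes:
        - name: config
          configMap:
            name: app-config
```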
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.
A common technique for solving this (sketched below) is:
- helm - hash the contents of the configmap/secret and include it as an annotation in the pod template.
- kustomize - use the kustomize configmap/secret generators so that the name of the configmap/secret incorporates a hash of its contents.
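For reference, the two patterns look roughly like this. The Helm checksum annotation is the pattern from the Helm tips-and-tricks guide; the template path and names are assumptions for illustration:

```yaml
# Helm: annotate the pod template with a checksum of the rendered ConfigMap,
# so any config change alters the pod spec and rolls the workload.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

```yaml
# Kustomize (kustomization.yaml): generate the ConfigMap so its name carries
# a content hash; references to it in workloads are rewritten automatically.
configMapGenerator:
  - name: app-config               # hypothetical name
    files:
      - app.properties
```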
I would be interested in seeing first-class support for config changes in Argo Rollouts, similar to how Flagger handles it. While I'm not fond of requiring the same config on basically every application, my main complaint with the status quo is the diff it generates. When using a hash to create a new configmap/secret for every change to its contents, we're stuck with a useless Argo CD diff of the entire configmap/secret being added/removed.
I've been thinking about this some more. I think the way I would solve this is with a simple controller that monitors ConfigMaps and Secrets. These objects would be annotated with back-references to a rollout (or even deployment) which would need to be redeployed upon change. For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  annotations:
    redeploy-on-update: rollout.argoproj.io/guestbook
data:
  foo: bar
```
The controller would continuously watch configmaps and secrets. When these objects are updated, it would inject a hash of the configmap into an annotation of the referenced rollout or deployment pod template. e.g.:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
spec:
  template:
    metadata:
      annotations:
        configmap.my-config.hash: abcd1234
```
Because of the change in `spec.template.metadata.annotations`, the rollout (or deployment) would then go through the normal update process.
The beauty of this approach is that this controller could operate standalone, and would even work with Deployments. In other words, non-rollout users of Argo CD and Deployments would benefit from this.
I just came across a project which took a similar, slightly different approach and already has built-in support for Argo Rollouts! https://github.com/stakater/Reloader
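For context, Reloader is driven by annotations on the workload itself; per its README, enabling it on a Rollout looks roughly like this (names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
  annotations:
    # reload whenever any ConfigMap/Secret referenced by this workload changes:
    reloader.stakater.com/auto: "true"
    # ...or watch only a specific ConfigMap instead:
    # configmap.reloader.stakater.com/reload: "my-config"
# spec omitted for brevity
```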
Has anyone tried using reloader with Argo Rollouts?
Looks like their implementation is tied to rollout releases: https://github.com/stakater/Reloader/issues/232
I think we should improve Rollout support in Reloader so that it is no longer tightly coupled to Rollout versions
https://github.com/argoproj/argo-rollouts/issues/958#issuecomment-853460052 I like this proposal because it would show a clean diff between old and new config and trigger a rollout for config changes.
I just tried stakater Reloader and it worked as billed: it ran a rolling update. I would much prefer a solution that triggers a rollout and goes through the configured update process. As an added bonus, your proposal should result in a pleasant diff, which would be a major improvement over the configmap-name-with-hash solution mentioned here: https://github.com/argoproj/argo-rollouts/issues/958#issuecomment-778545562.
@jessesuen we have this same need. We're planning on file-mounted configmaps, but need seamless roll forward/back with Argo Rollouts, and the hash option is super attractive to us.
> A common technique for solving this is:
> - helm - hash the contents of the configmap/secret and include it as an annotation in the pod template.
> - kustomize - use the kustomize configmap/secret generators so that the name of the configmap/secret incorporates a hash of its contents.
There is an issue with this approach when combined with Argo CD with auto-sync and prune turned on, because:
- Argo CD will prune the old ConfigMap on sync
- The stable deployment is still referencing the old ConfigMap, which no longer exists

Now several things can happen:
- If we use HPA with the Canary strategy and HPA increases the number of replicas, Argo Rollouts will scale up the stable ReplicaSet => new pods will fail to be created because of the invalid ConfigMap reference.
- If we abort the rollout, the same thing could happen, as the old ReplicaSet is still referencing the old ConfigMap.
What do you think, @jessesuen? What is your suggestion to overcome this limitation/issue?
Edit 1: After more investigation, I found https://github.com/argoproj/argo-cd/issues/1629, which is exactly what I described above, but the fix won't be complete until https://github.com/argoproj/argo-cd/issues/1636 is done.
Edit 2: For anyone having the same question and needing a workaround, this works for us now: add these 2 annotations to the configmap generated by Kustomize or Helm:
```yaml
annotations:
  argocd.argoproj.io/compare-options: IgnoreExtraneous
  argocd.argoproj.io/sync-options: Prune=false
```
If you are using `configMapGenerator` or `secretGenerator`, you can add these lines to `kustomization.yaml`:
```yaml
generatorOptions:
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous
    argocd.argoproj.io/sync-options: Prune=false
```
Keep in mind that Argo CD won't prune old ConfigMap(s) anymore, and they will start to pile up if you change the ConfigMap a lot. But once in a while you can prune them manually, or write an Argo CD hook job to clean them up when the sync is complete (= the rollout is promoted); a rough sketch follows.
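A rough sketch of such a cleanup job, assuming the generated ConfigMaps share an `app` label and a `configmap-pruner` ServiceAccount with list/delete permissions exists:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: prune-old-configmaps
  annotations:
    argocd.argoproj.io/hook: PostSync  # run after the sync completes
spec:
  template:
    spec:
      serviceAccountName: configmap-pruner
      restartPolicy: Never
      containers:
        - name: prune
          image: bitnami/kubectl:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              # keep the newest generated ConfigMap, delete the rest
              kubectl get configmaps -l app=my-app \
                --sort-by=.metadata.creationTimestamp \
                -o name | head -n -1 | xargs -r kubectl delete
```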
@tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.
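For reference, PruneLast is set per resource via the documented Argo CD sync-options annotation:

```yaml
metadata:
  annotations:
    # prune this resource only as the final, implicit wave of the sync,
    # after the other resources have been deployed and become healthy
    argocd.argoproj.io/sync-options: PruneLast=true
```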
> @tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.

How will this work in the case that we abort the rollout? When we abort the rollout, ArgoCD will prune the old configmap because the sync is now complete, right?
I would like to implement it
> @tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.
>
> How will this work in the case that we abort the rollout? When we abort the rollout, ArgoCD will prune the old configmap because the sync is now complete, right?
Based on the description of how PruneLast works, I'd venture a guess that the old configmap is still present.
> after the other resources have been deployed and become healthy, and after all other waves completed successfully
What I'd like to see is the addition of a `revisionHistoryLimit` that only lets x number of configmaps stay around before being pruned.
Similar to:

```yaml
spec:
  revisionHistoryLimit: 3
```
Since 2020, my method for this, as I always use Helm with Argo CD (a sketch of step 2 follows below):
1. In your Job, calculate the `sha256sum` of configmap X and compare it with its stored SHA (see step 2):
   - If they are the same, don't do anything.
   - If they are different, trigger whatever you want to trigger based on your need.
2. Calculate the `sha256sum` of configmap X and store it in a centralized configmap, in a PostSync Job with a very high sync wave.
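A minimal sketch of step 2, assuming a `config-hash-writer` ServiceAccount with get/patch permissions on ConfigMaps, a watched ConfigMap `my-config`, and a centralized `config-hashes` ConfigMap that already exists:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: store-config-hash
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/sync-wave: "100"  # very high wave: run after everything else
spec:
  template:
    spec:
      serviceAccountName: config-hash-writer
      restartPolicy: Never
      containers:
        - name: hash
          image: bitnami/kubectl:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              # hash the watched ConfigMap's data and store it centrally
              HASH=$(kubectl get configmap my-config -o jsonpath='{.data}' | sha256sum | cut -d' ' -f1)
              kubectl patch configmap config-hashes --type merge \
                -p "{\"data\":{\"my-config\":\"$HASH\"}}"
```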
Going to close this in favor of external tooling managing this, such as https://github.com/stakater/Reloader, as well as other tools like Helm being able to do the same thing.