ArgoCD reports application OutOfSync, CLI shows no diff, UI diff not correct, unable to converge
Checklist:
- [x] I've searched in the docs and FAQ for my answer: http://bit.ly/argocd-faq.
- [x] I've included steps to reproduce the bug.
- [x] I've pasted the output of argocd version.
Describe the bug
Our manifests by default have empty metadata annotations (see https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git for the code example). When initially deployed on the cluster, apps are reported as Synced/Healthy, but once an annotation is modified out-of-band (e.g. via kubectl rollout restart) the status changes to OutOfSync/Healthy and cannot be converged back to Synced/Healthy via sync anymore.
Furthermore, the manifest diff is wrong: the CLI shows no diff at all, and the WebUI diff doesn't match the manifest (see screenshots).
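For context, the pattern in question is a Deployment whose pod template carries an explicitly empty annotations map. The sketch below is only an approximation of such a manifest, not a copy of the linked repository; the names and image are illustrative.

```yaml
# Illustrative Deployment with an empty annotations map on the pod template
# (names/image are placeholders, not taken from the example repo).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: argocd-bug-report
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations: {}   # empty by default; kubectl rollout restart later writes into this map
    spec:
      containers:
        - name: nginx
          image: nginx:stable
```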
To Reproduce
$ kubectl create namespace argocd-bug-report
$ argocd app create argocd-bug-report --repo https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git --path env/staging/ --dest-server https://kubernetes.default.svc --dest-namespace argocd-bug-report
$ argocd app sync argocd-bug-report
[...]
$ argocd app list | grep argocd-bug-report
argocd-bug-report https://kubernetes.default.svc argocd-bug-report default Synced Healthy <none> <none> https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git env/staging/
$ kubectl -n argocd-bug-report rollout restart deployment nginx
deployment.extensions/nginx restarted
$ argocd app list | grep argocd-bug-report
argocd-bug-report https://kubernetes.default.svc argocd-bug-report default OutOfSync Healthy <none> <none> https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git env/staging/
$ argocd app diff argocd-bug-report
$ argocd app sync argocd-bug-report
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2020-02-24T08:41:38+00:00 apps Deployment argocd-bug-report nginx OutOfSync Healthy
Name: argocd-bug-report
Project: default
Server: https://kubernetes.default.svc
Namespace: argocd-bug-report
URL: https://argocd.apps.sydney.cloudferro.com/applications/argocd-bug-report
Repo: https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git
Target:
Path: env/staging/
SyncWindow: Sync Allowed
Sync Policy: <none>
Sync Status: OutOfSync from (792fea0)
Health Status: Healthy
Operation: Sync
Sync Revision: 792fea0287adcce1a9345f870243f528e2879b41
Phase: Succeeded
Start: 2020-02-24 08:41:38 +0000 UTC
Finished: 2020-02-24 08:41:38 +0000 UTC
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
apps Deployment argocd-bug-report nginx OutOfSync Healthy deployment.apps/nginx configured
$
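For reference, kubectl rollout restart works by stamping a timestamp annotation onto the pod template, so after the restart the live Deployment contains roughly the fragment below (the timestamp is illustrative), while the desired manifest in Git still has the empty map:

```yaml
# Fragment of the live Deployment after kubectl rollout restart.
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2020-02-24T08:40:00Z"
```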
Expected behavior
Firstly, given that kubectl.kubernetes.io/restartedAt can be added by ArgoCD itself, perhaps that particular annotation should be ignored by default?
Secondly, I'd expect sync to drop that annotation and converge the application back to the Synced/Healthy state.
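Until something like that is built in, one option is a per-application ignoreDifferences entry. This is only a sketch, assuming the Application from the reproduction steps above and that jsonPointers accepts the slash in the annotation key via RFC 6901 escaping (~1):

```yaml
# Sketch of a per-application workaround, not part of the original report.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-bug-report
spec:
  project: default
  source:
    repoURL: https://gitlab.cloudferro.com/kklimonda/argocd-bug-report.git
    path: env/staging/
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd-bug-report
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        # "/" inside the annotation key is escaped as "~1" per RFC 6901
        - /spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt
```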
Screenshots
Version
$ argocd version
argocd: v1.4.2+48cced9
BuildDate: 2020-01-24T01:04:04Z
GitCommit: 48cced9d925b5bc94f6aa9fa4a8a19b2a59e128a
GitTreeState: clean
GoVersion: go1.12.6
Compiler: gc
Platform: linux/amd64
argocd-server: v1.4.1+f8721a7
BuildDate: 2020-01-22T22:59:33Z
GitCommit: f8721a73609611dd481886a78f4b7ce16ef8747b
GitTreeState: clean
GoVersion: go1.12.6
Compiler: gc
Platform: linux/amd64
Ksonnet Version: v0.13.1
Kustomize Version: Version: {Version:kustomize/v3.2.1 GitCommit:d89b448c745937f0cf1936162f26a5aac688f840 BuildDate:2019-09-27T00:10:52Z GoOs:linux GoArch:amd64}
Helm Version: v2.15.2
Kubectl Version: v1.14.0
$
@kklimonda same here. In the end I added the following to my deployment/statefulset to please ArgoCD, i.e. as a workaround:
{{- if .Values.annotations }}
annotations:
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
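In context, the idea of that workaround is to render the annotations key only when values are actually provided, so the chart never emits an empty map. A sketch of the pod-template portion of such a chart follows; .Values.annotations comes from the comment above, while the names, image, and indentation depth are illustrative and depend on the chart layout.

```yaml
# deployment.yaml (excerpt): emit annotations only when values exist, so the
# rendered manifest never contains an empty "annotations: {}" on the pod template.
spec:
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
      {{- if .Values.annotations }}
      annotations:
        {{- toYaml .Values.annotations | nindent 8 }}
      {{- end }}
    spec:
      containers:
        - name: app
          image: nginx:stable
```

With annotations left unset (or {}) in values.yaml, the key is omitted entirely from the rendered manifest, which is what seems to keep ArgoCD's diff quiet for some of the commenters below.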
Is there any option to disable such a verification mechanism for all resources in ArgoCD?
Using 2.2.x and now 2.3.2, several of my applications are showing as out-of-sync because of the restart annotation.
This seems to affect all my deployments that have an empty annotations block in the desired manifest:
template:
  metadata:
    annotations: {}
producing a diff along these lines:
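Roughly, the desired and live pod templates end up differing like this (a reconstruction, not the original screenshot; the timestamp is illustrative):

```yaml
# Desired state (from Git): the annotations map is present but empty.
template:
  metadata:
    annotations: {}
---
# Live state (in cluster): kubectl rollout restart has populated the map.
template:
  metadata:
    annotations:
      kubectl.kubernetes.io/restartedAt: "2022-04-07T10:00:00Z"
```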
I was able to solve it by deleting and re-creating the Argo application with all its resources.
> I was able to solve it by deleting and re-creating the Argo application with all its resources.

You mean... including the deployments, pods, services, etc.? Or just the ArgoCD application resource? I don't want to cause downtime, just to remove some diff.
In the meantime I tried adding an ignoreDifferences spec:
ignoreDifferences:
- group: apps
  kind: Deployment
  jqPathExpressions:
  - .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt"
which ended up looking like this
Removing the annotations: {} from the desired manifest, with and without the above ignore spec, had no impact.
The only way I found to get rid of it, without causing downtime of my deployment, is to manually edit the live manifest.
This worked for me in the argocd-cm config map:
data:
  resource.customizations.ignoreDifferences.apps_Deployment: |
    jqPathExpressions:
      - .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt"
      - if (.spec.template.metadata.annotations | length) == 0 then .spec.template.metadata.annotations else empty end
See here how patching works. It's done in a loop with del(<your jq path expression here>).
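For completeness, the data snippet above belongs in the argocd-cm ConfigMap; a full sketch, assuming a default installation in the argocd namespace, would look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd                    # assumed default installation namespace
  labels:
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations.ignoreDifferences.apps_Deployment: |
    jqPathExpressions:
      # drop the restart annotation before diffing
      - .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt"
      # drop the whole annotations map when it is empty
      - if (.spec.template.metadata.annotations | length) == 0 then .spec.template.metadata.annotations else empty end
```

The idea is that each expression is applied as del(...) to the manifests before the diff is computed, so neither the restart annotation nor an empty annotations map should show up as a difference.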
We seemingly have the same issue. Any workarounds other than deleting the resource? How does the resource get into this state?
This annotation is added by kubectl - objectrestarter.go.
It looks like a bug to me, because I could manually remove this annotation from the Deployment/DaemonSet, although it causes another restart.
@mnacharov May I know if there will be a PR for this? Thanks very much.
This doesn't seem to work for me. It's still showing a diff on the annotation:
I'm frequently getting this same restartedAt issue. Anyone manage to fix it reliably?
Removing the empty annotations: {} fixed it :+1:
Argocd v2.7.8+92949f6.dirty here. It doesn't matter if the empty annotations: {} is present or not, the app stays out of sync with the following diff:
@ragnarpa's solution is working for me, with one modification: using keys | length in the second expression. Otherwise it seems to ignore all annotations, I imagine because annotations is not a list and length always returns zero?
I'm using the following configuration:
resource.customizations.ignoreDifferences.all: |
  jqPathExpressions:
    - .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt"
    - if (.spec.template.metadata.annotations | keys | length) == 0 then .spec.template.metadata.annotations else empty end
> @ragnarpa's solution is working for me, with one modification: using keys | length in the second expression. Otherwise it seems to ignore all annotations, I imagine because annotations is not a list and length always returns zero? I'm using the following configuration:
>
> resource.customizations.ignoreDifferences.all: |
>   jqPathExpressions:
>     - .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt"
>     - if (.spec.template.metadata.annotations | keys | length) == 0 then .spec.template.metadata.annotations else empty end
Both solutions seem to give the same result: https://jqlang.github.io/jq/manual/#length (for an object, length returns the number of keys, so length and keys | length behave the same here).