argocd-image-updater
Image not updated although log says it was; Argo CD shows parameter override
I see similar open issues where the argocd-image-updater log said it had updated an image but the pod itself doesn't roll out the new one: https://github.com/argoproj-labs/argocd-image-updater/issues/186 https://github.com/argoproj-labs/argocd-image-updater/issues/431
I am using argocd-image-updater with a static tag. The log says it updated the image when a new hash is pushed, and in Argo CD the new digest shows up as a parameter override, but the pod itself doesn't update. What is odd is that the exact same config works on a different cluster with the same Argo CD server version (v2.3.4+ac8b7df).
Edit: one difference between them is that the cluster having the issue is running Kubernetes 1.22 while the other is running 1.21.
The image registry in question is ECR. In Argo CD the image parameters show the new SHA tag, but the pod doesn't update:
The new image shows up as parameter override(s).
Otherwise there is no error that I can see.
Version
argocd: v2.3.4+ac8b7df
argocd-image-updater: v0.12.0 (also tested v0.11.3)
image-updater logs
time="2022-05-23T19:43:32Z" level=trace msg="Found date 2022-05-20 18:45:41.686866901 +0000 UTC" alias=extension application=extension image_name=*****/extensions-server image_tag=prod registry=*registry*
time="2022-05-23T19:43:32Z" level=trace msg="released semaphore and terminated waitgroup"
time="2022-05-23T19:43:32Z" level=trace msg="List of available tags found: [prod]" alias=extension application=extension image_name=*****/extensions-server image_tag=prod registry=*registry*
time="2022-05-23T19:43:32Z" level=trace msg="Finding out whether to consider prod for being updateable" image="*registry*/*****/extensions-server:prod"
time="2022-05-23T19:43:32Z" level=debug msg="found 1 from 1 tags eligible for consideration" image="*registry*/*****/extensions-server:prod"
time="2022-05-23T19:43:32Z" level=trace msg="Setting dummy digest for image *registry*/*****/extensions-server:prod"
time="2022-05-23T19:43:32Z" level=info msg="Setting new image to *registry*/*****/extensions-server@sha256:ddf07c2ef51f297001b9faed3996eb00b609c2624c94f9c78ef7bae84cc9f883" alias=extension application=extension image_name=*****/extensions-server image_tag=dummy registry=*registry*
time="2022-05-23T19:43:32Z" level=trace msg="Setting Kustomize parameter *registry*/*****/extensions-server@sha256:ddf07c2ef51f297001b9faed3996eb00b609c2624c94f9c78ef7bae84cc9f883" application=extension
time="2022-05-23T19:43:32Z" level=info msg="Successfully updated image '*registry*/*****/extensions-server@dummy' to '*registry*/*****/extensions-server@sha256:ddf07c2ef51f297001b9faed3996eb00b609c2624c94f9c78ef7bae84cc9f883', but pending spec update (dry run=false)" alias=extension application=extension image_name=*****/extensions-server image_tag=dummy registry=*registry*
kubectl describe application
Name: extension
Namespace: argocd-*****
Labels: <none>
Annotations: argocd-image-updater.argoproj.io/extension.allow-tags: any
argocd-image-updater.argoproj.io/extension.update-strategy: digest
argocd-image-updater.argoproj.io/image-list: extension=*****.dkr.ecr.us-west-2.amazonaws.com/*****/extensions-server:prod
argocd-image-updater.argoproj.io/write-back-method: argocd
argocd.argoproj.io/sync-options: PruneLast=true
notifications.argoproj.io/subscribe.on-sync-succeeded.slack: argocd
API Version: argoproj.io/v1alpha1
Kind: Application
Metadata:
Creation Timestamp: 2022-05-20T21:16:35Z
Generation: 1421
Managed Fields:
API Version: argoproj.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:source:
f:kustomize:
.:
f:images:
Manager: argocd-image-updater
Operation: Update
Time: 2022-05-20T21:17:52Z
API Version: argoproj.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:argocd-image-updater.argoproj.io/extension.allow-tags:
f:argocd-image-updater.argoproj.io/extension.update-strategy:
f:argocd-image-updater.argoproj.io/image-list:
f:argocd-image-updater.argoproj.io/write-back-method:
f:argocd.argoproj.io/sync-options:
f:kubectl.kubernetes.io/last-applied-configuration:
f:notifications.argoproj.io/subscribe.on-sync-succeeded.slack:
f:spec:
.:
f:destination:
.:
f:namespace:
f:server:
f:project:
f:source:
.:
f:path:
f:repoURL:
f:targetRevision:
f:syncPolicy:
.:
f:automated:
.:
f:prune:
f:selfHeal:
f:syncOptions:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-05-23T17:43:54Z
API Version: argoproj.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:health:
.:
f:status:
f:reconciledAt:
f:resources:
f:sourceType:
f:summary:
.:
f:images:
f:sync:
.:
f:comparedTo:
.:
f:destination:
.:
f:namespace:
f:server:
f:source:
.:
f:kustomize:
.:
f:images:
f:path:
f:repoURL:
f:targetRevision:
f:revision:
f:status:
Manager: Go-http-client
Operation: Update
Time: 2022-05-23T19:43:02Z
Resource Version: 19566635
UID: c281acdf-306b-46dd-8e1f-8848be1126e4
Spec:
Destination:
Namespace: *****-prod
Server: https://kubernetes.default.svc
Project: *****
Source:
Kustomize:
Images:
*****.dkr.ecr.us-west-2.amazonaws.com/*****/extensions-server@sha256:ddf07c2ef51f297001b9faed3996eb00b609c2624c94f9c78ef7bae84cc9f883
Path: env/prod/*****/deploy/*****/extension
Repo URL: https://github.com/*****/infra-eks-cluster
Target Revision: HEAD
Sync Policy:
Automated:
Prune: true
Self Heal: true
Sync Options:
CreateNamespace=true
Status:
Health:
Status: Healthy
Reconciled At: 2022-05-23T19:57:25Z
Resources:
Health:
Status: Healthy
Kind: Service
Name: extensions-server
Namespace: *****-prod
Status: Synced
Version: v1
Group: apps
Health:
Status: Healthy
Kind: Deployment
Name: extensions-server
Namespace: *****-prod
Status: Synced
Version: v1
Source Type: Kustomize
Summary:
Images:
*****.dkr.ecr.us-west-2.amazonaws.com/*****/extensions-server:prod
Sync:
Compared To:
Destination:
Namespace: *****-prod
Server: https://kubernetes.default.svc
Source:
Kustomize:
Images:
*****.dkr.ecr.us-west-2.amazonaws.com/*****/extensions-server@sha256:ddf07c2ef51f297001b9faed3996eb00b609c2624c94f9c78ef7bae84cc9f883
Path: env/prod/*****/deploy/*****/extension
Repo URL: https://github.com/*****/infra-eks-cluster
Target Revision: HEAD
Revision: 8e8cd28b8a611cd92736035b6cebaeaa3ba48f4d
Status: Synced
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ImagesUpdated 10m ArgocdImageUpdater Successfully updated application 'extension'
Normal ImagesUpdated 8m48s ArgocdImageUpdater Successfully updated application 'extension'
Normal ImagesUpdated 6m47s ArgocdImageUpdater Successfully updated application 'extension'
Normal ImagesUpdated 4m47s ArgocdImageUpdater Successfully updated application 'extension'
Normal ImagesUpdated 2m47s ArgocdImageUpdater Successfully updated application 'extension'
Normal ImagesUpdated 46s ArgocdImageUpdater Successfully updated application 'extension'
I also got the same problem, but after debugging I found the Argo CD sync policy was responsible. After I updated the sync policy it just worked:
syncPolicy:
  automated:
    prune: true
    selfHeal: true
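For context, that stanza sits under spec in the Application manifest. A minimal sketch (all names and paths are placeholders, not taken from this thread):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: extension                # placeholder app name
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app-prod       # placeholder namespace
  source:
    repoURL: https://github.com/example/infra   # placeholder repo
    path: env/prod/extension
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true      # without automated sync, the parameter override
      selfHeal: true   # written by the image updater never reaches the pods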
I have the same issue. Some applications have their pods updated, some applications don't. They all have the same annotation, update-strategy: digest.
Kubernetes 1.24, ArgoCD v2.3.4+ac8b7df, Image Updater v0.12.0
I have all of these set and it has made no difference. If I find an answer, I'll update.
Good morning guys,
a couple of days ago I experienced the same issue mentioned in this thread. I checked the logs in the argocd-image-updater pod and found messages saying that Argo CD had successfully updated the image in the related app; however, that turned out to be wrong.
Here are the things I did to make it right:
- Delete all Kubernetes resources in the argocd namespace
- Delete the argocd namespace
- Recreate Argo CD using this command:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
(Note: please make sure you have created the argocd namespace first.)
- Since the default argocd-server service type is ClusterIP, I decided to expose it using an Ingress. You can create your own Ingress for Argo CD accordingly.
- Edit the "argocd-cm" ConfigMap and add the following lines:
data:
resource.customizations: |
networking.k8s.io/Ingress:
health.lua: |
hs = {}
hs.status = "Healthy"
return hs
- Restart the Argo CD server deployment:
kubectl rollout restart deploy -n argocd argocd-server
- Now log in to your registry account through the CLI (in my case, I am using the Docker Hub registry):
docker login
- Create a secret to store the registry credentials:
kubectl -n argocd create secret generic <secret_name> --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson
- Modify the "argocd-image-updater-config" ConfigMap and add the following lines:
data:
log.level: debug
registries.conf: |
registries:
- name: Docker Hub
prefix: docker.io
api_url: https://index.docker.io/
ping: yes
## this is detail of secret we've created before ##
## format = pullsecret:<namespace>/<secret_name> ##
credentials: pullsecret:argocd/dockerhub-johndoe
- I am not sure whether this is mandatory, but I suggest you do it: kill or delete the argocd-image-updater pod every time you make changes to its ConfigMap.
- Access your Argo CD web UI using its default password and create your app from the repo you set up in GitHub/GitLab earlier. (Do not enable the auto-sync feature yet.)
- Modify your app through the CLI and add these lines:
annotations:
argocd-image-updater.argoproj.io/git-branch: main
argocd-image-updater.argoproj.io/image-list: alias=docker.io/<dockerhub-account-name>/<repository-name>
argocd-image-updater.argoproj.io/alias.force-update: "true"
argocd-image-updater.argoproj.io/alias.update-strategy: latest
argocd-image-updater.argoproj.io/write-back-method: git:secret:argocd/<Git-access-token-name>
- Enable auto-sync of the Argo CD app through the web UI
- Wait several minutes for it to take effect.
These are the things I did with Argo CD in my Kubernetes cluster, and I can confirm that it works for me. Please do not hesitate to ask if you have other questions or suggestions.
Hopefully this brief workaround helps anyone who has experienced the same issue.
Hi there, I think I might have found a workaround for this problem.
Condition: the log shows "Successfully updated the live application spec" but Argo CD didn't update the deployment.
- What I did is disable synchronization, changing
automated:
  selfHeal: true
  prune: true
to
automated:
  selfHeal: false
  prune: false
- Synchronize the app manually; you can do it from the web UI
- Then change the values back from false to true as in step 1
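If you prefer the CLI to the web UI, the toggle above can be expressed as two merge patches (the app name extension and namespace argocd are taken from earlier in this thread; adjust to your setup and apply each with kubectl patch application extension -n argocd --type merge --patch-file <file>):

```yaml
# disable-sync.yaml -- turn off self-heal and prune so a manual sync sticks
spec:
  syncPolicy:
    automated:
      selfHeal: false
      prune: false
---
# enable-sync.yaml -- restore automated sync after the manual sync
spec:
  syncPolicy:
    automated:
      selfHeal: true
      prune: true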
Found a potential cause of this issue in some rare cases.
It appears that the implementation of https://github.com/argoproj/argo-cd/pull/5038 means that when a .argocd-source-*.yaml file exists, its contents will override any imperative changes (write-back-method: argocd) made by argocd-image-updater.
Basically, the Helm parameters in the Application spec have a lower precedence than the .argocd-source-*.yaml file parameters, which are merged over the top of them.
This caused issues for our intended implementation, which was to use the git write-back method for prod and the argocd write-back method for dev. However, since both dev and prod point to the same branch, the prod version was always taking precedence, as it was set in the .argocd-source-*.yaml file even though the Application had source.helm.parameters[] set correctly.
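For illustration, a .argocd-source-<appname>.yaml file written by the git write-back method looks roughly like the sketch below (the parameter names and values are placeholders, not taken from this thread); any helm.parameters listed here shadow the same parameters set on the Application by the argocd write-back method:

```yaml
# .argocd-source-extension.yaml (app name is a placeholder).
# Parameters here take precedence over source.helm.parameters[]
# in the live Application spec.
helm:
  parameters:
  - name: image.name        # placeholder parameter names
    value: registry.example.com/team/extensions-server
  - name: image.tag
    value: "1.2.3"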
That was the problem. I had been looking into this for two days. Thank you so much. Previously the write-back method was git (argocd-image-updater.argoproj.io/write-back-method: git), and when I tried to revert back to argocd it failed; I couldn't figure out why the image updater kept saying "Successfully updated image ... but pending spec update". The reason was these files left in the git repo.
Still got this issue, even though I use git write-back for all stages, which works correctly. However, the .argocd-source*.yaml file seems to be ignored when syncing the app.
I made sure my app is refreshed to the git write-back commit containing the most up-to-date .argocd-source.yaml, but Argo CD just doesn't recognize the changes.
Kubernetes 1.26.3, ArgoCD v2.7.3+e7891b8.dirty, Image Updater v0.12.2
Did you ever solve this issue? I'm running into the exact same thing.
Seeing the same issue. .argocd-source files just get ignored no matter what I try.
Hi, I have the same issue. Has anyone tried another write-back method to see if it works?
I had been using an "image" parameter in my Helm chart which contained registry/repo/imagename:tag. I changed it to image.name and image.tag, where image.name is registry/repo/imagename and image.tag is the tag. It is working fine now. I added the annotations below to the Application to make it work properly:
annotations:
  argocd-image-updater.argoproj.io/write-back-method: argocd
  argocd-image-updater.argoproj.io/image-list: helm-chart=regaddress/helm-test/nginx:1.x
  argocd-image-updater.argoproj.io/helm-chart.update-strategy: semver
  argocd-image-updater.argoproj.io/helm-chart.helm.image-name: helm-chart.image.name
  argocd-image-updater.argoproj.io/helm-chart.helm.image-tag: helm-chart.image.tag
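The Helm side of that split might look like this (a sketch only; the chart structure is assumed, not shown in the comment above):

```yaml
# values.yaml -- the single "image" value split into name and tag so the
# helm.image-name / helm.image-tag annotations can target each field
image:
  name: regaddress/helm-test/nginx   # registry/repo/imagename, no tag
  tag: "1.0.0"
# In the Deployment template the two are joined back together, e.g.:
#   image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"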
I had a similar issue. I was using an ImageTagTransformer together with the image updater. When I removed the newName property from the transformer and used the same image name in the deployment and the image tag transformer (keeping only the newTag property), the problem was resolved.
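A minimal sketch of the fix described above, using Kustomize's builtin ImageTagTransformer (the image name is a placeholder; the point is that "name" matches the image already used in the Deployment and "newName" is gone):

```yaml
# image-transformer.yaml -- referenced from kustomization.yaml under
# "transformers:". Only newTag is kept; renaming via newName conflicted
# with argocd-image-updater's own image override.
apiVersion: builtin
kind: ImageTagTransformer
metadata:
  name: image-tag
imageTag:
  name: registry.example.com/team/extensions-server   # placeholder
  newTag: "1.2.3"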