waiting for completion of hook and hook never succeeds
Hi,
We are seeing this issue quite often: an app sync gets stuck in "waiting for completion of hook" and the hooks never complete.
As you can see below, the application got stuck in the secret-creation phase and somehow that secret never got created.
I have stripped out all unnecessary details. This is how the secret is created and used by the job:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
    helm.sh/hook-weight: "-5"
type: Opaque
data:
  xxxx
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "-4"
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      volumes:
        - name: app-settings
          configMap:
            name: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}
        - name: app-secrets
          secret:
            secretName: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}
kubectl -n argocd logs argocd-server-768f46f469-j98h6 | grep xxx-migrations - No matching logs
kubectl -n argocd logs argocd-repo-server-57bdbf899c-9lxhr | grep xxx-migrations - No matching logs
kubectl -n argocd logs argocd-repo-server-57bdbf899c-7xvs7 | grep xxx-migrations - No matching logs
kubectl -n argocd logs argocd-server-768f46f469-tqp8p | grep xxx-migrations - No matching logs
[testadmin@server0 ~]$ kubectl -n argocd logs argocd-application-controller-0 | grep orchestrator-migrations
time="2021-08-02T02:16:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:16:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:19:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:19:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:17Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:17Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:25:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:25:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:28:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:28:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:31:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:31:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
Environment:
- 3-node RKE2 cluster
- OS: RHEL 8.4
- Kubernetes running on Azure VMs
Argo CD version: 2.0.1
Please let me know if any other info is required.
I have terminated the app sync and re-synced it, and the sync is successful now, but we cannot rely on this: if it happens during CI/CD runs, the automation we have built to install apps via the Argo CD CLI would fail.
I suspect this is fixed by https://github.com/argoproj/argo-cd/pull/6294 . The fix is available in https://github.com/argoproj/argo-cd/releases/tag/v2.0.3 . Can you try upgrading please?
Sure, thanks. We recently upgraded our develop environment to 2.0.5; this happened on our prod build, which is on 2.0.1. I will see if this reproduces on our dev branch. Thanks!
@alexmt - We are using the version of Argo CD below and seeing the same issue with the Contour Helm chart. The application is waiting for a PreSync Job to complete, whereas on the cluster I can see the job has completed.
{ "Version": "v2.1.3+d855831", "BuildDate": "2021-09-29T21:51:21Z", "GitCommit": "d855831540e51d8a90b1006d2eb9f49ab1b088af", "GitTreeState": "clean", "GoVersion": "go1.16.5", "Compiler": "gc", "Platform": "linux/amd64", "KsonnetVersion": "v0.13.1", "KustomizeVersion": "v4.2.0 2021-06-30T22:49:26Z", "HelmVersion": "v3.6.0+g7f2df64", "KubectlVersion": "v0.21.0", "JsonnetVersion": "v0.17.0" }
I have the same problem in version 2.2.0.
I have the same problem on the 2.3.0 RC1 as well
The PreSync hook, PostSync hook, and "Syncing" (while no operation is running) are the only long-pending major issues in Argo CD at the moment.
Hello. I am still seeing this in v2.2.4. The PreSync hook is scheduled, the Job starts and runs to completion, and Argo sits there spinning "Progressing" until it is terminated. To work around it, we terminate the operation and use 'sync --strategy=apply' (disabling the hook), then run our job out of band.
Kube events during the sync confirm the job succeeded. I no longer see the job/pod (per those events) when I check the namespace directly.
LAST SEEN   TYPE     REASON             OBJECT                                                      MESSAGE
22m         Normal   Scheduled          pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l  Successfully assigned dcs-prodemea-ns/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l to gke-service-nap-e2-standard-8-1oj503q-5bf9adda-f9t6
22m         Normal   Pulling            pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l  Pulling image "gcr.io/dremio-1093/accept-release:v3"
21m         Normal   Pulled             pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l  Successfully pulled image "gcr.io/dremio-1093/accept-release:v3" in 48.040095979s
21m         Normal   Created            pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l  Created container dcs-artifact
21m         Normal   Started            pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l  Started container dcs-artifact
22m         Normal   SuccessfulCreate   job/dcs-artifact-promoter0ba458b-presync-1645132298         Created pod: dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l
Let me know if I can provide any diagnostics to help.
We face the same issue in 2.2.5 as well.
I suspect this is fixed by #6294 . The fix is available in https://github.com/argoproj/argo-cd/releases/tag/v2.0.3 . Can you try upgrading please?
Does it work with 2.0.3 or 2.2.2?
I can confirm the error was fixed on 2.0.3. We recently upgraded to 2.3.3 and we are experiencing the error again.
We started experiencing this issue after upgrading to 2.3.3. Before that we were on 2.2.3. I am not 100% sure, but I do not recall having any issues with 2.2.3.
We're seeing a similar issue with the SyncFail hook, which means we can't actually terminate the sync operation.
The job doesn't exist in the target namespace, and we've tried to trick Argo by creating a job with the same name, namespace, and annotations as we'd expect to see, with a simple echo "done" action, but nothing is helping.
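A rough sketch of the kind of stand-in Job we tried (name, namespace, and hook annotation are placeholders for whatever Argo CD reports it is waiting on; as noted, it did not unblock the sync):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-syncfail-hook        # placeholder: must match the hook name Argo CD is waiting on
  namespace: target-namespace   # placeholder: must match the hook's namespace
  annotations:
    argocd.argoproj.io/hook: SyncFail
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: done
          image: busybox
          command: ["sh", "-c", "echo done"]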
Argo CD version:
{"Version":"v2.3.4+ac8b7df","BuildDate":"2022-05-18T11:41:37Z","GitCommit":"ac8b7df9467ffcc0920b826c62c4b603a7bfed24","GitTreeState":"clean","GoVersion":"go1.17.10","Compiler":"gc","Platform":"linux/amd64","KsonnetVersion":"v0.13.1","KustomizeVersion":"v4.4.1 2021-11-11T23:36:27Z","HelmVersion":"v3.8.0+gd141386","KubectlVersion":"v0.23.1","JsonnetVersion":"v0.18.0"}
To add some information here: we are running into the same issue ("waiting for completion of hook" when the hook has already completed), and it happens when we attempt to sync to a revision that is not the targetRevision of the app. When we sync an app with hooks to the same revision as the targetRevision, we do not run into this.
expand for argo version
{
"Version": "v2.3.4+ac8b7df",
"BuildDate": "2022-05-18T11:41:37Z",
"GitCommit": "ac8b7df9467ffcc0920b826c62c4b603a7bfed24",
"GitTreeState": "clean",
"GoVersion": "go1.17.10",
"Compiler": "gc",
"Platform": "linux/amd64",
"KsonnetVersion": "v0.13.1",
"KustomizeVersion": "v4.4.1 2021-11-11T23:36:27Z",
"HelmVersion": "v3.8.0+gd141386",
"KubectlVersion": "v0.23.1",
"JsonnetVersion": "v0.18.0"
}
We are running 2 application-controller replicas in an HA setup as per https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/ . I have verified we do not have a leftover instance of Argo from before it used StatefulSets.
I had a similar problem when I was configuring resource inclusions; I wrote down what happened here: https://github.com/argoproj/argo-cd/issues/10756#issuecomment-1265488657
I am still seeing this with 2.5.0-rc1
We resolved this symptom on v2.4.12+41f54aa for Apps that had many Pods by adding a resource exclusion along these lines to our config map:
data:
  resource.exclusions: |
    - apiGroups:
        - '*'
      kinds:
        - 'Pod'
      clusters:
        - '*'
Prior to this, we would have pre-sync job hooks that never completed in the Argo CD UI but had actually completed in Kubernetes. Sometimes invalidating the cluster cache would help Argo recognize that the job was completed, but most of the time it did not.
We believe the timeouts were related to needing to enumerate an excessive number of entities, which simply could never finish before the next status refresh occurred. We do not use the Argo CD UI to view the status of Pods, so this solution is fine for us. A bonus for us is that the UI is much more robust now as well 🙂
We had this issue and it was related to a customer's Job failing to initialize due to a bad secret mount. You can validate this by checking the events in the namespace the job is spun up in, to see whether it is failing to create.
Hello Argo community :)
I am fairly familiar with ArgoCD codebase and API, and I'd happily try to repay you for building such an awesome project by trying to have a stab at this issue, if there are no objections?
I would highly appreciate it!
I would also highly appreciate that!
I'm seeing this issue with v2.6.1+3f143c9
I am also seeing this issue when installing KubeVela with Argo CD v2.6.1+3f143c9:
message: >-
  waiting for completion of hook
  /ServiceAccount/kube-vela-vela-core-admission and 3 more hooks
We also had this issue and it was resolved once we set ARGOCD_CONTROLLER_REPLICAS.
Instructions here: https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/#argocd-application-controller
If the controller is managing too many clusters and uses too much memory, then you can shard clusters across multiple controller replicas. To enable sharding, increase the number of replicas in the argocd-application-controller StatefulSet and repeat the number of replicas in the ARGOCD_CONTROLLER_REPLICAS environment variable. The strategic merge patch below demonstrates the changes required to configure two controller replicas.
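Roughly, per the linked documentation (the exact field layout may differ slightly between versions), that strategic merge patch looks like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: argocd-application-controller
          env:
            # must match spec.replicas so each controller knows how many shards exist
            - name: ARGOCD_CONTROLLER_REPLICAS
              value: "2"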
I rolled back from 1.6.1 to 1.5.10. Both versions keep waiting for completion of the hook, which has already completed successfully.
I also tried @micke's recommendation (changing ARGOCD_CONTROLLER_REPLICAS from 1 to 3). Unfortunately it doesn't make a difference.
In my case I am only installing the application on a single cluster. This is the only application that is failing:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-vela
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  namespace: argocd
spec:
  destination:
    namespace: vela-system
    name: in-cluster
  project: default
  source:
    chart: vela-core
    repoURL: https://kubevelacharts.oss-accelerate.aliyuncs.com/core
    targetRevision: 1.7.3
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - ApplyOutOfSyncOnly=true
      - CreateNamespace=true
      - PruneLast=true
      - ServerSideApply=true
      - Validate=true
      - Replace=true
    retry:
      limit: 30
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m0s
I just figured out what was causing Argo to freeze on the hook. In my case the specific hook had ttlSecondsAfterFinished: 0 defined in its spec. Through Kustomize I removed this field:
patches:
  - target:
      name: pre-hook
      kind: Job
    path: patches/hook.yaml

# patches/hook.yaml
- op: remove
  path: /spec/ttlSecondsAfterFinished
Afterwards the chart finally went through! It's still a bug that should be addressed; I'm just sharing this so others can work around it.
I had this problem when I had a CR whose CRD was not created yet, plus a Job with a Sync hook.
Argo CD couldn't apply the custom resource because there was no CRD yet, and the hook started and then disappeared. I guess Argo retries syncing the CR and somehow also restarts the hook (by the way, I was using SkipDryRunOnMissingResource for the CR).
So I changed the hook to PostSync. The CR kept retrying until the CRD was created, and only after the CR was successfully created did the PostSync hook start and complete successfully.
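A minimal sketch of that kind of PostSync hook Job (names and the container here are placeholders, not from my actual chart; the only relevant change is the hook annotation):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-hook-job  # placeholder name
  annotations:
    # run only after the main sync has applied the CRD and CR
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hook
          image: busybox
          command: ["sh", "-c", "echo post-sync work"]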
Encountered similar behavior as described in this issue while upgrading from v2.5.12+9cd67b1 to v2.6.3+e05298b. Pre-upgrade hooks on different applications, with various numbers of pods and jobs, showed the same symptoms: the sync operation runs forever. I have the feeling this random event appears more frequently when using the Argo CD CLI.
Observing the same issue on v2.6.2. Post-Sync hook never completes even though the corresponding pod exited successfully.