
waiting for completion of hook and hook never succeeds

Open rajivml opened this issue 3 years ago • 83 comments

Hi,

We are seeing this issue quite often: the app sync gets stuck in "waiting for completion of hook" and the hooks never complete.

As you can see below, the application got stuck in the secret-creation phase and somehow that secret never got created.

(screenshot)

I have stripped out all unnecessary details. This is how the secret is created and used by the job:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
    helm.sh/hook-weight: "-5"
type: Opaque
data:
  xxxx

apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "-4"
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      volumes:
        - name: app-settings
          configMap:
            name: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}
        - name: app-secrets
          secret:
            secretName: {{ include "xxx.fullname" . }}-migrations-{{ .Chart.AppVersion }}

kubectl -n argocd logs argocd-server-768f46f469-j98h6 | grep xxx-migrations      - No matching logs
kubectl -n argocd logs argocd-repo-server-57bdbf899c-9lxhr | grep xxx-migrations - No matching logs
kubectl -n argocd logs argocd-repo-server-57bdbf899c-7xvs7 | grep xxx-migrations - No matching logs
kubectl -n argocd logs argocd-server-768f46f469-tqp8p | grep xxx-migrations      - No matching logs

[testadmin@server0 ~]$ kubectl -n argocd logs argocd-application-controller-0 | grep orchestrator-migrations
time="2021-08-02T02:16:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:16:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:19:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:19:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:17Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:17Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:22:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:25:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:25:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:28:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:28:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:31:25Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx
time="2021-08-02T02:31:26Z" level=info msg="Resuming in-progress operation. phase: Running, message: waiting for completion of hook /Secret/xxx-migrations-0.0.19-private4.1784494" application=xxx

Environment:

  • 3 Node RKE2 Cluster
  • OS: RHEL 8.4
  • K8s set up on Azure VMs

ArgoCD Version: 2.0.1

Please let me know if any other info is required.

rajivml avatar Aug 02 '21 02:08 rajivml

I terminated the app sync and re-synced it, and the sync is successful now. But this can't keep happening: when it does, CI/CD runs fail, and so does the automation we have built to install apps via the Argo CD CLI.

rajivml avatar Aug 02 '21 03:08 rajivml

I suspect this is fixed by https://github.com/argoproj/argo-cd/pull/6294 . The fix is available in https://github.com/argoproj/argo-cd/releases/tag/v2.0.3 . Can you try upgrading please?

alexmt avatar Aug 02 '21 20:08 alexmt

Sure, thanks. We recently upgraded our develop branch to use 2.0.5, and this happened on our prod build, which is on 2.0.1. I will see if this reproduces on our dev branch. Thanks!

rajivml avatar Aug 03 '21 18:08 rajivml

@alexmt - We are using the below version of ArgoCD and are seeing the same issue with the Contour Helm chart. The application is waiting for a PreSync Job to complete, whereas on the cluster I can see the job has completed.

{ "Version": "v2.1.3+d855831", "BuildDate": "2021-09-29T21:51:21Z", "GitCommit": "d855831540e51d8a90b1006d2eb9f49ab1b088af", "GitTreeState": "clean", "GoVersion": "go1.16.5", "Compiler": "gc", "Platform": "linux/amd64", "KsonnetVersion": "v0.13.1", "KustomizeVersion": "v4.2.0 2021-06-30T22:49:26Z", "HelmVersion": "v3.6.0+g7f2df64", "KubectlVersion": "v0.21.0", "JsonnetVersion": "v0.17.0" }

om3171991 avatar Dec 11 '21 12:12 om3171991

I have the same problem in version 2.2.0.

illagrenan avatar Dec 15 '21 10:12 illagrenan

I have the same problem on the 2.3.0 RC1 as well

pseymournutanix avatar Feb 01 '22 13:02 pseymournutanix

The PreSync hook, the PostSync hook, and "Syncing" while no operation is running are the only long-pending major issues in ArgoCD at the moment.

jaydipdave avatar Feb 17 '22 15:02 jaydipdave

Hello. I am still seeing this in v2.2.4. A PreSync hook is scheduled, the Job starts and runs to completion, and Argo sits there spinning "Progressing" until terminated. To work around it, we terminate the op, use 'sync --strategy=apply' (disabling the hook), and run our job out of band.

Kube events during the sync confirm the job success. I no longer see the job/pod (per those events) if I check the namespace directly.

LAST SEEN  TYPE    REASON            OBJECT                                                      MESSAGE
22m        Normal  Scheduled         pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l   Successfully assigned dcs-prodemea-ns/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l to gke-service-nap-e2-standard-8-1oj503q-5bf9adda-f9t6
22m        Normal  Pulling           pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l   Pulling image "gcr.io/dremio-1093/accept-release:v3"
21m        Normal  Pulled            pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l   Successfully pulled image "gcr.io/dremio-1093/accept-release:v3" in 48.040095979s
21m        Normal  Created           pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l   Created container dcs-artifact
21m        Normal  Started           pod/dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l   Started container dcs-artifact
22m        Normal  SuccessfulCreate  job/dcs-artifact-promoter0ba458b-presync-1645132298         Created pod: dcs-artifact-promoter0ba458b-presync-1645132298-hqm9l

(screenshot)

Let me know if I can provide any diagnostics to help.

aslamkhan-dremio avatar Feb 18 '22 03:02 aslamkhan-dremio

We face the same issue in 2.2.5 as well.

MariaJohny avatar Feb 23 '22 16:02 MariaJohny

I suspect this is fixed by #6294 . The fix is available in https://github.com/argoproj/argo-cd/releases/tag/v2.0.3 . Can you try upgrading please?

Does it work with 2.0.3 or 2.2.2?

MariaJohny avatar Feb 23 '22 16:02 MariaJohny

I can confirm the error was fixed on 2.0.3. We recently upgraded to 2.3.3 and we are experiencing the error again.

ceguimaraes avatar May 16 '22 07:05 ceguimaraes

We started experiencing this issue after upgrading to 2.3.3. Before that we were on 2.2.3. I am not 100% sure but I do not recall we had any issue with 2.2.3.

yuha0 avatar Jun 01 '22 18:06 yuha0

We're seeing a similar issue with the SyncFail hook, which means we can't actually terminate the sync action.

The job doesn't exist in the target namespace, and we've tried to trick Argo by creating a job with the same name, namespace, and annotations as we'd expect to see, with a simple echo "done" action, but nothing is helping.

(screenshot)

ArgoCD version:

{"Version":"v2.3.4+ac8b7df","BuildDate":"2022-05-18T11:41:37Z","GitCommit":"ac8b7df9467ffcc0920b826c62c4b603a7bfed24","GitTreeState":"clean","GoVersion":"go1.17.10","Compiler":"gc","Platform":"linux/amd64","KsonnetVersion":"v0.13.1","KustomizeVersion":"v4.4.1 2021-11-11T23:36:27Z","HelmVersion":"v3.8.0+gd141386","KubectlVersion":"v0.23.1","JsonnetVersion":"v0.18.0"}

warmfusion avatar Jun 16 '22 16:06 warmfusion

To add some information here, we are running into the same issue ("waiting for completion of hook" when the hook has already completed), and it happens when we are attempting to sync to a revision that is not the targetRevision for the app. When we sync an app with hooks to the same revision as the targetRevision, we do not run into this.

expand for argo version
{
    "Version": "v2.3.4+ac8b7df",
    "BuildDate": "2022-05-18T11:41:37Z",
    "GitCommit": "ac8b7df9467ffcc0920b826c62c4b603a7bfed24",
    "GitTreeState": "clean",
    "GoVersion": "go1.17.10",
    "Compiler": "gc",
    "Platform": "linux/amd64",
    "KsonnetVersion": "v0.13.1",
    "KustomizeVersion": "v4.4.1 2021-11-11T23:36:27Z",
    "HelmVersion": "v3.8.0+gd141386",
    "KubectlVersion": "v0.23.1",
    "JsonnetVersion": "v0.18.0"
}

We are running 2 application-controller replicas in an HA setup as per https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/. I have verified we do not have a leftover instance of Argo from before it used StatefulSets.

margueritepd avatar Sep 27 '22 19:09 margueritepd

I had a similar problem when I was configuring resource inclusions; I wrote down what happened here: https://github.com/argoproj/argo-cd/issues/10756#issuecomment-1265488657

lacarvalho91 avatar Oct 04 '22 11:10 lacarvalho91

I am still seeing this with 2.5.0-rc1

pseymournutanix avatar Oct 07 '22 08:10 pseymournutanix

We resolved this symptom on v2.4.12+41f54aa for Apps that had many Pods by adding a resource exclusion along these lines to our config map:

data:
  resource.exclusions: |
    - apiGroups:
        - '*'
      kinds:
        - 'Pod'
      clusters:
        - '*'

Prior to this, we would have pre-sync job hooks that never completed in the ArgoCD UI but had actually completed in Kubernetes. Sometimes invalidating the cluster cache would help Argo recognize the job had completed, but most of the time it did not.

We believe the timeouts were related to needing to enumerate an excessive number of entities, which simply could never finish before the next status refresh occurred. We do not use the ArgoCD UI to view the status of Pods, so this solution is fine for us. A bonus for us is that the UI is much more responsive now as well 🙂
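For reference, resource.exclusions lives in the argocd-cm ConfigMap; a minimal sketch of the full object, assuming a default installation in the argocd namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Same exclusion as above: hide Pods from every API group on every cluster.
  resource.exclusions: |
    - apiGroups:
        - '*'
      kinds:
        - 'Pod'
      clusters:
        - '*'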

cscorley avatar Oct 07 '22 16:10 cscorley

We had this issue and it was related to a customer's Job failing to initialise due to a bad secret mount. You can validate this by checking the events in the namespace where the job is spun up to see if it is failing to create.

DasJayYa avatar Oct 19 '22 04:10 DasJayYa

Hello Argo community :)

I am fairly familiar with ArgoCD codebase and API, and I'd happily try to repay you for building such an awesome project by trying to have a stab at this issue, if there are no objections?

dejanzele avatar Nov 07 '22 16:11 dejanzele

Hello Argo community :)

I am fairly familiar with ArgoCD codebase and API, and I'd happily try to repay you for building such an awesome project by trying to have a stab at this issue, if there are no objections?

I would highly appreciate that!

pritam-acquia avatar Nov 23 '22 15:11 pritam-acquia

I would also highly appreciate that!

williamcodes avatar Nov 23 '22 17:11 williamcodes

I'm seeing this issue with v2.6.1+3f143c9

vumdao avatar Feb 11 '23 10:02 vumdao

I am also seeing this issue when installing KubeVela with ArgoCD version v2.6.1+3f143c9:

    message: >-
      waiting for completion of hook
      /ServiceAccount/kube-vela-vela-core-admission and 3 more hooks

linuxbsdfreak avatar Feb 13 '23 10:02 linuxbsdfreak

We also had this issue and it was resolved once we set ARGOCD_CONTROLLER_REPLICAS.

Instructions here: https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/#argocd-application-controller

If the controller is managing too many clusters and uses too much memory, you can shard clusters across multiple controller replicas. To enable sharding, increase the number of replicas in the argocd-application-controller StatefulSet and repeat the number of replicas in the ARGOCD_CONTROLLER_REPLICAS environment variable. The strategic merge patch below demonstrates the changes required to configure two controller replicas.
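Roughly, the patch from those docs looks like this (a sketch assuming two replicas):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: argocd-application-controller
          env:
            # Must match spec.replicas so each controller knows how many shards exist.
            - name: ARGOCD_CONTROLLER_REPLICAS
              value: "2"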

micke avatar Feb 13 '23 11:02 micke

I rolled back from 2.6.1 to 2.5.10. Both versions keep waiting for completion of the hook, which has already completed successfully.

I also tried @micke's recommendation (changing ARGOCD_CONTROLLER_REPLICAS from 1 to 3). Unfortunately, it doesn't make a difference.

boedy avatar Feb 13 '23 11:02 boedy

In my case I am only installing the application on a single cluster. That is the only application that is failing.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-vela
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  namespace: argocd
spec:
  destination:
    namespace: vela-system
    name: in-cluster
  project: default
  source:
    chart: vela-core
    repoURL: https://kubevelacharts.oss-accelerate.aliyuncs.com/core
    targetRevision: 1.7.3
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
     - ApplyOutOfSyncOnly=true
     - CreateNamespace=true
     - PruneLast=true
     - ServerSideApply=true
     - Validate=true
     - Replace=true
    retry:
      limit: 30
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m0s

linuxbsdfreak avatar Feb 13 '23 12:02 linuxbsdfreak

I just figured out what was causing Argo to freeze on the hook. In my case the specific hook had ttlSecondsAfterFinished: 0 defined in the spec. Through Kustomize I removed this field:

# kustomization.yaml
patches:
  - target:
      name: pre-hook
      kind: Job
    path: patches/hook.yaml

# patches/hook.yaml
- path: "/spec/ttlSecondsAfterFinished"
  op: remove

Afterwards the chart finally went through! It's still a bug that should be addressed; I'm just sharing this so others can work around it.
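If removing the field is not an option, a variation on the same idea (a sketch only, reusing the pre-hook name from the patch target above, with placeholder image and command) would be to keep a non-zero TTL so the completed Job stays around long enough for the controller to observe it:

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-hook
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  # Keep the finished Job for 10 minutes instead of deleting it immediately.
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]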

boedy avatar Feb 13 '23 16:02 boedy

I had this problem when I had a CR whose CRD was not yet created, plus a Job with a Sync hook.

Argo CD couldn't apply the custom resource because there was no CRD yet, and the hook started and then disappeared. I guess Argo retries syncing the CR and somehow restarts the hook as well (by the way, I was using SkipDryRunOnMissingResource for the CR).

So I changed the hook to PostSync. The CR kept retrying until the CRD was created, and only after the CR was successfully created did the PostSync hook start and complete successfully.
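A rough sketch of that final arrangement, using placeholder names and a hypothetical CRD group, just to show the annotations involved:

# The CR skips the dry run while its CRD does not exist yet.
apiVersion: example.com/v1        # hypothetical group/version
kind: MyCustomResource
metadata:
  name: my-cr
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
---
# The Job only runs after the main sync (including the CR) has succeeded.
apiVersion: batch/v1
kind: Job
metadata:
  name: after-sync-job            # placeholder
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox:1.36     # placeholder
          command: ["sh", "-c", "echo done"]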

zfrhv avatar Feb 27 '23 12:02 zfrhv

Encountered behavior similar to what is described in this issue while upgrading from v2.5.12+9cd67b1 to v2.6.3+e05298b. Pre-upgrade hooks on different applications with various numbers of pods and jobs showed the same symptoms: the sync operation runs forever. I had the feeling this random event appears more frequently when using the Argo CLI.

adlnc avatar Mar 03 '23 14:03 adlnc

Observing the same issue on v2.6.2. Post-Sync hook never completes even though the corresponding pod exited successfully.

chaehni avatar Mar 09 '23 09:03 chaehni