
jqFilter - no such file or directory

Open milank78git opened this issue 2 years ago • 1 comment

Pre-requisites

  • [X] I have double-checked my configuration
  • [X] I can confirm the issue exists when I tested with :latest
  • [ ] I'd like to contribute the fix myself (see contributing guide)

What happened/what you expected to happen?

I have a simple script where I want to print the output using jqFilter.

No matter how I fill it in, it always fails with "no such file or directory":

level=error msg="executor error: fork/exec kubectl get pod./argo-server-b944b94d6-mrxnt -n argo -o json | jq -rc'.': no such file or directory"

Version

v3.4.1

Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.

metadata:
  name: argocd-apps-filter
  namespace: argo
  labels:
    example: 'true'
    workflows.argoproj.io/creator: system-serviceaccount-argo-argo-server
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: argoproj.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:example: {}
    - manager: argo
      operation: Update
      apiVersion: argoproj.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            f:workflows.argoproj.io/creator: {}
        f:spec: {}
spec:
  templates:
    - name: main
      inputs: {}
      outputs: {}
      metadata: {}
      dag:
        tasks:
          - name: status
            template: status
            arguments: {}
    - name: status
      inputs: {}
      outputs:
        parameters:
          - name: job-name
            valueFrom:
              jsonPath: '{.metadata.name}'
          - name: job-obj
            valueFrom:
              jqFilter: .
      metadata: {}
      resource:
        action: get
        manifest: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: argo-server-b944b94d6-mrxnt
            namespace: argo
  entrypoint: main
  arguments: {}
  serviceAccountName: default
  ttlStrategy:
    secondsAfterCompletion: 3000
  podGC:
    strategy: OnPodCompletion
  workflowMetadata:
    labels:
      example: 'true'

Logs from the workflow controller

kubectl logs -n argo deploy/workflow-controller | grep ${workflow}
level=info msg="capturing logs" argo=true
level=info msg="Starting Workflow Executor" version=v3.4.1
level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=argo podName=argocd-apps-xg9cx-status-1464683396 template="{"name":"status","inputs":{},"outputs":{"parameters":[{"name":"job-name3","valueFrom":{"jsonPath":"{.metadata.name}"}},{"name":"job-obj","valueFrom":{"jqFilter":"."}}]},"metadata":{},"resource":{"action":"get","manifest":"apiVersion: v1\nkind: Pod\nmetadata:\n  name: argo-server-b944b94d6-mrxnt\n  namespace: argo\n"},"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"minio:9000","bucket":"my-bucket","insecure":true,"accessKeySecret":{"name":"my-minio-cred","key":"accesskey"},"secretKeySecret":{"name":"my-minio-cred","key":"secretkey"},"key":"argocd-apps-xg9cx/argocd-apps-xg9cx-status-1464683396"}}}" version="&Version{Version:v3.4.1,BuildDate:2022-10-01T15:03:42Z,GitCommit:0546fef0b096d84c9e3362d2b241614e743ebe97,GitTag:v3.4.1,GitTreeState:clean,GoVersion:go1.18.6,Compiler:gc,Platform:linux/amd64,}"
level=info msg="Loading manifest to /tmp/manifest.yaml"
level=info msg="kubectl get -f /tmp/manifest.yaml -o json"
level=info msg="Resource: argo/pod./argo-server-b944b94d6-mrxnt. SelfLink: api/v1/namespaces/argo/pods/argo-server-b944b94d6-mrxnt"
level=info msg="Saving resource output parameters"
level=info msg="[kubectl get pod./argo-server-b944b94d6-mrxnt -o jsonpath={.metadata.name} -n argo]"
level=info msg="Saved output parameter: job-name3, value: argo-server-b944b94d6-mrxnt"
level=info msg="[kubectl get pod./argo-server-b944b94d6-mrxnt -n argo -o json | jq -rc '.']"
level=error msg="executor error: fork/exec kubectl get pod./argo-server-b944b94d6-mrxnt -n argo -o json | jq -rc'.': no such file or directory"
level=info msg="sub-process exited" argo=true error=""
Error: exit status 1

Logs from your workflow's wait container

kubectl logs -n argo -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded

milank78git avatar Oct 19 '22 05:10 milank78git

@rohankmr414 Could you take a look? Seems related to the recent change.

terrytangyuan avatar Oct 19 '22 12:10 terrytangyuan

It looks like this was caused by

https://github.com/argoproj/argo-workflows/commit/38b55e39cca03e54da1f38849b066b36e03ba240

BUT, it may have been a pre-existing issue that went unnoticed because JQ is not well tested or widely used.

alexec avatar Nov 22 '22 12:11 alexec

The error here is actually not what you think it is. The command that does not exist is not jq. The first argument of exec.Command must be a binary, not a shell script.
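To see why (a minimal sketch, not the executor's actual code — the resource name is copied from the log above): Go's exec.Command does not invoke a shell, so the entire pipeline string is treated as the name of a single executable. Because the string contains a path separator, Go skips the $PATH lookup and tries to fork/exec it directly, which fails with exactly this error:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The whole pipeline string is treated as ONE executable name.
    	// Since it contains a "/", Go fork/execs it directly — no such
    	// file exists, hence "no such file or directory".
    	cmd := exec.Command("kubectl get pod./argo-server-b944b94d6-mrxnt -n argo -o json | jq -rc '.'")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("executor error:", err)
    	}
    }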

@rohankmr414 the fix is more involved. When it used sh, it could use a pipe. We cannot do this now. Instead, do the piping in memory (see the sketch after this list):

  1. Run kubectl and capture its output into output := &bytes.Buffer{}; cmd.Stdout = output.
  2. Run jq and set cmd.Stdin = output.
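A minimal sketch of that in-memory pipe, assuming a hypothetical helper jqFilterResource and hypothetical resource/namespace/filter parameters rather than the executor's actual function signature:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // jqFilterResource is a hypothetical helper: it runs kubectl, captures
    // its JSON output in memory, then feeds that buffer to jq via stdin —
    // no shell pipe needed, so each exec.Command gets a real binary as its
    // first argument.
    func jqFilterResource(resource, namespace, filter string) (string, error) {
    	// 1. Run kubectl and capture its output into a buffer.
    	output := &bytes.Buffer{}
    	kubectl := exec.Command("kubectl", "get", resource, "-n", namespace, "-o", "json")
    	kubectl.Stdout = output
    	if err := kubectl.Run(); err != nil {
    		return "", fmt.Errorf("kubectl: %w", err)
    	}

    	// 2. Run jq with the captured JSON as its stdin.
    	filtered := &bytes.Buffer{}
    	jq := exec.Command("jq", "-rc", filter)
    	jq.Stdin = output
    	jq.Stdout = filtered
    	if err := jq.Run(); err != nil {
    		return "", fmt.Errorf("jq: %w", err)
    	}
    	return filtered.String(), nil
    }

    func main() {
    	out, err := jqFilterResource("pod/argo-server-b944b94d6-mrxnt", "argo", ".")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(out)
    }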

alexec avatar Nov 22 '22 12:11 alexec