
Error executing Litmus tests via the Litmus Portal

Open · PoojaS4444 opened this issue 2 years ago · 1 comment

What happened:

I imported this workflow.yaml file to delete a pod of the app "telehealth" in the "smartedge-apps" namespace, but it shows a "Failed to create a new workflow" error. I have shared the YAML file below; please check.

What you expected to happen:

I expected the Litmus workflow to be scheduled successfully.

Where can this issue be corrected? (optional)

Litmus Portal

How to reproduce it (as minimally and precisely as possible):

  1. Log in to the Litmus Portal.
  2. On the Home page, select "Schedule a workflow".
  3. Select the Self-Agent and click Next.
  4. Select "Import a workflow using YAML".
  5. Choose the YAML file from the device and press Next.
  6. In Workflow Settings, select Next.
  7. In "Schedule a new Litmus workflow", select "Schedule now".
  8. Finally, select Finish.
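Before uploading at step 5, it can help to rule out simple formatting problems in the file itself, since the portal's "Failed to create a new workflow" message gives little detail. As one illustrative pre-check (a hypothetical helper, not part of Litmus), YAML forbids tab characters in indentation, which is a common cause of manifests failing to parse after copy-pasting:

```python
def yaml_indent_precheck(text: str) -> list[str]:
    """Report lines whose leading whitespace contains a tab (invalid in YAML)."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Leading whitespace = everything before the first non-space character.
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            problems.append(f"line {lineno}: tab used for indentation")
    return problems

# A deliberately broken two-line sample: the second line is tab-indented.
sample = "spec:\n\tentrypoint: argowf-chaos\n"
print(yaml_indent_precheck(sample))  # → ['line 2: tab used for indentation']
```

An empty result only means the indentation characters are sane; a full YAML parser (or `kubectl apply --dry-run`) would still be needed to catch structural errors.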

Anything else we need to know?:

(screenshot of the error attached)

Pasting the workflow.yaml file:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: argowf-chaos-pod-delete-
  namespace: litmus
  labels:
    subject: "{{workflow.parameters.appNamespace}}_telehealth"
spec:
  entrypoint: argowf-chaos
  serviceAccountName: argo-chaos
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  arguments:
    parameters:
      - name: adminModeNamespace
        value: "litmus"
      - name: appNamespace
        value: "smartedge-apps"
  templates:
    - name: argowf-chaos
      steps:
        - - name: install-experiment
            template: install-experiment
        - - name: run-chaos
            template: run-chaos
        - - name: revert-chaos
            template: revert-chaos

    - name: install-experiment
      inputs:
        artifacts:
          - name: install-experiment
            path: /tmp/pod-delete.yaml
            raw:
              data: |
                apiVersion: litmuschaos.io/v1alpha1
                description:
                  message: |
                    Deletes a pod belonging to a deployment/statefulset/daemonset
                kind: ChaosExperiment
                metadata:
                  name: pod-delete
                spec:
                  definition:
                    scope: Namespaced
                    permissions:
                      - apiGroups:
                          - ""
                          - "apps"
                          - "batch"
                          - "litmuschaos.io"
                        resources:
                          - "deployments"
                          - "jobs"
                          - "pods"
                          - "pods/log"
                          - "events"
                          - "configmaps"
                          - "chaosengines"
                          - "chaosexperiments"
                          - "chaosresults"
                        verbs:
                          - "create"
                          - "list"
                          - "get"
                          - "patch"
                          - "update"
                          - "delete"
                      - apiGroups:
                          - ""
                        resources:
                          - "nodes"
                        verbs:
                          - "get"
                          - "list"
                    image: "litmuschaos/go-runner:latest"
                    imagePullPolicy: Always
                    args:
                    - -c
                    - ./experiments -name pod-delete
                    command:
                    - /bin/bash
                    env:
                    - name: TOTAL_CHAOS_DURATION
                      value: '80'
                    # Period to wait before and after injection of chaos in sec
                    - name: RAMP_TIME
                      value: '0'
                    # provide the kill count
                    - name: KILL_COUNT
                      value: '1'
                    - name: FORCE
                      value: 'true'
                    - name: CHAOS_INTERVAL
                      value: '60'
                    - name: LIB
                      value: 'litmus'
                    labels:
                      name: pod-delete
      container:
        image: litmuschaos/k8s:latest
        command: [sh, -c]
        args:
          [
            "kubectl apply -f /tmp/pod-delete.yaml -n {{workflow.parameters.adminModeNamespace}}",
          ]

    - name: run-chaos
      inputs:
        artifacts:
          - name: run-chaos
            path: /tmp/chaosengine.yaml
            raw:
              data: |
                apiVersion: litmuschaos.io/v1alpha1
                kind: ChaosEngine
                metadata:
                  name: telehealth-pod-delete-chaos
                  namespace: {{workflow.parameters.adminModeNamespace}}
                  labels:
                    context: "{{workflow.parameters.appNamespace}}_telehealth"
                spec:
                  appinfo:
                    appns: "smartedge-apps"
                    applabel: "app=telehealth,pod-template-hash=78964954dc"
                    appkind: daemonset
                  jobCleanUpPolicy: retain
                  engineState: 'active'
                  chaosServiceAccount: litmus-admin
                  experiments:
                    - name: pod-delete
                      spec:
                        components:
                          env:
                            - name: TOTAL_CHAOS_DURATION
                              value: "80"
                            - name: CHAOS_INTERVAL
                              value: "60"
                            - name: FORCE
                              value: "true"
      container:
        image: litmuschaos/litmus-checker:latest
        args: ["-file=/tmp/chaosengine.yaml", "-saveName=/tmp/engine-name"]

    - name: revert-chaos
      container:
        image: litmuschaos/k8s:latest
        command: [sh, -c]
        args:
          [
            "kubectl delete chaosengine telehealth-pod-delete-chaos -n {{workflow.parameters.adminModeNamespace}}",
          ]
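Another frequent reason a workflow is rejected is a step referring to a template name that is not defined, or an entrypoint that matches no template. The manifest above does line up on this point, which can be confirmed with a small consistency check. The sketch below is hypothetical Python (not a Litmus or Argo API); the dict mirrors only the template and step names from the pasted file:

```python
# Minimal mirror of the pasted Workflow's template/step structure.
workflow = {
    "spec": {
        "entrypoint": "argowf-chaos",
        "templates": [
            {"name": "argowf-chaos",
             "steps": [[{"name": "install-experiment", "template": "install-experiment"}],
                       [{"name": "run-chaos", "template": "run-chaos"}],
                       [{"name": "revert-chaos", "template": "revert-chaos"}]]},
            {"name": "install-experiment"},
            {"name": "run-chaos"},
            {"name": "revert-chaos"},
        ],
    },
}

def validate(wf: dict) -> list[str]:
    """Return a list of template-reference problems (empty list = consistent)."""
    spec = wf["spec"]
    defined = {t["name"] for t in spec["templates"]}
    problems = []
    if spec["entrypoint"] not in defined:
        problems.append(f"entrypoint {spec['entrypoint']!r} is not a defined template")
    for t in spec["templates"]:
        for group in t.get("steps", []):       # steps is a list of parallel groups
            for step in group:
                if step["template"] not in defined:
                    problems.append(
                        f"step {step['name']!r} references missing template "
                        f"{step['template']!r}"
                    )
    return problems

print(validate(workflow))  # → [] (all references resolve)
```

Since the name references resolve, the portal error more likely comes from the YAML not parsing as uploaded (e.g. indentation lost when pasting) than from the workflow's logical structure.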

— PoojaS4444, Jul 18 '22 00:07

Hi, can I please get some help on this?

— PoojaS4444, Jul 23 '22 12:07

Hi @PoojaS4444, sorry for not being able to assist on this here; we usually troubleshoot in the litmus community channel in the Kubernetes Slack. Would you please join and ask your question there? https://k8s.slack.com/

— neelanjan00, Sep 22 '22 12:09