
Extra Args list from SCALE UI is not used for deployed app

[Open] VladFlorinIlie opened this issue 2 years ago

App Name

custom-app

SCALE Version

22.02.1

App Version

0.20.1539_5.1.31

Application Events

No Recent Events

Application Logs

Prometheus-related logs (not useful for debugging this problem).
Here is an extract from the output of the k3s kubectl get pod -o yaml command:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ix-net",
          "interface": "eth0",
          "ips": [
              "172.16.0.168"
          ],
          "mac": "92:23:3d:15:b5:f7",
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "ix-net",
          "interface": "eth0",
          "ips": [
              "172.16.0.168"
          ],
          "mac": "92:23:3d:15:b5:f7",
          "default": true,
          "dns": {}
      }]
  creationTimestamp: "2022-08-11T14:51:44Z"
  generateName: prometheus-custom-app-9ccccfd78-
  labels:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: custom-app
    pod-template-hash: 9ccccfd78
  name: prometheus-custom-app-9ccccfd78-lv8tt
  namespace: ix-prometheus
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: prometheus-custom-app-9ccccfd78
    uid: 3b6ef9fe-1892-4aab-a39d-73d2f00bce0f
  resourceVersion: "201534"
  uid: 26d02a33-4682-4669-84b9-357bd9d45763
spec:
  containers:
  - env:
    - name: PUID
      value: "568"
    - name: USER_ID
      value: "568"
    - name: UID
      value: "568"
    - name: UMASK
      value: "2"
    - name: UMASK_SET
      value: "2"
    - name: PGID
      value: "568"
    - name: GROUP_ID
      value: "568"
    - name: GID
      value: "568"
    - name: NVIDIA_VISIBLE_DEVICES
      value: void
    image: prom/prometheus:latest
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: 9090
      timeoutSeconds: 5
    name: prometheus-custom-app
    ports:
    - containerPort: 9090
      name: main
      protocol: TCP
    readinessProbe:
      failureThreshold: 5
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: 9090
      timeoutSeconds: 5
    resources:
      limits:
        cpu: "4"
        memory: 8Gi
      requests:
        cpu: 10m
        memory: 50Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      runAsNonRoot: false
    startupProbe:
      failureThreshold: 60
      initialDelaySeconds: 10
      periodSeconds: 5
      successThreshold: 1
      tcpSocket:
        port: 9090
      timeoutSeconds: 2
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/prometheus
      name: config
    - mountPath: /prometheus
      name: db
    - mountPath: /shared
      name: shared
    - mountPath: /tmp
      name: temp
    - mountPath: /var/logs
      name: varlogs
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-lk2q2
      readOnly: true
  dnsConfig:
    options:
    - name: ndots
      value: "1"
  dnsPolicy: ClusterFirst
  enableServiceLinks: false
  initContainers:
  - command:
    - /bin/sh
    - -c
    - |
      /bin/bash <<'EOF'
      echo "Automatically correcting permissions..."
      echo "Automatically correcting permissions for /etc/prometheus..."
      if nfs4xdr_getfacl && nfs4xdr_getfacl | grep -qv "Failed to get NFSv4 ACL"; then
        echo "NFSv4 ACLs detected, using nfs4_setfacl to set permissions..."
        nfs4_setfacl -R -a A:g:568:RWX '/etc/prometheus'
      else
        echo "No NFSv4 ACLs detected, trying chown/chmod..."
        chown -R :568 '/etc/prometheus'
        chmod -R g+rwx '/etc/prometheus'
      fi
      echo "Automatically correcting permissions for /prometheus..."
      if nfs4xdr_getfacl && nfs4xdr_getfacl | grep -qv "Failed to get NFSv4 ACL"; then
        echo "NFSv4 ACLs detected, using nfs4_setfacl to set permissions..."
        nfs4_setfacl -R -a A:g:568:RWX '/prometheus'
      else
        echo "No NFSv4 ACLs detected, trying chown/chmod..."
        chown -R :568 '/prometheus'
        chmod -R g+rwx '/prometheus'
      fi
      echo "increasing inotify limits..."
      ( sysctl -w fs.inotify.max_user_watches=524288 || echo "error setting inotify") && ( sysctl -w fs.inotify.max_user_instances=512 || echo "error setting inotify")

      EOF
    image: tccr.io/truecharts/multi-init:v0.0.1@sha256:d947b94180365c8b7294b610dcd7138f7997134050a95bcd52bdef957247f33a
    imagePullPolicy: IfNotPresent
    name: prepare
    resources:
      limits:
        cpu: "4"
        memory: 8Gi
      requests:
        cpu: 10m
        memory: 50Mi
    securityContext:
      privileged: true
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/prometheus
      name: config
    - mountPath: /prometheus
      name: db
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-lk2q2
      readOnly: true
  nodeName: ix-truenas
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 568
    fsGroupChangePolicy: OnRootMismatch
    runAsGroup: 0
    runAsUser: 0
    supplementalGroups:
    - 568
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: shared
  - emptyDir: {}
    name: temp
  - emptyDir: {}
    name: varlogs
  - name: kube-api-access-lk2q2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-08-11T14:51:46Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-08-11T14:51:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-08-11T14:51:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-08-11T14:51:44Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://c06d17345ff8995e549f7ea8a72977854f93053e3a4a3c0ff6cde570717377bc
    image: prom/prometheus:latest
    imageID: docker-pullable://prom/prometheus@sha256:56e7f18e05dd567f96c05046519760b356f52450c33f6e0055a110a493a41dc4
    lastState: {}
    name: prometheus-custom-app
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-08-11T14:51:47Z"
  hostIP: 192.168.1.254
  initContainerStatuses:
  - containerID: docker://7e7935d6c9318aea595e6ae467ddcfd127f05254d4942a761c339e56e4287296
    image: sha256:26499f7a2033f3437a1aa2ce3ded338eac1e21a0ebc319bf45050e6ee55296f9
    imageID: docker-pullable://tccr.io/truecharts/multi-init@sha256:d947b94180365c8b7294b610dcd7138f7997134050a95bcd52bdef957247f33a
    lastState: {}
    name: prepare
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://7e7935d6c9318aea595e6ae467ddcfd127f05254d4942a761c339e56e4287296
        exitCode: 0
        finishedAt: "2022-08-11T14:51:46Z"
        reason: Completed
        startedAt: "2022-08-11T14:51:46Z"
  phase: Running
  podIP: 172.16.0.168
  podIPs:
  - ip: 172.16.0.168
  qosClass: Burstable
  startTime: "2022-08-11T14:51:44Z"

Application Configuration

(screenshot of the Extra Args list configured in the SCALE UI)

I also tried splitting the value into separate arguments.

Describe the bug

The extra arguments configured in the SCALE UI are not passed to the container. The args field is also missing from the YAML extracted from the pod.
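For reference, if the Extra Args were applied, the container spec in the pod YAML above would be expected to carry an args list. The sketch below shows roughly what that fragment would look like; the flag shown is an illustrative Prometheus option, not the value from the original report (the actual configured arguments are only visible in the missing screenshot):

```yaml
# Expected (but missing) fragment of the deployed pod spec:
spec:
  containers:
  - name: prometheus-custom-app
    image: prom/prometheus:latest
    args:                         # absent from the actual pod YAML above
    - --web.enable-lifecycle      # illustrative extra argument, not from the report
```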

To Reproduce

  1. Create a custom-app with a docker image of your choice
  2. Add at least one argument into the extra arguments list

Expected Behavior

The extra argument(s) are passed to the container.

Screenshots

With the configuration specified above, the Prometheus container is started only with its default arguments: (screenshot of the running container's arguments)

Additional Context

I've read and agree with the following

  • [X] I've checked all open and closed issues and my issue is not there.

VladFlorinIlie avatar Aug 11 '22 17:08 VladFlorinIlie

After looking into it a bit, it looks like the UI injects both the command and the args into .Values.controller.extraArgs and .Values.controller.command.

I'll see if I should flatten them one level up and/or adjust common. @Ornias1993

stavros-k avatar Aug 11 '22 18:08 stavros-k

I prefer them on .Values.args
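To illustrate the two layouts under discussion, here is a hypothetical sketch of the values structure; the argument values are placeholders, and only the key names (.Values.controller.command, .Values.controller.extraArgs, .Values.args) come from the comments above:

```yaml
# What the UI currently produces (per the earlier comment):
controller:
  command:
    - /bin/prometheus          # illustrative command
  extraArgs:
    - --web.enable-lifecycle   # illustrative extra argument

# Flattened one level up, as preferred here:
args:
  - --web.enable-lifecycle
```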

PrivatePuffin avatar Aug 12 '22 07:08 PrivatePuffin

This issue is locked to prevent necro-posting on closed issues. Please create a new issue or contact staff on Discord if the problem persists.

truecharts-admin avatar Feb 03 '23 13:02 truecharts-admin