
Using affinity in the cassandra helm chart

kobihikri opened this issue 2 years ago • 6 comments

Name and Version

bitnami/cassandra:9.2.8

What steps will reproduce the bug?

Try to pass a value to be used as "affinity" in the chart.

Expected behavior: affinity is set

Actual behavior: An error is reported

Are you using any custom parameters or values?

I am passing affinity as follows:

{"podAntiAffinity": {"preferredDuringSchedulingIgnoredDuringExecution": [{"podAffinityTerm": {"labelSelector": {"matchLabels": [{"app.kubernetes.io/instance": "cassandra"}, {"app.kubernetes.io/name": "cassandra"}]}}, "namespaces": ["cassandra"], "topologyKey": "kubernetes.io/hostname"}]}}

What is the expected behavior?

Affinity should be set to:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: cassandra
              app.kubernetes.io/name: cassandra
          namespaces:
            - cassandra
          topologyKey: kubernetes.io/hostname
        weight: 1

What do you see instead?

I am receiving the following error:

| time="2022-07-20T12:31:39Z" level=fatal msg="rpc error: code = InvalidArgument desc = application spec for cassandra is invalid: InvalidSpecError: Unable to generate manifests in 3rd-party-charts/cassandra: rpc error: code = Unknown desc = `helm template . --name-template cassandra --namespace cassandra --kube-version 1.22 --set dbUser.password=password --set externalAccess.enabled=true --set externalAccess.service.type=LoadBalancer --set externalAccess.service.ports.external=9044 --set externalAccess.autoDiscovery.enabled=true --set tolerations=[{effect: NoSchedule\\, key: dedicated\\, operator: Equal\\, value: cassandra}] --set affinity={podAntiAffinity: {preferredDuringSchedulingIgnoredDuringExecution: [{podAffinityTerm: {labelSelector: {matchLabels: [{app.kubernetes.io/instance: cassandra}\\, {app.kubernetes.io/name: cassandra}]}}\\, namespaces: [cassandra]\\, topologyKey: kubernetes.io/hostname}]}} --set replicaCount=1 --set dbUser.user=admin --set service.type=LoadBalancer --set serviceAccount.create=true --set rbac.create=true --set nodeSelector.nodegroup=cassandra --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta2 --api-versions autoscaling/v2beta2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions batch/v1beta1 --api-versions batch/v1beta1/CronJob --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions crd.k8s.amazonaws.com/v1alpha1 --api-versions crd.k8s.amazonaws.com/v1alpha1/ENIConfig --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions discovery.k8s.io/v1beta1 --api-versions discovery.k8s.io/v1beta1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions events.k8s.io/v1beta1 --api-versions events.k8s.io/v1beta1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta1 --api-versions flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions node.k8s.io/v1beta1 --api-versions node.k8s.io/v1beta1/RuntimeClass --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions policy/v1beta1 --api-versions 
policy/v1beta1/PodDisruptionBudget --api-versions policy/v1beta1/PodSecurityPolicy --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions storage.k8s.io/v1beta1 --api-versions storage.k8s.io/v1beta1/CSIStorageCapacity --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions v1/PersistentVolume --api-versions v1/PersistentVolumeClaim --api-versions v1/Pod --api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --api-versions vpcresources.k8s.aws/v1beta1 --api-versions vpcresources.k8s.aws/v1beta1/SecurityGroupPolicy --include-crds` failed exit status 1: Error: failed parsing --set data: key map \", {app\" has no value"
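The failure happens in Helm's --set parser, not in the chart: --set has its own mini-grammar in which commas separate key=value pairs and dots separate path segments, so a JSON-style value full of commas and braces is split into bogus keys (hence the final complaint, key map ", {app" has no value). A minimal sketch of what --set does understand, using hypothetical values for illustration (dotted paths, [n] list indices, and backslash-escaped dots inside key names):

helm template . \
  --set 'nodeSelector.nodegroup=cassandra' \
  --set 'tolerations[0].key=dedicated' \
  --set 'tolerations[0].operator=Equal' \
  --set 'podLabels.app\.kubernetes\.io/name=cassandra'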

Additional information

The actual helm calls are performed by argocd app create:

argocd app create --upsert cassandra --repo https://****@github.com/****/infra-helm-repository/ --path 3rd-party-charts/cassandra --revision master --dest-namespace cassandra --dest-server https://kubernetes.default.svc --auto-prune --sync-policy automated -p replicaCount=1 -p dbUser.user=admin -p dbUser.password=password -p service.type=LoadBalancer -p externalAccess.enabled=true -p externalAccess.service.type=LoadBalancer -p externalAccess.service.ports.external=9044 -p externalAccess.autoDiscovery.enabled=true -p serviceAccount.create=true -p rbac.create=true -p tolerations="[{"effect": "NoSchedule", "key": "dedicated", "operator": "Equal", "value": "cassandra"}]" -p nodeSelector.nodegroup=cassandra -p affinity="{"podAntiAffinity": {"preferredDuringSchedulingIgnoredDuringExecution": [{"podAffinityTerm": {"labelSelector": {"matchLabels": [{"app.kubernetes.io/instance": "cassandra"}, {"app.kubernetes.io/name": "cassandra"}]}}, "namespaces": ["cassandra"], "topologyKey": "kubernetes.io/hostname"}]}}"
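Because argocd app create translates each -p into a helm --set, deeply nested values like the affinity block above are fragile on the command line. A more robust route, sketched here under the assumption that a file named affinity-values.yaml (hypothetical) holding the tolerations and affinity blocks is committed next to the chart in the repo, is to reference it as a values file and keep only the scalar parameters as -p flags:

argocd app create --upsert cassandra \
  --repo https://****@github.com/****/infra-helm-repository/ \
  --path 3rd-party-charts/cassandra \
  --revision master \
  --dest-namespace cassandra \
  --dest-server https://kubernetes.default.svc \
  --auto-prune --sync-policy automated \
  --values affinity-values.yaml \
  -p replicaCount=1 -p dbUser.user=admin -p dbUser.password=password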

kobihikri avatar Jul 20 '22 12:07 kobihikri

I am not able to reproduce the issue; please see below how I am deploying the chart with the podAntiAffinity configuration.

  • I modified the default values.yaml, applying these changes:
-affinity: {}
+affinity:
+  podAntiAffinity:
+    preferredDuringSchedulingIgnoredDuringExecution:
+      - podAffinityTerm:
+          labelSelector:
+            matchLabels:
+              app.kubernetes.io/instance: cassandra
+              app.kubernetes.io/name: cassandra
+          namespaces:
+            - cassandra
+          topologyKey: kubernetes.io/hostname
+        weight: 1
  • Then, I rendered the helm chart as follows:
$ helm template bitnami/cassandra -f values.yaml -s templates/statefulset.yaml
  • Taking a look at the resulting statefulset, we can see that the affinity is set to the values I provided:
# Source: cassandra/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-cassandra
  namespace: "default"
  labels:
    app.kubernetes.io/name: cassandra
    helm.sh/chart: cassandra-9.2.8
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: cassandra
      app.kubernetes.io/instance: release-name
  serviceName: release-name-cassandra-headless
  podManagementPolicy: OrderedReady
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cassandra
        helm.sh/chart: cassandra-9.2.8
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
    spec:

      serviceAccountName: release-name-cassandra
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: cassandra
                  app.kubernetes.io/name: cassandra
              namespaces:
              - cassandra
              topologyKey: kubernetes.io/hostname
            weight: 1
      securityContext:
        fsGroup: 1001
      containers:
        - name: cassandra
          command:
            - bash
            - -ec
            - |
              # Node 0 is the password seeder
              if [[ $POD_NAME =~ (.*)-0$ ]]; then
                  echo "Setting node as password seeder"
                  export CASSANDRA_PASSWORD_SEEDER=yes
              else
                  # Only node 0 will execute the startup initdb scripts
                  export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
              fi
              /opt/bitnami/scripts/cassandra/entrypoint.sh /opt/bitnami/scripts/cassandra/run.sh
          image: docker.io/bitnami/cassandra:4.0.5-debian-11-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: CASSANDRA_CLUSTER_NAME
              value: cassandra
            - name: CASSANDRA_SEEDS
              value: "release-name-cassandra-0.release-name-cassandra-headless.default.svc.cluster.local"
            - name: CASSANDRA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-cassandra
                  key: cassandra-password
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CASSANDRA_USER
              value: "cassandra"
            - name: CASSANDRA_NUM_TOKENS
              value: "256"
            - name: CASSANDRA_DATACENTER
              value: dc1
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_KEYSTORE_LOCATION
              value: "/opt/bitnami/cassandra/certs/keystore"
            - name: CASSANDRA_TRUSTSTORE_LOCATION
              value: "/opt/bitnami/cassandra/certs/truststore"
            - name: CASSANDRA_RACK
              value: rack1
            - name: CASSANDRA_TRANSPORT_PORT_NUMBER
              value: "7000"
            - name: CASSANDRA_JMX_PORT_NUMBER
              value: "7199"
            - name: CASSANDRA_CQL_PORT_NUMBER
              value: "9042"
          envFrom:
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  nodetool info | grep "Native Transport active: true"
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  nodetool status | grep -E "^UN\\s+${POD_IP}"
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 5
          lifecycle:
            preStop:
              exec:
                command:
                  - bash
                  - -ec
                  - nodetool drain
          ports:
            - name: intra
              containerPort: 7000
            - name: tls
              containerPort: 7001
            - name: jmx
              containerPort: 7199
            - name: cql
              containerPort: 9042
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/cassandra

      volumes:
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app.kubernetes.io/name: cassandra
          app.kubernetes.io/instance: release-name
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
  • Pay special attention to the following section:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: cassandra
                  app.kubernetes.io/name: cassandra
              namespaces:
              - cassandra
              topologyKey: kubernetes.io/hostname
            weight: 1
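
The same structure can also be expressed in --set's own grammar rather than JSON; a sketch (note the backslash-escaped dots inside the label keys, and the [0] list index):

helm template bitnami/cassandra -s templates/statefulset.yaml \
  --set 'affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight=1' \
  --set 'affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey=kubernetes.io/hostname' \
  --set 'affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.namespaces[0]=cassandra' \
  --set 'affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchLabels.app\.kubernetes\.io/instance=cassandra' \
  --set 'affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchLabels.app\.kubernetes\.io/name=cassandra'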

carrodher avatar Jul 22 '22 06:07 carrodher

Hi,

Thank you for taking the time to review and answer. My problem isn’t with using a values.yaml file, but with passing helm parameters according to the official documentation.

In particular, there is an “affinity” Helm parameter which is supposed to receive a value that is then rendered into the chart; this is what doesn’t work for me.

Best regards, Kobi.


kobihikri avatar Jul 22 '22 06:07 kobihikri

It seems this is not an issue related to the Bitnami Cassandra Helm chart, but rather with how the application or environment is being used/configured, since the Helm chart is able to render the affinity properly when it is passed in values.yaml or via the equivalent --set flags.

For information regarding the application itself, customization of its content, or questions about the use of the technology or infrastructure, we highly recommend checking the forums and user guides made available by the project behind the application or the technology.

That said, we will keep this ticket open until the stale bot closes it just in case someone from the community adds some valuable info.
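
For a single command-line flag, Helm 3.10 (released shortly after this thread) also added --set-json, which accepts the whole block as real JSON; a sketch:

helm template cassandra bitnami/cassandra \
  --set-json 'affinity={"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"weight":1,"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"cassandra","app.kubernetes.io/name":"cassandra"}},"namespaces":["cassandra"],"topologyKey":"kubernetes.io/hostname"}}]}}'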

carrodher avatar Jul 22 '22 08:07 carrodher

Thanks for your reply,

The Helm chart uses a Helm parameter which, it seems, is being rendered incorrectly in the chart code:

Please see the documentation here:

https://github.com/bitnami/charts/tree/master/bitnami/cassandra

[screenshot of the affinity parameter entry in the chart's README]

And the Helm chart code here:

https://github.com/bitnami/charts/blob/master/bitnami/cassandra/templates/statefulset.yaml

[screenshot of the affinity block in templates/statefulset.yaml]
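
For reference, charts in this repository typically render the value through the bitnami/common helper library; a sketch of the usual pattern, not copied verbatim from the chart:

{{- if .Values.affinity }}
affinity: {{- include "common.tplvalues.render" (dict "value" .Values.affinity "context" $) | nindent 8 }}
{{- end }}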

According to the documentation, it should be possible to set affinity by passing a parameter value directly to Helm, without using a values.yaml file.

The code shows that support for this was intended as well.

If you know that passing "affinity" via --set works correctly, could you kindly share a sample so I can test it?

Best regards, Kobi.


kobihikri avatar Jul 22 '22 09:07 kobihikri

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] avatar Aug 07 '22 01:08 github-actions[bot]

Any progress on this?


kobihikri avatar Aug 07 '22 06:08 kobihikri

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

github-actions[bot] avatar Aug 13 '22 01:08 github-actions[bot]