
podAntiAffinity example does not work

Open lanmarti opened this issue 4 years ago • 4 comments

Welcome!

  • [X] Yes, I've searched similar issues on GitHub and didn't find any.
  • [X] Yes, I've searched similar issues on the Traefik community forum and didn't find any.

What version of Traefik's Helm Chart are you using?

10.1.1

What version of Traefik are you using?

2.4.9

What did you do?

Attempted to use the predefined anti-affinity block given as an example in values.yaml:

  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - {{ template "traefik.name" . }}
      topologyKey: failure-domain.beta.kubernetes.io/zone

What did you see instead?

Helm fails on {{ template "traefik.name" . }} with the error message:

Error: failed to parse values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"template \"traefik.name\" .":interface {}(nil)}

What is your environment & configuration?

Kubernetes 1.20, Helm v3.6.0

Additional Information

I'm attempting to add pod anti-affinity to the deployment without using hostNetwork. The goal is to get pods scheduled on different nodes to increase availability. I would like to be able to set the selectors for the affinity rules without having to hardcode any values.

I don't use Helm often, but from what I've gathered through some quick Google searches, Helm does not render template expressions inside the values.yaml file.

Basically, how can one add podAntiAffinity rules that dynamically fill in the app.kubernetes.io/instance and/or app.kubernetes.io/name selectors?
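For context, the failure is a plain YAML parsing problem: values.yaml is parsed as YAML before any templating happens (Helm never renders template expressions inside values.yaml), so an unquoted {{ ... }} is an invalid map key. A minimal sketch of the other pitfall, purely for illustration: quoting the expression makes the file parse, but the string is then passed through literally instead of being rendered to the chart name.

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          # parses fine, but ends up verbatim in the pod spec, not as the chart name
          - '{{ template "traefik.name" . }}'
      topologyKey: failure-domain.beta.kubernetes.io/zone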

lanmarti avatar Aug 11 '21 08:08 lanmarti

Indeed. Also, the matched label app would not be set either (it should be app.kubernetes.io/name). Though your error would suggest we can not re-use variables generated in templates/_helpers.tpl when setting values.

One way to get it working would be to set arbitrary labels:

deployment:
  podLabels:
    foo: bar

Then use that label when setting your anti-affinity:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: foo
          operator: In
          values:
          - bar
      topologyKey: kubernetes.io/hostname

faust64 avatar Aug 11 '21 10:08 faust64

I'm trying to avoid hardcoding labels like this as it might cause issues with our deployment tools. While the release name can easily be configured through our tools, overriding those specific labels is a lot less convenient. If no other options are possible, this is indeed how I will handle the labels.

I'm not very familiar with Helm, but would something akin to the following be possible?

values.yaml:

podAntiAffinity:
  # mutually exclusive with affinity
  enabled: true # default false
affinity: {} # mutually exclusive with podAntiAffinity

_podtemplate.tpl

affinity:
{{- if .Values.podAntiAffinity.enabled }}
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "traefik.name" . }}
              helm.sh/chart: {{ template "traefik.chart" . }}
              app.kubernetes.io/managed-by: {{ .Release.Service }}
              app.kubernetes.io/instance: {{ .Release.Name }}
          topologyKey: kubernetes.io/hostname
{{- else }}
  {{- with .Values.affinity }}
  {{- toYaml . | nindent 2 }}
  {{- end }}
{{- end }}

Basically: if podAntiAffinity.enabled, create a podAntiAffinity rule matching the default labels. If it is not enabled, add affinity rules as defined in the affinity field.

By introducing a new podAntiAffinity field instead of modifying the existing affinity field, existing users of the affinity field should be unaffected.

If this is possible, perhaps podAntiAffinity could also have a field to specify whether the rule should be preferredDuringScheduling or requiredDuringScheduling.
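A possible shape for such a values block (hypothetical field names, not something the chart offers today):

podAntiAffinity:
  # mutually exclusive with affinity
  enabled: true # default false
  # hypothetical: which rule type the template should emit
  scheduling: preferredDuringSchedulingIgnoredDuringExecution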

As I don't use Helm often, I have no idea if this is feasible, let alone a proper solution.

lanmarti avatar Aug 11 '21 11:08 lanmarti

Maybe, though the topologyKey for the initial rule may not always be kubernetes.io/hostname -- in some cloud-hosted clusters, you would probably rely on something like a zone -- maybe add another variable for this.

As you point out, being able to switch from preferred to required affinities could make sense as well.

And if we keep a case for .Values.affinity: once again, we may wonder which labels could be used without "hardcoding labels". Then we're not really fixing anything, rather addressing a specific case while introducing complexity -- when plain affinity, to be honest, remains the most common use case.

Sure, the current sample is broken, though I'm not convinced that's a reason to change templates that otherwise work. The current chart does not lock you into any scenario: you can set anti-affinities OR affinities, OR both; preferred or required affinities, or both.

If you don't want to add podLabels, then knowing that you are applying the official chart, we could assume your {{ .Release.Name }} to be traefik in all cases ... Thus, another take on it would be to set your affinity rule with:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app.kubernetes.io/instance
          operator: In
          values:
          - traefik
      topologyKey: kubernetes.io/hostname
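For reference, the selector above relies on the labels the chart's standard helpers put on the pods. Assuming chart 10.x defaults and a release actually named traefik, the rendered pod labels should look roughly like this (shown as rendered values, not template source; double-check against your own helm template output):

app.kubernetes.io/name: traefik
app.kubernetes.io/instance: traefik
helm.sh/chart: traefik-10.1.1
app.kubernetes.io/managed-by: Helm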

faust64 avatar Aug 11 '21 16:08 faust64

I took a look at how the Keycloak helm chart handles this.

They process the affinity field as a string instead of a map; the following is their default value for affinity:

# Pod affinity
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            {{- include "keycloak.selectorLabels" . | nindent 10 }}
          matchExpressions:
            - key: app.kubernetes.io/component
              operator: NotIn
              values:
                - test
        topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              {{- include "keycloak.selectorLabels" . | nindent 12 }}
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: NotIn
                values:
                  - test
          topologyKey: failure-domain.beta.kubernetes.io/zone

It seems that by taking the value of 'affinity' as a string and processing it through the tpl function (https://github.com/codecentric/helm-charts/blob/master/charts/keycloak/templates/statefulset.yaml#L166-L169) one can use {{ include }} in the affinity rules.
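Transposed to this chart, the rendering side would look roughly like the following in _podtemplate.tpl -- a sketch only, assuming affinity were redefined as a string value; this is not the chart's current code:

{{- with .Values.affinity }}
affinity:
  {{- tpl . $ | nindent 2 }}
{{- end }}

Here tpl renders the string against the root context ($), so expressions like {{ include "traefik.name" . }} inside the value get expanded at install time.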

For completeness' sake, keycloak.selectorLabels is defined in their _helpers.tpl:

{{/*
Common labels
*/}}
{{- define "keycloak.labels" -}}
helm.sh/chart: {{ include "keycloak.chart" . }}
{{ include "keycloak.selectorLabels" . }}
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "keycloak.selectorLabels" -}}
app.kubernetes.io/name: {{ include "keycloak.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

With Keycloak's approach, if you change the release name, the selector labels for the affinity rules get adjusted accordingly. Furthermore, people who desire high availability simply have to change the number of replicas, and the default affinity rules will do a decent job of spreading out the pods. People with more specific needs can still adjust affinity rules accordingly.

Providing a similar approach in the current traefik chart would be a breaking change (affinity changes from a map to a string), so I'm guessing you aren't keen on switching. Therefore, would there be a way for me to override the way affinity is handled now? E.g. could I override the affinity rules generated for the Traefik deployment by using this chart as a subchart somehow? I'm more accustomed to Kustomize than Helm, so I'm not quite sure what can and can't be changed through templates etc.
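For the subchart route, the usual pattern is a thin wrapper chart that declares traefik as a dependency and sets its values under a traefik: key. Note this only lets you set values; it does not let you change how the dependency templates affinity. A sketch, with illustrative names and versions (the wrapper chart itself is hypothetical):

Wrapper Chart.yaml:

apiVersion: v2
name: traefik-wrapper
version: 0.1.0
dependencies:
  - name: traefik
    version: 10.1.1
    repository: https://helm.traefik.io/traefik

Wrapper values.yaml:

traefik:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - traefik # must match the actual app.kubernetes.io/instance label, i.e. the release name
        topologyKey: kubernetes.io/hostname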

lanmarti avatar Aug 17 '21 09:08 lanmarti