ArgoCD doesn't pass environment variables to kustomize
Checklist:
- [x] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- [x] I've included steps to reproduce the bug.
- [x] I've pasted the output of `argocd version`.
Describe the bug
When I specify environment variables for the repo server, they are not passed through to the kustomize process. This seemingly makes it impossible to set e.g.
- HELM_CACHE_HOME
- HELM_CONFIG_HOME
- HELM_DATA_HOME
so that kustomize can pick up e.g. a helm plugin I've installed using an initContainer. Instead, ArgoCD seems to inject (or allow kustomize to set) different values.
In the logs below, the failing example runs with these environment variables, despite their being set to other values in the Kubernetes YAML:
- HELM_CONFIG_HOME=/tmp/kustomize-helm-818165465/helm
- HELM_CACHE_HOME=/tmp/kustomize-helm-818165465/helm/.cache
- HELM_DATA_HOME=/tmp/kustomize-helm-818165465/helm/.data
To be explicit, we are following the ArgoCD docs at https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#using-initcontainers.
We were able to exec into the repo server container and successfully run e.g. `helm pull` on the relevant chart, the same pull that fails when invoked via kustomize.
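For illustration only (a generic shell sketch, not ArgoCD code): the behavior we observe is consistent with the child process being launched with an explicit environment that replaces, rather than inherits, the variables exported on the repo server:

```shell
# Variable exported in the parent, as the repo server env settings would be.
export HELM_CONFIG_HOME=/helm-working-dir/config

# A plain child process inherits the exported value.
inherited=$(sh -c 'echo "$HELM_CONFIG_HOME"')

# A child launched with an explicit override, as in the env=[...] of the
# error below, sees only the override.
overridden=$(env HELM_CONFIG_HOME=/tmp/kustomize-helm-818165465/helm sh -c 'echo "$HELM_CONFIG_HOME"')

echo "$inherited"   # /helm-working-dir/config
echo "$overridden"  # /tmp/kustomize-helm-818165465/helm
```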
To Reproduce
Deploy ArgoCD using the community helm chart with the following `values.yaml`:

repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2
  # https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#using-initcontainers
  env:
    - name: HELM_CACHE_HOME
      value: /helm-working-dir/cache # we had the same results setting all three variables to `/helm-working-dir`
    - name: HELM_CONFIG_HOME
      value: /helm-working-dir/config
    - name: HELM_DATA_HOME
      value: /helm-working-dir/data
  initContainers:
    - name: helm-s3-authentication
      image: alpine/helm:3.8.1
      command: [ "/bin/sh", "-c" ]
      args:
        - apk --no-cache add curl bash;
          apk upgrade;
          helm plugin install https://github.com/hypnoglow/helm-s3.git;
          helm repo add chart_repo s3://{redacted}; # this passes, because the s3 plugin is discoverable at this time
      volumeMounts:
        - name: helm-working-dir
          mountPath: /helm-working-dir
      env:
        - name: HELM_CACHE_HOME
          value: /helm-working-dir/cache
        - name: HELM_CONFIG_HOME
          value: /helm-working-dir/config
        - name: HELM_DATA_HOME
          value: /helm-working-dir/data
Then use kustomize to hydrate a helm chart stored in an s3 bucket (which is achieved using the helm s3 plugin):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: {redacted}
    valuesFile: values.yaml
    releaseName: {redacted}
    version: {redacted}
    repo: s3://{redacted}
    namespace: {redacted}
Expected behavior
The environment variables would be picked up, the helm s3 plugin would be discovered, and the application would deploy (authenticating via the IRSA configuration in the background).
Version
argocd: v2.8.4+c279299
BuildDate: 2023-09-13T19:12:09Z
GitCommit: c27929928104dc37b937764baf65f38b78930e59
GitTreeState: clean
GoVersion: go1.20.6
Compiler: gc
Platform: linux/amd64
Logs
Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): `kustomize build <path to cached source>/{redacted} --enable-helm` failed exit status 1: Error: Error: could not find protocol handler for: s3 : unable to run: 'helm pull --untar --untardir <path to cached source>/{redacted} --repo s3://{redacted} {redacted} --version {redacted}' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-818165465/helm HELM_CACHE_HOME=/tmp/kustomize-helm-818165465/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-818165465/helm/.data] (is 'helm' installed?): exit status 1
We are facing the same issue. Is there a workaround available?
We've been working around this by propagating `helmGlobals` to every single application we deploy. It does work, but omitting it leads to the above error and a confused developer who doesn't know the relatively arcane solution to this problem:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: REDACTED
helmCharts: [REDACTED]
helmGlobals:
  configHome: /helm-working-dir
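Since this has to be remembered for every application, one way to automate the workaround is a sketch like the following (the `apps/` directory layout and the append-if-missing check are assumptions for illustration, not part of our actual setup):

```shell
# Stamp helmGlobals.configHome into every kustomization.yaml that lacks it,
# so individual developers don't need to know the arcane setting themselves.
# The apps/demo directory below is a made-up example.
mkdir -p apps/demo
printf 'apiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\n' > apps/demo/kustomization.yaml

for f in apps/*/kustomization.yaml; do
  if ! grep -q '^helmGlobals:' "$f"; then
    printf 'helmGlobals:\n  configHome: /helm-working-dir\n' >> "$f"
  fi
done
```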