Helm lookup Function Support
Hello,
Happy New Year, guys!
I have a requirement to build the image path by reading the dockerRegistryIP value from a ConfigMap, so that I don't have to ask the user explicitly where the registry is located.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
Helm 3 introduced support for this with a lookup function, through which a ConfigMap can be read at render time like this:
{{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.registryURL }}
But the lookup function returns nil when templates are rendered with "helm template" or a dry run, so dereferencing a field on the nil result fails with an error like this:
"nil pointer evaluating interface {}.registryURL Use --debug flag to render out invalid YAML"
The solution proposed on Stack Overflow is to use "helm template --validate" instead of "helm template".
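Another common way to avoid the error inside the chart itself, without --validate, is to guard the lookup result before dereferencing it. A minimal sketch (the ConfigMap name, the registryURL key and the dockerRegistryIP value are placeholders for this example):
{{- /* fall back to a value from values.yaml when lookup returns nothing, e.g. under plain "helm template" */ -}}
{{- $cm := lookup "v1" "ConfigMap" "default" "my-configmap" -}}
{{- $registry := .Values.dockerRegistryIP -}}
{{- if and $cm $cm.data }}
{{- $registry = $cm.data.registryURL }}
{{- end }}
image: {{ printf "%s/%s:%s" $registry .Values.image.repository .Values.image.tag }}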
Can you guys add support for this?
Right now I am populating the docker registry IP like this, but with this kustomize-plugin approach I lose the ability to render the values.yaml file as a config screen through which the user can override certain values, i.e. the fix for one issue has led to another:
kubectl -n argocd get cm argocd-cm -o yaml
apiVersion: v1
data:
  configManagementPlugins: |
    - name: kustomized-helm
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | helm template $ARGOCD_APP_NAME --namespace $ARGOCD_APP_NAMESPACE . > all.yaml && kustomize build"]
    - name: kustomized
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | kustomize build"]
Note that even if we allowed configuring Argo CD to append the --validate arg when running the helm template command, the repo-server would still need to be given API server credentials (i.e. mount a service account token) in order to perform the lookup. We would never give Kubernetes credentials to the repo-server by default (though you are welcome to modify your deployment in your environment), so there would not be much value in adding a --validate option.
Since you would need a customized repo-server anyway, you can already accomplish this today using a wrapper script around the helm binary which appends the argument (coupled with the service account given to the repo-server).
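For illustration, such a wrapper could be as small as the sketch below. The paths, the helm.bin rename, and appending --validate only to the template subcommand are assumptions for the example; the repo-server also still needs a service account with permission to read the looked-up objects.
#!/bin/sh
# hypothetical /usr/local/bin/helm wrapper placed ahead of the real binary (renamed to helm.bin)
if [ "$1" = "template" ]; then
  # append --validate so lookup/Capabilities query the live cluster
  exec /usr/local/bin/helm.bin "$@" --validate
fi
exec /usr/local/bin/helm.bin "$@"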
@jessesuen, I guess this workaround is only possible with the in-cluster configuration, and won't work for external clusters.
Ah yes, you are right about that unfortunately.
@jessesuen just coming across this issue and I'm running into the same. There is a duplicate issue here as well: https://github.com/argoproj/argo-cd/issues/3640
You mentioned two things to accomplish as a workaround for Argo not supporting this:
- Add a service account for Argo + mount the token -- This is straightforward and would be easy to implement.
- "using a wrapper script around the helm binary which appends the argument in the script" -- This I don't really get.
Can you expand more on the wrapper script? How would one inject that into a standard Argo deployment?
Hi @jessesuen, @Gowiem,
any updates on this?
Thanks in advance, Dave
@dvcanton I tried the Argo plugin / wrapper script approach that @jessesuen mentioned after asking about it directly in the Argo Slack. You can find more about that by looking at the plugins documentation.
Unfortunately, that solution seemed overly hacky and pretty esoteric to me and my team. Instead we've now moved towards not using lookup in our charts and copy/pasting certain configuration manually instead. It's not great and I wish Argo would support this, but it doesn't seem like there is enough momentum unfortunately.
A lot of charts use built-in objects such as Capabilities to provide backward compatibility for old APIs. Capabilities.APIVersions works properly only with the --validate flag, because without it only API versions are returned, not the available resources. There is an example in the grafana chart: https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/ingress.yaml#L7
As for Capabilities, the helm template command supports setting capabilities manually, ref https://github.com/argoproj/argo-cd/issues/3594
@kvaps take a look at the example which I posted.
{{- $newAPI := .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}}
It returns false because without the --validate flag only API versions are returned, not the resources.
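For context, the chart then branches on that check, so a wrong answer silently changes which API version gets rendered. A simplified sketch of that kind of conditional (not the exact grafana template):
{{- $newAPI := .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}}
{{- if $newAPI }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress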
@randrusiak, it works for me:
# helm template . --set ingress.enabled=true --include-crds > /tmp/1.yaml
# helm template . --api-versions networking.k8s.io/v1/Ingress --set ingress.enabled=true --include-crds > /tmp/2.yaml
# diff -u /tmp/1.yaml /tmp/2.yaml
@@ -399,7 +399,7 @@
         emptyDir: {}
 ---
 # Source: grafana/templates/ingress.yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: RELEASE-NAME-grafana
@@ -417,9 +417,12 @@
       paths:
         - path: /
+          pathType: Prefix
           backend:
-            serviceName: RELEASE-NAME-grafana
-            servicePort: 80
+            service:
+              name: RELEASE-NAME-grafana
+              port:
+                number: 80
My idea was that ArgoCD could provide the repo-server a list of api-versions from the destination Kubernetes API, e.g.:
kubectl api-versions
will return all available API versions for the cluster.
Not sure if lookup function support can be implemented with the same simplicity, as it already requires direct access to the cluster.
@kvaps I understand how it works on the helm level, but I don't know how to pass this additional flag --api-versions networking.k8s.io/v1/Ingress via the Argo manifest. It's still unclear to me. Could you explain that to me? I'd appreciate your help.
Actually my note was meant more for contributors than for users :) They could implement passing api-versions from the argocd-application-controller to the argocd-repo-server via an API call.
At the current stage, I think there is nothing you can do. The only workaround for you is to add a serviceAccount to your repo-server and use the --validate flag for helm template (you would need to create a small shell wrapper script for the helm command, or use a custom plugin). Unfortunately this is less secure and would work only with a single cluster (the current one).
Another option for you is to hardcode those parameters somewhere, e.g. save the output of the following command:
kubectl api-versions | awk '{printf " --api-versions " $1 } END{printf "\n"}'
And pass it to helm in any way that suits you, e.g. you can still use a wrapper script for helm, something like this:
cat /usr/local/bin/helm
#!/bin/sh
exec /usr/local/bin/helm.bin $HELM_EXTRA_ARGS "$@"
where:
- /usr/local/bin/helm.bin - the original helm binary
- $HELM_EXTRA_ARGS - an extra environment variable for your repo-server
or use a custom plugin.
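To make that concrete, the collected flags could be wired into the repo-server as that environment variable, for example with a strategic-merge patch like the sketch below. The flag list shown is only an illustration of what the kubectl/awk pipeline above might produce for a given cluster, and the container name may differ in your install.
# hypothetical patch for the argocd-repo-server Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          env:
            - name: HELM_EXTRA_ARGS
              value: "--api-versions v1 --api-versions apps/v1 --api-versions networking.k8s.io/v1"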
We also ran into the issue with the non-working lookup function. Background is that we want to make sure that for a certain service a secure random password is generated instead of having a hardcoded default. If desired, the user can explicitly set their own password, but most people don't. Since we are using Helm's random function, the password is newly generated with each helm upgrade (resp. helm template), so the password would not remain stable. So we use the lookup function to check if the secret holding the password already exists and only generate the password if it doesn't, so effectively it will only be generated initially on "helm install". With the non-working lookup function in Argo, we have the issue that the password is regenerated on each sync, wreaking quite some havoc, as you might guess.
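For reference, the pattern looks roughly like the sketch below (the secret name and the values key are placeholders); the point is that an already-stored password wins over a freshly generated one:
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-app-password" -}}
{{- $password := .Values.password | default (randAlphaNum 32) -}}
{{- if $existing }}
{{- /* reuse the stored value so upgrades keep the password stable */ }}
{{- $password = index $existing.data "password" | b64dec }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
  name: my-app-password
stringData:
  password: {{ $password | quote }}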
Our goal is to keep the helm chart usage as simple as possible and require as few parameters as possible for simple installations. So I would like to keep "generate a secure (and stable) random password" as the default for "pure" Helm usage. Is there a way to find out in the helm chart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.
Any update on this issue?
Ran into this same problem today :( more context in https://cloud-native.slack.com/archives/C01TSERG0KZ/p1635024460105000
Same thing happens with aws-load-balancer-controller's mutating webhook that defines tls key, cert and CA: https://github.com/aws/eks-charts/blob/f4be91b5ae4a2959e821940a77d50dd0424841c1/stable/aws-load-balancer-controller/templates/_helpers.tpl It can't reuse the previously defined keys if it can't access the cluster, thus producing an Argo app that's always out of sync
Ran into this today :(
Do you have any timeline when the lookup will be available?
Ran into this today :(
Do you have any timeline when the lookup will be available?
It looks like you have overlooked my question, so I wanted to ask again. Is there any plan to get this in, and when?
I ran into the same issue, but maybe the lack of this function is a good reason to move to Flux v2, which already supports it:
https://github.com/fluxcd/helm-operator/issues/335
FYI, this project provides a decent workaround https://github.com/kuuji/helm-external-val
@13013SwagR how do you use this plugin inside a template for ArgoCD? Can you provide an example?
Thx
Is there a way to find out in the helmchart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.
In case this is still relevant for anyone (@jgoeres ?), a workaround we found for a similar issue is to perform a lookup for Namespace resources. Under 'helm template' it is empty.
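A minimal sketch of that check (purely illustrative; it only distinguishes a client-side render from a server-side one, it cannot tell Argo CD apart from a plain "helm template"):
{{- /* illustrative only: detect whether lookups can see the cluster */ -}}
{{- $clusterVisible := not (empty (lookup "v1" "Namespace" "" "")) -}}
{{- if not $clusterVisible }}
{{- /* client-side render (plain "helm template" or Argo CD's default render): fall back or fail here */ -}}
{{- end }}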
ciao @chkp-alexgl long time not talking, how are you? :)
Hi all,
I also ran into this problem today. I had the use case of fetching an annotation from an existing service account.
Since the lookup function does not work, I created a Helm plugin to mitigate the issue, at least for me.
Project URL: https://github.com/jkroepke/helm-kubectl
It uses the helm plugin downloader syntax, so it integrates neatly with ArgoCD.
Example
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cortex
spec:
  project: default
  source:
    helm:
      fileParameters:
        - name: "config.alertmanager.storage.azure.user_assigned_id"
          path: "kubectl://infra-cortex/sa/cortex/jsonpath={.metadata.annotations.azure\\\\.workload\\\\.identity/client-id}"
Check out the README for installation and usage.
I was also looking for a solution to use helm lookup, or maybe some equivalent solution using Kustomize. Our use case looks like this:
- We have a single gitops repo which we apply on a whole bunch of clusters
- In this repo I preferably don't want to define a different version for each cluster (values for charts)
- Preferably I rely on the lookup function, so ArgoCD can look up the values from a ConfigMap that we ship by default on each cluster.
- This configmap also has an RBAC policy that allows reading the values.
- Preferably I would like to allow reading these values by the ArgoCD service account only (see the RBAC sketch at the end of this comment).
To make it more practical:
- We use terraform to provision a cluster
- This piece of terraform also creates a configmap with SecurityGroups and SubnetGroups
- Within ArgoCD we would like to look up those values when doing Crossplane deployments to provision things like RDS, etc.
From a security point of view I don't really see a problem when this is about in-cluster target. For other clusters this might require some OIDC integration I suppose.
Even if I do the helm install from my local machine, there is not really an issue with this approach, as a user who is able to do a helm install locally already has permissions on the cluster to do this (an RBAC cluster role on my ConfigMap).
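To illustrate the "readable by the ArgoCD service account only" point from the list above, the RBAC could be scoped to that single ConfigMap. A sketch with made-up names and namespaces:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-cluster-values
  namespace: cluster-config
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-values"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-read-cluster-values
  namespace: cluster-config
subjects:
  - kind: ServiceAccount
    name: argocd-repo-server
    namespace: argocd
roleRef:
  kind: Role
  name: read-cluster-values
  apiGroup: rbac.authorization.k8s.io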
Is there any recommended workaround from the argocd peeps to achieve the same kind of idea as what @marcofranssen is describing? Instead of saving values in config maps, is there an option to save the values in argocd that could be loaded inside the template somehow?
This is not ideal, but we ended up generating a cluster-variables ConfigMap (values populated from the cluster-bootstrapper script) and installed it with the Argo helm chart (extraObjects). This custom ConfigMap then gets mapped into the repo server as environment variables.
extraObjects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-variables
      namespace: argo-cd
    data:
      awsAccount: "123456789"
      awsRegion: "region"
      clusterName: "abc"

repoServer:
  envFrom:
    - configMapRef:
        name: cluster-variables
Finally, all data fields become available in the repo server and you can write a custom plugin to substitute and deploy helm charts. Something like:
configManagementPlugins: |
  - name: helm-envsubst
    init:
      command: ["/bin/sh", "-c"]
      args: ["helm dependency build"]
    generate:
      command: ["sh", "-c"]
      args: ['helm template $ARGOCD_APP_NAME . --namespace $ARGOCD_APP_NAMESPACE --set ${ARGOCD_ENV_APP_VALUES_FILES} | /usr/local/bin/envsubst ']
Note: I, unfortunately, haven't had time to write the plugin in a way that it works correctly with the value files/argo app values.
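As an illustration of what that plugin substitutes: the rendered chart output just carries literal shell-style placeholders, which envsubst resolves from the environment variables that the cluster-variables ConfigMap provides to the repo-server. A hypothetical snippet of such rendered output:
# produced by "helm template", before the pipe into envsubst
repository: ${awsAccount}.dkr.ecr.${awsRegion}.amazonaws.com/my-app
clusterName: ${clusterName}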
Hi @hahasheminejad, in which file did you put this block of code?
repoServer:
  envFrom:
    - configMapRef:
        name: cluster-variables
Hi @akessner,
It is defined in the Helm values file that you override, here: https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L1849
If you don't deploy ArgoCD via helm, you can add (or patch) that block in the repo server's deployment manifest.
I ran into the same issue. I am using a Helm chart that auto-generates a TLS secret and checks whether it already exists with the lookup function. The Helm chart is in a git repository, so every commit to the repository triggers an autosync from ArgoCD, updating the TLS secret.