support additional methods of providing credentials in helm chart installation
Is your feature request related to a problem? Please describe.
tl;dr: as far as I can tell, the only way to install this chart is by providing credentials in-line with a helm install, which precludes secrets-handling solutions like secrets-store-csi-driver. Since we rely on the secrets-store-csi-driver to handle secrets for our GitOps deployments, we effectively cannot deploy this chart.
My findings:
As a newcomer to Sumo Logic and this chart, I tried to deploy it in a GitOps fashion (ArgoCD) by leveraging our existing secrets-store-csi-driver + secrets-store-csi-driver-provider-aws configuration instead of manually providing credentials on the CLI.
The secrets-store-csi-driver combo exposes secrets stored off-cluster (e.g. in SecretsManager) and provisions them as volumes when a workload with permission requests them. A pod spec mounts the volume, and something like a secretKeyRef can then be used to read the value of the synced Kubernetes Secret.
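For context, the SecretProviderClass I reference below looks roughly like the sketch here (the secrets-store-csi-driver-provider-aws object definition uses placeholder names and JSON keys for illustration; none of these names come from the chart):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: sumologic-secrets
spec:
  provider: aws
  # Sync the mounted objects into a Kubernetes Secret so they can be read via
  # secretKeyRef (the sync only happens while some pod mounts the volume).
  secretObjects:
  - secretName: sumologic-secrets
    type: Opaque
    data:
    - objectName: sumologicKeyId
      key: sumologicKeyId
    - objectName: sumologicAccessKey
      key: sumologicAccessKey
  parameters:
    objects: |
      # "sumo-credentials" and the accessId/accessKey JSON keys are placeholders
      - objectName: "sumo-credentials"
        objectType: "secretsmanager"
        jmesPath:
        - path: accessId
          objectAlias: sumologicKeyId
        - path: accessKey
          objectAlias: sumologicAccessKey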
I was able to hack around some of the credentials handling by creating the SecretProviderClass and supplying values to this chart such as:
fluentd:
  logs:
    extraEnvVars:
    - name: SUMOLOGIC_ACCESSID
      valueFrom:
        secretKeyRef:
          name: sumologic-secrets
          key: sumologicKeyId
    - name: SUMOLOGIC_ACCESSKEY
      valueFrom:
        secretKeyRef:
          name: sumologic-secrets
          key: sumologicAccessKey
    extraVolumes:
    - name: sumologic-secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "sumologic-secrets"
    extraVolumeMounts:
    - name: sumologic-secrets
      mountPath: "/mnt/secrets-store"
      readOnly: true
This mostly seemed to work, except I then realized there's a bunch of Terraform that has to run to create a secret with a hardcoded name. That secret never gets created, because the shell script that runs the Terraform is executed by the setup job, which cannot be configured to obtain secrets in this way.
Describe the solution you'd like
I would like to be able to provide credentials to this chart in more flexible ways -- in this case, via a volume mount + secretKeyRef.
Ideally, I'd only have to define the method of obtaining the secret once, and all components would know how to use it.
Describe alternatives you've considered
I tried hacking around passing the credentials in-line by explicitly configuring volumes and volumeMounts in the fluentd configuration, to no avail.
It seems possible that a quick solution would be to configure the setup job to use the same volume/volumeMount/secretKeyRef configuration as defined for the component (fluentd in this case).
Additional context
I am only configuring logs in this case. I have disabled all other telemetry collection, which is why I focus on the .Values.fluentd path in this ticket.
[EDIT] I spoke too soon. My solution appeared to work because of the leftover secret/sumologic, so I mistakenly thought the job had performed its function. That isn't true, and I currently don't have a good solution for this (the below only partially works for me -- the way the chart maps in IAM roles for service accounts doesn't work as expected; I'll see if I can chase that down too).
I was able to verify that, after patching job.yaml as below, the secrets-store-csi values above let me use this chart with volume-based secrets.
Assuming the source repo is checked out at /tmp/sumologic-kubernetes-collection:
% cat sumologic-kubernetes-collection.patch
--- /tmp/sumologic-kubernetes-collection/deploy/helm/sumologic/templates/setup/job.yaml 2022-05-31 10:12:08.952122000 -0600
+++ sumologic-kubernetes-collection/deploy/helm/sumologic/templates/setup/job.yaml 2022-05-31 10:30:45.025797000 -0600
@@ -41,6 +41,9 @@
         configMap:
           name: {{ template "sumologic.metadata.name.setup.configmap-custom" . }}
           defaultMode: 0777
+{{- if .Values.fluentd.logs.extraVolumes }}
+{{ toYaml .Values.fluentd.logs.extraVolumes | indent 6 }}
+{{- end }}
       containers:
       - name: setup
         image: {{ .Values.sumologic.setup.job.image.repository }}:{{ .Values.sumologic.setup.job.image.tag }}
@@ -53,6 +56,9 @@
           mountPath: /etc/terraform
         - name: custom
           mountPath: /customer-scripts
+{{- if .Values.fluentd.logs.extraVolumeMounts }}
+{{ toYaml .Values.fluentd.logs.extraVolumeMounts | indent 8 }}
+{{- end }}
 {{- if .Values.sumologic.envFromSecret }}
         envFrom:
         - secretRef:
@@ -75,6 +81,9 @@
         - name: DEBUG_MODE
           value: "true"
 {{- end }}
+{{- if .Values.fluentd.logs.extraEnvVars }}
+{{ toYaml .Values.fluentd.logs.extraEnvVars | nindent 8 }}
+{{- end }}
 {{ end }}
         securityContext:
           runAsUser: 999
I'm happy to file a PR for this, but I suspect this solution is a bit narrow for this project. Please let me know if I can help here.
Thank you for reporting the issue. We plan to change the method of providing credentials.
Does the plan to change the method of providing credentials have a speculative timeline? Thank you!
@ballensans In this pull request we introduced a change to store the Sumo credentials in a secret. Does it help you?
@kkujawa-sumo unfortunately, that PR is not sufficient. Using the secrets-store-csi-driver requires presenting the secret to the container as a mounted volume, so that the Container Storage Interface (CSI) is exercised when the container is created.
As @mars64 proposed conceptually in the patch above, being able to specify extraVolumes for the setup container would give the secrets-store-csi-driver a way to bring the secret into the cluster/node/container. Once that happens, the secret is available in the cluster, and the envFromSecret approach could work.
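To illustrate (assuming the setup job consumes the same SUMOLOGIC_ACCESSID / SUMOLOGIC_ACCESSKEY variables used earlier in this thread -- I haven't verified the exact key names the chart expects), the secret that would need to exist in the cluster is roughly:

# Kubernetes Secret that must exist before the setup job runs; key names are an
# assumption based on the env vars used earlier in this thread
apiVersion: v1
kind: Secret
metadata:
  name: sumologic-credentials   # placeholder name
type: Opaque
stringData:
  SUMOLOGIC_ACCESSID: "<access id>"
  SUMOLOGIC_ACCESSKEY: "<access key>"

and the chart would then be installed with something like:

sumologic:
  envFromSecret: sumologic-credentials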
Failing that, for a gitops / ArgoCD type of deployment, I suspect there'd need to be a custom "Sumo Logic Prerequisites" application that gets deployed to create the namespace and a pod that mounts the secret from the secrets-store-csi-driver. That strikes me as unnecessarily clumsy.
@mars64 - I was able to get this deployed with ArgoCD by creating a separate Application which has a deployment that mounts the secrets-store-csi-driver volume and syncs the secret into the cluster. ArgoCD deploys that application first, then the envFromSecret approach from PR #2466 has an accessible secret. It's clumsy, but it's a viable work-around.
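Roughly, that prerequisite Application contains the SecretProviderClass (with secretObjects, so the CSI driver syncs the secret into the cluster) plus a do-nothing deployment that mounts the volume to trigger the sync. A sketch with placeholder names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sumologic-secret-syncer   # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sumologic-secret-syncer
  template:
    metadata:
      labels:
        app: sumologic-secret-syncer
    spec:
      # service account with IAM access (IRSA) to the backing SecretsManager entry
      serviceAccountName: sumologic-secret-syncer
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # does nothing; only exists to mount the volume
        volumeMounts:
        - name: sumologic-secrets
          mountPath: /mnt/secrets-store
          readOnly: true
      volumes:
      - name: sumologic-secrets
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "sumologic-secrets"

Once that pod is running and the secret has been synced, the main sumologic Application can sync with envFromSecret pointing at it.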
Cool workaround, thanks for the share.
Judging by the merge date, it seems like #2466 should be in tag v2.15.0. I'm currently stuck on v2.9.1, so I'm using a double bootstrap: I manually install with the secrets provided in-line, then delete the helm release (which leaves the k8s secret behind), then re-apply the chart with ArgoCD.
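For the record, the one-off manual install just uses the chart's in-line credential values (placeholders below), and the k8s secret it leaves behind is what the ArgoCD-managed release then relies on:

# values.yaml for the throwaway manual install
sumologic:
  accessId: "<SUMO_ACCESS_ID>"
  accessKey: "<SUMO_ACCESS_KEY>"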