# Support other Helm storage backends besides Secrets

## Overview
This PR adds support for all Helm storage backends (see the official documentation).
## Usage
The storage driver can be set through the `--helm-storage-driver` flag, and its value is propagated to the `NewRunner` function. The set of allowed values is [`secret`, `configmap`, `sql`]. If the flag is unset, the value is retrieved from the environment variable `HELM_DRIVER`. If the environment variable is also unset, the value simply defaults to `secret` for backwards compatibility.
## Use case
The ability to switch between backends is essential for more flexibility in the cluster. Storing release information in Secrets may not always be the best option, mostly due to:
- The size of Helm release information, which may exceed 1 MiB;
- The combined total size of cluster Secrets, which may cause disruptions in the cluster;
- The Helm release location: in some use cases it may be necessary to move release information to another location for compliance.
Moving the release information to, for example, a SQL backend would easily address these issues and allow keeping a longer history of deployments.
## Testing (cluster)
The changes were tested on a local K8s cluster. I personally used a single-node `kind` cluster with K8s `v1.27.3`.
### Steps to reproduce
Create the cluster:

```sh
kind create cluster
```
Build a local image (I named it `test-helm-controller:latest`) and load it into the kind cluster:

```sh
export IMG=test-helm-controller:latest
make docker-build
kind load docker-image test-helm-controller:latest
```
Deploy the default `helm-controller` and `source-controller`:

```sh
make deploy
```
Create the namespace for the Helm release:

```sh
kubectl create namespace hello
```
Prepare the manifests for the `HelmRepository` and `HelmRelease` to use for testing. I personally used this chart for simplicity.

`helmrepo.yaml`:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: cloudecho
  namespace: hello
spec:
  interval: 1m0s
  url: https://cloudecho.github.io/charts/
```
`helmrelease.yaml`:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: hello
  namespace: hello
spec:
  chart:
    spec:
      chart: hello
      sourceRef:
        kind: HelmRepository
        name: cloudecho
      version: 0.1.2
  values:
    replicaCount: 3
  interval: 1m0s
```
### Test case 1: Backwards compatibility
The Helm release information should still be stored in Secrets when both the flag and the env variable are unset.
Deployment patch in `config/manager/kustomization.yaml`:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
```
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```
Apply the sample HelmRepo and HelmRelease:

```sh
kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml
```
Check for the Helm release secret:

```sh
kubectl get secrets -n hello -l 'owner=helm'
```

```
NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.hello.v1   helm.sh/release.v1   1      29s
```
### Test case 2: ConfigMaps

The Helm release information should be stored in ConfigMaps when the `configmap` driver is selected.
Deployment patch in `config/manager/kustomization.yaml` (flag):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=configmap
  target:
    kind: Deployment
```
OR (env var):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "configmap"}
  target:
    kind: Deployment
```
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```
Apply the sample HelmRepo and HelmRelease:

```sh
kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml
```
Check for the Helm release configmap:

```sh
kubectl get configmaps -n hello -l 'owner=helm'
```

```
NAME                          DATA   AGE
sh.helm.release.v1.hello.v1   1      25s
```
### Test case 3: SQL storage

The Helm release information should be stored in the SQL database. For this test, I used a PostgreSQL DB hosted on an Azure server.
Deployment patch in `config/manager/kustomization.yaml` (flag):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=sql
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment
```
OR (env var):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "sql"}
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment
```
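As a side note on the placeholder above: Helm's SQL storage backend currently supports PostgreSQL, and `HELM_DRIVER_SQL_CONNECTION_STRING` takes a standard PostgreSQL connection string. The host, user, and database name below are purely illustrative:

```
postgresql://helm:<password>@my-server.postgres.example.com:5432/helm?sslmode=require
```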
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```
Apply the sample HelmRepo and HelmRelease:

```sh
kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml
```
Connect to the DB (I used `psql`) and check the `releases_v1` table. You will see that there's a new row:

```
helm=> SELECT count(*) FROM releases_v1;
 count
-------
     1
(1 row)
```
The content can be checked by simply running the following SQL query:

```sql
SELECT * FROM releases_v1;
```
## Unit and regression testing
The `runner.go` file has no test file, and the functions in `helmrelease_controller.go` that involve the new variable do not have any coverage, so it wasn't clear to me how to proceed. The changes I made are backwards compatible and should not cause any issues AFAIK, but I'm open to adding anything else if needed. Feel free to leave any feedback/suggestions :)