Add chart validation jobs
- [x] helm lint
- [ ] deploy to k3d cluster
It looks like #122 also mentions linting, so linking that here as well.
For the k3d cluster, it does look like we're already spinning up a kind cluster to test here:
https://github.com/nextcloud/helm/blob/e6ad22e814f8f5a32707a182e199fdb55770b88b/.github/workflows/lint-test.yaml#L42-L56
It looks like we're using `ct lint`:

> Run 'helm lint', version checking, YAML schema validation on 'Chart.yaml', YAML linting on 'Chart.yaml' and 'values.yaml', and maintainer validation on:
> - changed charts (default)
> - specific charts (--charts)
> - all charts (--all)
which should already be running helm lint, but just for testing, I went ahead and did the following.
First, a minor update to break the chart in my locally cloned nextcloud/helm repo dir:
```diff
$ git diff
diff --git a/charts/nextcloud/templates/deployment.yaml b/charts/nextcloud/templates/deployment.yaml
index 2140a97..7890f86 100644
--- a/charts/nextcloud/templates/deployment.yaml
+++ b/charts/nextcloud/templates/deployment.yaml
@@ -2,7 +2,7 @@ apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: {{ template "nextcloud.fullname" . }}
-  labels:
+  labels
     app.kubernetes.io/name: {{ include "nextcloud.name" . }}
     helm.sh/chart: {{ include "nextcloud.chart" . }}
     app.kubernetes.io/instance: {{ .Release.Name }}
```
Then I ran ct lint here:
```
$ ct lint --config ct.yaml
Linting charts...
------------------------------------------------------------------------------------------------------------------------
Charts to be processed:
------------------------------------------------------------------------------------------------------------------------
nextcloud => (version: "3.4.3", path: "charts/nextcloud")
------------------------------------------------------------------------------------------------------------------------
"bitnami" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading postgresql from repo https://charts.bitnami.com/bitnami
Downloading mariadb from repo https://charts.bitnami.com/bitnami
Downloading redis from repo https://charts.bitnami.com/bitnami
Deleting outdated charts
Linting chart "nextcloud => (version: \"3.4.3\", path: \"charts/nextcloud\")"
Checking chart "nextcloud => (version: \"3.4.3\", path: \"charts/nextcloud\")" for a version bump...
Old chart version: 3.4.3
New chart version: 3.4.3
------------------------------------------------------------------------------------------------------------------------
✖︎ nextcloud => (version: "3.4.3", path: "charts/nextcloud") > chart version not ok. Needs a version bump!
------------------------------------------------------------------------------------------------------------------------
Error: failed linting charts: failed processing charts
failed linting charts: failed processing charts
```
So it does catch the missing `:` typo I faked above. A quick check of the GitHub Actions workflow runs shows it's functioning as expected there too: https://github.com/nextcloud/helm/actions/runs/4021502671/jobs/6912755015
I had used helm lint before, but wasn't aware of ct until now. If anyone else is new to it as well, you can learn more here:
https://github.com/helm/chart-testing
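For anyone curious what ct reads from that ct.yaml config, here's a rough sketch of what such a file can contain. These are standard chart-testing options, but the actual ct.yaml in this repo may differ:

```yaml
# minimal chart-testing config sketch; the real ct.yaml in this repo may differ
remote: origin
target-branch: main
chart-dirs:
  - charts
chart-repos:
  - bitnami=https://charts.bitnami.com/bitnami
helm-extra-args: "--timeout 600s"
```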
So I guess helm lint is taken care of, and so is a test helm install, but would you still like to implement the k3d cluster? Kind is great, but can sometimes produce different results than k3s depending on configuration, so I can understand testing on multiple distributions; I just wanted to lay out the current state of things more clearly.
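(For reference, if we did still want k3d specifically, a rough sketch of a workflow step could look like the following. The step name and cluster name are just examples; the install script is the one from the k3d docs.)

```yaml
# rough sketch of a k3d-based job step; names here are arbitrary examples
- name: Create k3d cluster
  run: |
    # install k3d via its official install script
    curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
    # create a single-node cluster and wait for it to be ready
    k3d cluster create nextcloud-test --wait
```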
Oh, I just realized I misread #122: we could do a `helm template nextcloud charts > rendered.yaml` and then lint the rendered output, but I'm not sure that's necessary, as a broken chart should fail before being templated, and if not, it should fail on the ct install job step we run. Open to discussion and chatting further, because I could be overlooking something :)
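If we did want to follow up on that idea anyway, a rough sketch of an extra workflow step could look like this (yamllint is just my assumed choice of linter for the rendered output, not something #122 prescribes):

```yaml
# rough sketch only; yamllint as the linter is my own assumption
- name: Lint rendered templates
  if: steps.list-changed.outputs.changed == 'true'
  run: |
    # pull in the subchart dependencies so templating works
    helm dependency update charts/nextcloud
    # render the chart with default values, then lint the rendered manifests
    helm template nextcloud charts/nextcloud > rendered.yaml
    yamllint rendered.yaml
```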
> For the k3d cluster, it does look like we're already spinning up a kind cluster to test here:
I'm coming back in to say I think we should add the k3d cluster, and maybe a couple of other distros as well: perhaps keep kind in there, but with an ingress, and add some basic tests. I want a way to validate not just that the helm chart installs, but that we can reach at least the login page and get a 200 back.
I recently found that Mastodon is doing this with k3s (not k3d, but still interesting): https://github.com/mastodon/chart/blob/main/.github/workflows/test-chart.yml
It uses this GitHub Action, if we're interested: https://github.com/jupyterhub/action-k3s-helm
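For the "login page returns a 200" part, a very rough sketch of a post-install check could look like the following. The deployment/service names and port are assumptions based on typical chart defaults, not verified against this chart, and this assumes a plain helm install rather than ct install (which tears the release down after testing):

```yaml
# rough smoke-test sketch; deployment/nextcloud, svc/nextcloud, and port 8080 are assumptions
- name: Smoke test Nextcloud
  run: |
    # wait for the rollout, then probe the app through a port-forward
    kubectl rollout status deployment/nextcloud --timeout=5m
    kubectl port-forward svc/nextcloud 8080:8080 &
    sleep 5
    # status.php avoids trusted-domain redirects; swap in the login page path if preferred
    curl --fail --silent --output /dev/null http://localhost:8080/status.php
```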
Looks good, can you create a PR?
Sure, no prob! Which do we want to do? a) add k3s testing in addition to KinD, or b) replace KinD testing with k3s testing?
More tests are better, so please don't remove KinD testing :)
Can do! :)
> Sure, no prob! Which do we want to do? a) add k3s testing in addition to KinD, or b) replace KinD testing with k3s testing?
Do we want to add k3s to the matrix here?
https://github.com/nextcloud/helm/blob/bf6cc4a9df0b3bffd3915dc940ddbec71976429e/.github/workflows/lint-test.yaml#L67-L83
So, this would become something like:
```yaml
matrix:
  # each item in this list is a job with an isolated test VM
  k8s_distro: ['k3s', 'kind']
  test_cases:
    # test the plain helm chart with nothing changed
    - name: 'Default - no custom values'
    # test the helm chart with postgresql subchart enabled
    - name: PostgreSQL Enabled
      helm_args: '--helm-extra-set-args "--set=postgresql.enabled=true --set=postgresql.global.postgresql.auth.password=testing123456 --set=internalDatabase.enabled=false --set=externalDatabase.enabled=True --set=externalDatabase.type=postgresql --set=externalDatabase.password=testing12345"'
    # test the helm chart with nginx container enabled
    - name: Nginx Enabled
      helm_args: '--helm-extra-set-args "--set=image.flavor=fpm --set=nginx.enabled=true"'
    # test the helm chart with horizontal pod autoscaling enabled
    - name: Horizontal Pod Autoscaling Enabled
      helm_args: '--helm-extra-set-args "--set=hpa.enabled=true --set=hpa.minPods=2 --set=hpa.maxPods=3 --set=hpa.targetCPUUtilizationPercentage=75"'
```
And then this would also change: https://github.com/nextcloud/helm/blob/bf6cc4a9df0b3bffd3915dc940ddbec71976429e/.github/workflows/lint-test.yaml#L111-L113
to something like this:
```yaml
- name: Create KinD cluster
  uses: helm/kind-action@v1.5.0
  if: steps.list-changed.outputs.changed == 'true' && matrix.k8s_distro == 'kind'

- name: Create K3s cluster
  uses: debianmaster/actions-k3s@master
  if: steps.list-changed.outputs.changed == 'true' && matrix.k8s_distro == 'k3s'
  id: k3s
  with:
    version: 'latest'
```
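And presumably the install step would then consume the matrix entry along these lines (the exact wording of the existing step may differ):

```yaml
# sketch of how the matrix values might be passed to ct; the real step may differ
- name: Run chart-testing (install)
  if: steps.list-changed.outputs.changed == 'true'
  run: ct install --config ct.yaml ${{ matrix.test_cases.helm_args }}
```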
Happy to hear feedback on this :)
Yes, that would be awesome. I hope it's easy to implement with the matrix we have now :+1: