Needs evaluation happens before templating is done
As far as I can see, after the changes from the commit were merged, our pipeline fails with the following error:
```
err: release(s) "dev-power/dev-vladimir-kuznichenkov/site-visitor-config-ingress", "dev-power/dev-vladimir-kuznichenkov/overseer-ingress" depend(s) on an undefined release "dev-power/dev-vladimir-kuznichenkov/{{ .Release.Name | trimSuffix \"-ingress\" }}". Perhaps you made a typo in "needs" or forgot defining a release named "{{ .Release.Name | trimSuffix \"-ingress\" }}" with appropriate "namespace" and "kubeContext"?
```
We are using a template for releases:
```yaml
templates:
  default-release: &default-release
    namespace: 'dev-{{ requiredEnv "NS_NAME" }}'
  app-release: &app-release
    <<: *default-release
    missingFileHandler: Debug
    chart: ./application-charts/{{`{{ .Release.Name }}`}}
    labels:
      stage: application
    values:
      - "./values/global.yaml.gotmpl"
      - "./values/{{`{{ .Release.Name }}`}}.yaml"
      - "./values/{{`{{ .Release.Name }}`}}.yaml.gotmpl"
  app-ingress-release: &app-ingress-release
    <<: *app-release
    chart: ../kubernetes/charts/{{`{{ .Release.Name | trimSuffix "-ingress" }}`}}
    needs:
      - dev-{{ requiredEnv "NS_NAME" }}/{{`{{ .Release.Name | trimSuffix "-ingress" }}`}}
```
and the releases themselves:
```yaml
releases:
  - name: operator-app-launcher
    <<: *app-release
    needs:
      - dev-{{ requiredEnv "NS_NAME" }}/sm-domain
      - dev-{{ requiredEnv "NS_NAME" }}/backend-web
      - dev-{{ requiredEnv "NS_NAME" }}/operator-admin
  - name: operator-app-launcher-ingress
    <<: *app-ingress-release
```
It seems that the function that performs the check is executed before the templated string from the release template is replaced with the actual value.
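In other words (my own illustration, with `dev-myns` standing in for the actual namespace), the dependency check effectively sees the unrendered string:

```yaml
# What the DAG calculation sees for operator-app-launcher-ingress:
# the release template inside needs is still a literal string, and no
# release with that literal name exists in the state.
needs:
  - dev-myns/{{ .Release.Name | trimSuffix "-ingress" }}
```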
I see the same error, but I was able to reduce it down to a case where there are slashes (`/`) in the `kubeContext` (for example when using AWS EKS). Here is a minimal reproduction:
```yaml
helmDefaults:
  kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name # Note '/' in name
repositories:
  - name: stable
    url: https://charts.helm.sh/stable
releases:
  - name: bar
    namespace: namespace
    chart: stable/metrics-server
  - name: foo
    namespace: namespace
    chart: stable/metrics-server
    needs:
      - bar
      # - namespace/bar
      # - cluster-name/namespace/bar
      # - arn:aws:eks:us-east-1:11111111:cluster/cluster-name/namespace/bar
```
None of the listed `needs` entries actually work as long as there is a `/` in the context.
```console
$ helmfile template
Adding repo stable https://charts.helm.sh/stable
"stable" has been added to your repositories

in ./helmfile.yaml: release(s) "arn:aws:eks:us-east-1:11111111:cluster/cluster-name/namespace/foo" depend(s) on an undefined release "cluster-name/namespace/bar". Perhaps you made a typo in "needs" or forgot defining a release named "bar" with appropriate "namespace" and "kubeContext"?
```
Note that if I change the context to add an additional slash:
```diff
- kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name
+ kubeContext: arn:aws:eks:us-east-1:11111111:cluster/clus/ter-name
```
This is the new error:
```console
$ helmfile template
Adding repo stable https://charts.helm.sh/stable
"stable" has been added to your repositories

in ./helmfile.yaml: release(s) "arn:aws:eks:us-east-1:11111111:cluster/clus/ter-name/namespace/foo" depend(s) on an undefined release "ter-name/namespace/bar". Perhaps you made a typo in "needs" or forgot defining a release named "bar" with appropriate "namespace" and "kubeContext"?
```
See specifically `undefined release "ter-name/namespace/bar"`.
Forgot to mention: removing or replacing the slash means the templates will render:
```diff
- kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name
+ kubeContext: arn:aws:eks:us-east-1:11111111:cluster------cluster-name
```
@nicholascapo Ah okay, yours seems to be a different issue that comes from the fact that helmfile doesn't support `/` in the context name by design.

I doubt it ever worked before. What I did in #2026 was about making your case explicitly fail. Before, it wouldn't have failed, but that doesn't mean it worked: the dependency graph might have been silently ignored, and your releases weren't installed/upgraded in the desired order.
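For what it's worth, one possible workaround while `/` in the context name is unsupported (my own suggestion, assuming you control the local kubeconfig) is to rename the context so its name contains no slash:

```console
# Rename the EKS context locally so its name contains no '/',
# then reference the new name from helmfile.
$ kubectl config rename-context \
    arn:aws:eks:us-east-1:11111111:cluster/cluster-name cluster-name
```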
@kuzaxak Hey. Thanks for reporting.

I need to dig through my memory, but I think I completely missed how the use of release templates within `needs` affects the DAG calculation.

Did the `needs` ever affect the order of installation/upgrade in your example? In other words, did it ever work for you before the commit you pointed to? I doubt it.
Interesting, that is consistent with what we have observed. I don't think it was caused by your commits, just made noisy by them :-)
I wasn't aware of the context name constraint; is that documented somewhere?
@nicholascapo No, it isn't documented yet.
> I don't think it was caused by your commits, just made noisy :-)

Let me confirm: so `needs` did affect the install/upgrade order before, right?
@kuzaxak Hey. So I reread helmfile's implementation code, and it turns out that it never worked.

The expansion of `` {{` whatever `}} `` in each release is implemented in this function, and it doesn't support `needs` entries:

https://github.com/roboll/helmfile/blob/ae942c5288895c84c79171e5446773e4cb41c4ce/pkg/state/release.go#L10

So what happened to you is that such `needs` entries never worked and had been silently ignored. The commit https://github.com/roboll/helmfile/commit/9efb7afb4771d60b1a819a495c3f0f725ab33c41 fixed the issue of them being silently ignored, so it now fails with a loud error. But that doesn't mean release templates in `needs` entries are rendered.
A possible solution would be to use Go template's `{{ define }}` and `{{ template }}` as an alternative to your mix of `templates` and release templates.
@nicholascapo Could you create another issue dedicated to additional support for `/` within the kubecontext name when used with `needs`? If you could attach a complete helmfile config for reproduction, it would help anyone to contribute.
Will do. Thank you for the detailed information!
@kuzaxak I believe you can keep using this issue if you really want `needs` to support release templates. Do you? 😃
I will try to understand how we can replace them with `define` and `template`; if I don't succeed, I will provide the full example and think about possible solutions.
@kuzaxak Thanks. FWIW, I was thinking about something like the example below. Note that I have not tested this specific example, and I may have misunderstood your goal. Hope it helps.
{{ define "default" }}
namespace: 'dev-{{ requiredEnv "NS_NAME" }}'
missingFileHandler: Debug
labels:
stage: application
values:
- "./values/global.yaml.gotmpl"
- "./values/{{ $releaseName }}.yaml"
- "./values/{{ $releaseName }}.yaml.gotmpl"
{{ end }}
{{ define "app" }}
{{ $releaseName := . }}
{{ template "default" . }}
chart: ./application-charts/{{ $releaseName }}
{{ end }}
{{ define "ingress" }}
{{ $releaseName := . }}
{{ template "default" . }}
chart: ../kubernetes/charts/{{ $releaseName | trimSuffix "-ingress" }}
needs:
- dev-{{ requiredEnv "NS_NAME" }}/{{ $releaseName | trimSuffix "-ingress" }}
{{ end }}
releases:
- {{ template "app" "app1" | nindent 2 }}
- {{ template "ingress" "app1-ingress" | nindent 2 }}
- {{ template "app" "app2" | nindent 2 }}
- {{ template "ingress" "app2-ingress" | nindent 2 }}
We also need `needs` evaluation to happen after templating. Hopefully the following illustrates our use case.
`helmfile.d/00.yaml`:
```yaml
releases:
{{ $bar := "bar" }}
- name: {{ $bar }}
  namespace: foo
```
`helmfile.d/01.yaml`:
```yaml
releases:
- name: baz
  namespace: quux
  needs:
  - foo/bar
```
@yurrriq `needs` evaluation does happen after templating. Even @kuzaxak's original issue turned out to be caused by helmfile not supporting release templates in `needs` entries; it does work with helmfile templates. Perhaps your issue is coming from somewhere else?
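To illustrate the distinction with a sketch of my own (not from the thread): a `needs` entry produced by helmfile templating, e.g. a YAML anchor, is already expanded by the time the DAG is calculated, whereas a release template like `` {{`{{ .Release.Name }}`}} `` inside `needs` stays a literal string:

```yaml
templates:
  needs-bar: &needs-bar
    needs:
      - namespace/bar  # expanded by YAML/helmfile templating, visible to the DAG
releases:
  - name: bar
    namespace: namespace
    chart: stable/metrics-server
  - name: foo
    namespace: namespace
    chart: stable/metrics-server
    <<: *needs-bar
```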
Interesting. Thanks, @mumoshu. I'll have to investigate further. I also want to solve #1900 whenever I get the chance, so I definitely have some digging to do :smiley: In the meantime I've commented out `needs` in our state files, which isn't much of an issue for us, thankfully.
I'm also having some issues with `needs`; my error related to the kubeContext is a bit different: it doesn't show the content before the initial slash.
```
in ./helmfile.yaml: in .helmfiles[6]: in releases/istio/helmfile.yaml.gotmpl: release(s) "istio-system/istio-base" depend(s) on an undefined release "stage/default/namespace". Perhaps you made a typo in "needs" or forgot defining a release named "namespace" with appropriate "namespace" and "kubeContext"?
```
when my context is `arn:aws:eks:us-east-1:111111111000:cluster/stage`.
@Cayan Hey. Yours seems to be exactly the same as https://github.com/roboll/helmfile/issues/2048#issuecomment-1013513979, which is another issue unrelated to the one explained in this thread :)
Investigated a bit more deeply and found an interesting case: if a release has a `needs` dependency, templating with a selector will always fail.
"/path/to/helmfile.yaml": `
releases:
- name: bar
chart: stable/mychart2
namespace: application
- name: foo
chart: stable/mychart1
needs:
- application/bar
`,
},
selectors: []string{"name=foo"},
This happens because `pkg/state/state_run.go:174` filters the releases down to the single selected one, then `pkg/app/app.go:1848` replaces the full list of releases with that filtered (single-element) list, and then `pkg/app/app.go:1860` fails because it tries to resolve the dependency once more against the filtered list.
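A sketch of what that looks like from the CLI, using the state above (my own illustration; the error text is paraphrased from the reports earlier in this thread):

```console
# With the selector, bar is filtered out, so foo's need cannot be resolved:
$ helmfile --selector name=foo template
# => release(s) "foo" depend(s) on an undefined release "application/bar" ...

# Depending on the helmfile version, these flags are worth trying:
$ helmfile --selector name=foo template --include-needs  # pull bar into the run
$ helmfile --selector name=foo template --skip-needs     # deliberately drop the dep
```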
@kuzaxak Thanks for your help 🙏 Let me confirm: does it fail even with `--include-needs`?
Will check and prepare a PR. We are working on adding go-getter support to values, and it looks like we need to fix both.
@kuzaxak Thanks! Really looking forward to seeing your PRs. Oh, and let me note that the helmfile repo has been moved to a new location. You can see https://github.com/roboll/helmfile/issues/1824 for more information about that.
Hi @mumoshu, would you be so kind as to review @kuzaxak's PR? If that solves the issue of `needs` not working in second-phase rendering, then we would be very happy! We just updated our code stack to use them, as they seemed to work, but probably only because we were in dev mode and deps were installed beforehand. Running a fresh install with `needs` does not work at all, since we have most definitions only ready after phase 1.
@Morriz Hey! Which PR, to be sure? I think this turned out to be an issue in some helmfile sub-commands that didn't correctly support `--include-needs` and `--skip-needs`. I don't know how the "phases" you mentioned are implemented in your case, but in case you need to force helmfile to skip certain dependencies, you need to instruct helmfile to do so by specifying `--skip-needs`. Otherwise Helmfile can't distinguish whether you had a wrong dep in helmfile.yaml, or you defined correct deps in helmfile.yaml but need to force-skip them for a reason (like when you need to deploy only a subset of releases by labels, but the deps are not covered by those labels).
And I thought we had already fixed that in https://github.com/helmfile/helmfile/pull/78 as part of v0.145.0: https://github.com/helmfile/helmfile/releases/tag/v0.145.0
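As a concrete sketch of that distinction (my own example; the release labels are illustrative, and the flags are the ones named above, assuming v0.145.0+):

```console
# Deploy a labeled subset and pull its declared deps into the run:
$ helmfile --selector pkg=gatekeeper apply --include-needs

# Or deploy the subset while deliberately skipping deps outside the selection:
$ helmfile --selector pkg=gatekeeper apply --skip-needs
```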
Thanks for the quick reply. I hoped that the v0.145.0 release would work for us, but it seems not. At least not for how we need it. We have this helmfile and expect the releases to be installed in order, so each will wait for its deps to be installed. The `gatekeeper` release installs CRDs, and `gatekeeper-artifacts` creates CRDs based on those, so it is imperative that each install finishes before the next:
```yaml
bases:
  - snippets/defaults.yaml
---
bases:
  - snippets/env.gotmpl
---
bases:
  - snippets/derived.gotmpl
---
{{ readFile "snippets/templates.gotmpl" }}
{{- $v := .Values }}
{{- $a := $v.apps }}
releases:
  - name: gatekeeper
    installed: {{ $a | get "gatekeeper.enabled" }}
    namespace: gatekeeper-system
    chart: ../charts/gatekeeper
    disableValidationOnInstall: true
    labels:
      pkg: gatekeeper
    values: ...
  - name: gatekeeper-artifacts
    installed: {{ $a | get "gatekeeper.enabled" }}
    needs: [gatekeeper]
    namespace: gatekeeper-system
    chart: ../charts/gatekeeper-artifacts
    labels:
      pkg: gatekeeper
    values: ...
  - name: gatekeeper-constraints
    installed: {{ $a | get "gatekeeper.enabled" }}
    needs: [gatekeeper-artifacts]
    namespace: gatekeeper-system
    chart: ../charts/gatekeeper-constraints
    labels:
      pkg: gatekeeper
    values: ...
```
Jobs exist in each release that wait for the existence of those resources, but we never see anything being installed. This makes me wonder if it tries to aggregate all manifests at runtime, which would be strange, as that use case is already solved by Helm's `dependencies` resolution.
I thought `needs` was born out of the desire to avoid Helm umbrella charts / Helm chart dependencies, or from user demand to introduce kustomize configs and raw k8s manifests into the dependency graph :)

Well, anyway, it doesn't alter the meaning of the `prepare` hook. `prepare` hooks are called while Helmfile is validating and preparing all the releases' charts, even before calculating the dependency graph. Assuming you're trying to do something similar to Terraform, you might be better off using `presync`, although I have not tried that with your use-case in mind. I also remember someone is currently trying to contribute `preapply`, which might be useful to you as it's more versatile than `presync`.
I will strip everything related to post/pre, as that is not what I wanted to illustrate (I just copy/pasted the code from otomi right now). I just hope you can clarify whether the installs are made one after the other.
How can we make sure that chart A is installed (waiting until the last helm post-install hook has finished) before chart B?
I don't think it is possible right now. We faced the same issue, and I didn't have time to investigate or fix it.
We are using `installedTemplate` to control installation order via a sequence of apply jobs.
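Roughly like this (a minimal sketch of the idea, assuming a CI pipeline that re-runs `helmfile apply` with a `STAGE` environment variable bumped between jobs; `STAGE` and the chart paths are hypothetical):

```yaml
releases:
  - name: gatekeeper
    chart: ../charts/gatekeeper
    installed: true
  - name: gatekeeper-artifacts
    chart: ../charts/gatekeeper-artifacts
    # Rendered as a release template on the second pass; evaluates to true
    # only once the pipeline re-invokes helmfile with STAGE=2, i.e. after
    # the apply job for the first stage has fully finished.
    installedTemplate: '{{`{{ eq (env "STAGE") "2" }}`}}'
```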
> until the last helm post-install hook has finished
I thought it was a helm limitation that helm can't wait on post-install hooks to finish.
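For completeness, a hedged sketch: helm's `--wait` and `--wait-for-jobs` (helm 3.5+) can be enabled globally from helmfile, which, combined with `needs`, is the closest approximation I know of to "finish release A, including its hook Jobs, before starting B"; I have not verified that it covers every post-install hook case:

```yaml
helmDefaults:
  wait: true          # pass --wait to helm for every release
  waitForJobs: true   # pass --wait-for-jobs so Jobs must complete before "ready"
```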