terraform-provider-helm
Job resources are not getting created with helm_release
Terraform, Provider, Kubernetes and Helm Versions
Terraform version: v1.2.0
Provider version: 2.9.0
Kubernetes version: 1.23
Affected Resource(s)
- helm_release
Terraform Configuration Files
// Temporal Chart: https://github.com/temporalio/helm-charts
resource "helm_release" "chart" {
  name              = "temporaltest"
  chart             = "local_Temporal_chart_path"
  create_namespace  = true
  values            = []
  dependency_update = true
  timeout           = 15 * 60
}
Steps to Reproduce
- git clone https://github.com/temporalio/helm-charts
- Adjust the local chart path to point to the cloned repo
- terraform apply
Expected Behavior
- All resources in the chart should be created, just as they are when using helm install:
> helm install temporaltest . --timeout 15m
Actual Behavior
- Job resources are not getting created when using helm_release, unlike when we use helm install
- We tried using helm_template to get the manifest output for the chart and then deploying it with kubectl apply; that worked as expected
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Hi @mohamedazouz, does this actually install anything from the chart successfully?
@sheneska All resources are installed successfully except two Jobs, which are responsible for creating the DB schema. The schema is required for the pods' init containers to finish, so all pods are stuck in the Init state.
Hi @mohamedazouz, thanks for sharing. Do you get the correct results when applying the chart with helm instead of Terraform?
Hi @sheneska, yes, all of the chart's resources were created successfully using helm install.
Hi @mohamedazouz, were you able to get any response on this? I'm having the same issue. Terraform v1.4.6, Helm v3.11.1+6.el8, hashicorp/helm 2.10.1, Kubernetes 1.23.
@sheneska Just to add some information: in my case as well, installing via helm works perfectly.
I just managed to make it work somehow. I'm not sure how this ties into the root cause, but I hardcoded the name and labels in the Job manifest, and it worked.
@ramonpenteado I didn't get any response on this, and we ended up using helm_template to generate the manifest and then applying it with kubectl_manifest:
data "helm_template" "temporal" {
  name             = "temporaltest"
  chart            = "local_Temporal_chart_path"
  namespace        = "namespace"
  create_namespace = true
  values           = []
}

data "kubectl_file_documents" "resources" {
  content = data.helm_template.temporal.manifest
}

resource "kubectl_manifest" "test" {
  for_each           = data.kubectl_file_documents.resources.manifests
  yaml_body          = each.value
  override_namespace = "namespace"
}
This works, but it is really a workaround: it amounts to a direct kubectl apply instead of using helm_release.
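Note that kubectl_file_documents and kubectl_manifest come from the community gavinbunney/kubectl provider, not from hashicorp/helm or hashicorp/kubernetes, so the configuration above assumes a provider declaration along these lines (the version constraint here is an assumption; pin to whatever you have tested):
terraform {
  required_providers {
    kubectl = {
      # Community provider supplying kubectl_manifest and kubectl_file_documents
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0" # assumed constraint
    }
  }
}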
@ramonpenteado You mentioned that you made it work by modifying some values; can you paste your changes so I can compare them with mine?
Sure, @mohamedazouz.
As I said, I'm not sure why it would affect Job deployment, but while debugging the Terraform run I saw a warning related to the Job:
Cannot use map values here
The only place that happened was inside the Job manifest, in the name and labels fields.
I hardcoded them, and it worked.
I tried destroying everything (and in my case that's a lot, since there are 4 namespaces with 8 different deployments) and recreating it. It worked with the hardcoded values.
Then I put the map references back into the helpers, and it didn't deploy again.
Could it be related to the helper function being used plus the Job deployment?
I'm not sure. This seems very unlikely and totally unrelated.
I encountered this while installing the official Airflow Helm Chart. I think the cause is the default wait=true in helm_release.
Here's what I think is happening:
- If the jobs are set to run post-install, they will not launch until the release has been marked successful.
- If wait=true in the helm_release, Terraform does not mark the release as successful until all resources are in a ready state.
This effectively creates a deadlock when resources depend on the result of the running jobs. In my case everything works as expected when I set wait=false.
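A minimal sketch of that change, applied to the helm_release from the original report:
resource "helm_release" "chart" {
  name              = "temporaltest"
  chart             = "local_Temporal_chart_path"
  create_namespace  = true
  dependency_update = true
  timeout           = 15 * 60

  # Don't block until every resource is ready; Helm can then mark the release
  # successful, which allows post-install hook Jobs to start.
  wait = false
}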
@ForeverWintr Actually, @pmorillas-apheris mentioned this workaround: https://github.com/temporalio/helm-charts/issues/404#issuecomment-1665099524. I thought it might not be relevant, so I never tried it; I will try it and update you here.
Yes, that comment is what sent me down this path. I initially dismissed it for the same reason you did.
I had an issue similar to @ramonpenteado's: when I deleted the Job, the Helm chart would install it, but subsequent runs wouldn't update it. The fix I found was to set spec.ttlSecondsAfterFinished on the Job so that completed Jobs were cleaned up. That allowed Helm to install the Job on the next run, since it no longer already existed.
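ttlSecondsAfterFinished is a standard field on the Kubernetes Job spec that tells the TTL controller to delete the Job once it finishes. A minimal sketch of a Job carrying it, written as a kubectl_manifest to match the workaround above (the Job name, image, and 300-second TTL are illustrative, not taken from the chart):
resource "kubectl_manifest" "schema_job" {
  yaml_body = <<-YAML
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: temporal-schema-setup # illustrative name, not the chart's
    spec:
      # Delete the completed Job after 5 minutes so the next Helm/Terraform
      # run can recreate it instead of skipping an already-existing Job.
      ttlSecondsAfterFinished: 300
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: schema-setup
              image: busybox:1.36 # placeholder image
              command: ["sh", "-c", "echo run schema setup here"]
  YAML
}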