Pulumi keeps updating chart with no change
What happened?
I deployed a Helm chart for the AWS ALB controller on my EKS cluster.
Pulumi succeeded in creating the resources and deploying the chart.
Then I ran pulumi up again without any change to my Pulumi code.
However, Pulumi detected changes in the resources:
+- │ ├─ kubernetes:core/v1:Secret default/aws-load-balancer-tls replace
~ │ ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration aws-load-balancer-webhook update
~ │ └─ kubernetes:admissionregistration.k8s.io/v1:ValidatingWebhookConfiguration aws-load-balancer-webhook update
This (possibly redundant) update also succeeded, but every time I ran pulumi up, it kept reporting the same changes and applying the same update.
Steps to reproduce
I made a small custom resource, AlbIngressController.ts. You can deploy it to your AWS EKS cluster with this code:
new AlbIngressController('alb-ingress-controller', { cluster });
In my case, cluster was a Fargate cluster.
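The full AlbIngressController.ts is not included here; a minimal sketch of what such a component might look like, assuming it simply wraps the aws-load-balancer-controller chart from the eks-charts repository with k8s.helm.v3.Chart (chart name, repo, namespace, and values are assumptions):

import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch only; the actual AlbIngressController.ts may differ.
export class AlbIngressController extends pulumi.ComponentResource {
    constructor(name: string, args: { cluster: eks.Cluster }, opts?: pulumi.ComponentResourceOptions) {
        super("custom:k8s:AlbIngressController", name, {}, opts);

        // Deploy the aws-load-balancer-controller chart into kube-system,
        // using the cluster's own Kubernetes provider.
        new k8s.helm.v3.Chart(name, {
            chart: "aws-load-balancer-controller",
            fetchOpts: { repo: "https://aws.github.io/eks-charts" },
            namespace: "kube-system",
            values: {
                clusterName: args.cluster.eksCluster.name,
            },
        }, { provider: args.cluster.provider, parent: this });

        this.registerOutputs();
    }
}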
Run pulumi up and see it succeed.
Then run pulumi up again and see the changes Pulumi reports.
Expected Behavior
The second time I ran pulumi up without any change to my code, Pulumi should report that there is nothing to update.
Actual Behavior
Pulumi keeps reporting the same changes and updating the resources.
Versions used
CLI
Version 3.37.2
Go Version go1.18.4
Go Compiler gc
Plugins
NAME VERSION
aws 5.10.0
cloudflare 4.9.0
docker 3.2.0
eks 0.41.2
kubernetes 3.20.3
nodejs unknown
random 4.7.0
Host
OS darwin
Version 11.6
Arch x86_64
This project is written in nodejs: executable='/usr/local/bin/node' version='v16.13.2'
Additional context
The first time I deployed the AWS ALB controller chart to the cluster, Pulumi said the deployment was successful, but the pods in the cluster were not in a healthy state. They reported a warning that said 'Back-off restarting failed container'.
One way to avoid this churn is to install cert-manager and enable it for the AWS Load Balancer Controller.
That said, I would be interested to know if there are other ways to avoid this churn without cert-manager.
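For reference, a minimal sketch of what enabling cert-manager for the chart could look like, assuming cert-manager is already installed in the cluster and that the chart exposes an enableCertManager value (as recent versions of the eks-charts aws-load-balancer-controller chart do); the cluster and values here are illustrative:

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical cluster, mirroring the issue's Fargate setup.
const cluster = new eks.Cluster("cluster", { fargate: true });

// With enableCertManager set, cert-manager provisions the webhook TLS
// certificate instead of the chart generating a fresh self-signed one on
// every render, which is what makes the Secret and webhook configurations
// diff on every pulumi up.
new k8s.helm.v3.Chart("aws-load-balancer-controller", {
    chart: "aws-load-balancer-controller",
    fetchOpts: { repo: "https://aws.github.io/eks-charts" },
    namespace: "kube-system",
    values: {
        clusterName: cluster.eksCluster.name,
        enableCertManager: true, // requires cert-manager (and its CRDs) to be present
    },
}, { provider: cluster.provider });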
Hi @bglgwyng - can you please send us a complete program that exhibits this behavior, so we can reproduce it? Thank you!
@guineveresaenger
https://github.com/bglgwyng/pulumi-k8s-report
I made a smaller program that reproduces the same behavior.
Use k8s.helm.v3.Release instead of k8s.helm.v3.Chart; a sketch follows.
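A minimal sketch of the switch, assuming the same chart and repository as above. Release installs the chart through the Helm SDK and diffs on the release inputs (chart, version, values) rather than on client-side rendered manifests, so template-time randomness such as a generated self-signed certificate no longer shows up as a diff on every run:

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical cluster, mirroring the issue's Fargate setup.
const cluster = new eks.Cluster("cluster", { fargate: true });

// Deploy the chart as a Helm release tracked as a single Pulumi resource.
new k8s.helm.v3.Release("aws-load-balancer-controller", {
    chart: "aws-load-balancer-controller",
    repositoryOpts: { repo: "https://aws.github.io/eks-charts" },
    namespace: "kube-system",
    values: {
        clusterName: cluster.eksCluster.name,
    },
}, { provider: cluster.provider });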