kubebuilder
(helm/v1-alpha) --force flag does not update values.yaml when kustomize manager spec has changed
What broke? What's expected?
When scaffolding the project with --plugins=helm.kubebuilder.io/v1-alpha, any changes made to the manager spec are not copied over to values.yaml when executing kubebuilder edit --plugins=helm/v1-alpha --force.
Reproducing this issue
Create a new project:
$ kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins=go/v4,helm.kubebuilder.io/v1-alpha
The scaffold generates a dist/chart directory.
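For reference, the generated chart layout looks roughly like this (inferred from the copy log further down; exact contents vary by project):

```
dist/chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── manager/
    │   └── manager.yaml
    ├── rbac/
    │   ├── role.yaml
    │   └── ...
    └── network-policy/
        └── allow-metrics-traffic.yaml
```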
Next, edit the spec of the manager deployment in config/manager/manager.yaml; for example, increase the replica count to 2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
  labels:
    control-plane: controller-manager
    app.kubernetes.io/managed-by: kustomize
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 2
  ....
Run kubebuilder edit --plugins=helm/v1-alpha --force, as recommended in the documentation, to update the Helm chart with the latest changes:
$ kubebuilder edit --plugins=helm/v1-alpha --force
INFO Generating Helm Chart to distribute project
INFO webhook manifests were not found at config/webhook/manifests.yaml
INFO Successfully copied config/rbac/leader_election_role.yaml to dist/chart/templates/rbac/leader_election_role.yaml
INFO Successfully copied config/rbac/leader_election_role_binding.yaml to dist/chart/templates/rbac/leader_election_role_binding.yaml
INFO Successfully copied config/rbac/metrics_auth_role.yaml to dist/chart/templates/rbac/metrics_auth_role.yaml
INFO Successfully copied config/rbac/metrics_auth_role_binding.yaml to dist/chart/templates/rbac/metrics_auth_role_binding.yaml
INFO Successfully copied config/rbac/metrics_reader_role.yaml to dist/chart/templates/rbac/metrics_reader_role.yaml
INFO Successfully copied config/rbac/role.yaml to dist/chart/templates/rbac/role.yaml
INFO Successfully copied config/rbac/role_binding.yaml to dist/chart/templates/rbac/role_binding.yaml
INFO Successfully copied config/rbac/service_account.yaml to dist/chart/templates/rbac/service_account.yaml
INFO Successfully copied config/network-policy/allow-metrics-traffic.yaml to dist/chart/templates/network-policy/allow-metrics-traffic.yaml
Check dist/chart/values.yaml: the replica value is still 1.
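Until this is addressed, one manual workaround is to copy the changed field over yourself after each regeneration. A minimal sketch, demonstrated on stand-in files so it is self-contained (in a real project the inputs would be config/manager/manager.yaml and dist/chart/values.yaml; the controllerManager.replicas key name is an assumption, so check your generated values.yaml):

```shell
# Sketch of a manual workaround: sync the replica count from the kustomize
# manager spec into the Helm values file after regenerating the chart.
# Stand-in files are created here so the snippet is self-contained.
set -eu
workdir=$(mktemp -d)

# Stand-in for config/manager/manager.yaml
cat > "$workdir/manager.yaml" <<'EOF'
spec:
  replicas: 2
EOF

# Stand-in for dist/chart/values.yaml; the key names are an assumption,
# verify against your generated chart.
cat > "$workdir/values.yaml" <<'EOF'
controllerManager:
  replicas: 1
EOF

# Pull the replica count out of the manager spec...
replicas=$(awk '/^[[:space:]]*replicas:/ {print $2; exit}' "$workdir/manager.yaml")

# ...and rewrite the matching entry in values.yaml
# (sed -i.bak is portable across GNU and BSD sed).
sed -i.bak "s/^\([[:space:]]*replicas:\).*/\1 ${replicas}/" "$workdir/values.yaml"

grep 'replicas:' "$workdir/values.yaml"
```

This only syncs a single scalar field; anything structural (env vars, resource blocks) would still need to be carried over by hand.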
KubeBuilder (CLI) Version
4.5.1
PROJECT version
3
Plugin versions
layout:
- go.kubebuilder.io/v4
- helm.kubebuilder.io/v1-alpha
Other versions
go version
go1.24.1 darwin/arm64
require (
....
k8s.io/apimachinery v0.32.1
k8s.io/client-go v0.32.1
sigs.k8s.io/controller-runtime v0.20.2
)
kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.32.0
Extra Labels
No response
It also seems to generate static content for manager.yaml and values.yaml.
Even when manager.yaml and values.yaml are deleted from the chart, executing kubebuilder edit --plugins=helm.kubebuilder.io/v1-alpha --force regenerates the default scaffolded values.yaml.
Also, kubebuilder edit -h does not list the --force flag.
I also have the same problem. When I tried updating control-plane in the Helm chart, after regeneration it gets reset to the default, and I cannot find where this is coming from.
When scaffolding a project with --plugins=helm.kubebuilder.io/v1-alpha, any changes made to the manager deployment spec (via Kustomize) are not reflected in the Helm values.yaml after running:
kubebuilder edit --plugins=helm.kubebuilder.io/v1-alpha --force
This behavior occurs because, for the manager component, we use a fixed Helm template to ensure compatibility with Helm charts:
https://github.com/kubernetes-sigs/kubebuilder/blob/master/pkg/plugins/optional/helm/v1alpha/scaffolds/internal/templates/chart-templates/manager/manager.go
This means:
- Any manual updates in Kustomize (e.g., resource limits, env vars) are not carried over to the Helm chart.
- The plugin does not currently sync values from the Kustomize overlays into Helm template structures.
💡 Possible Directions
- One potential improvement could be to read from the Kustomize manager deployment and translate relevant fields into Helm-compatible values.yaml entries and templates.
- Alternatively, we could document this limitation clearly to avoid confusion.
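To illustrate the first direction: the sync could, for example, read spec.replicas and the container resources from the Kustomize manager deployment and surface them as Helm values. A hypothetical values.yaml fragment (the key names and values are illustrative, not the plugin's actual schema):

```yaml
# Hypothetical synced values; key names are illustrative only
controllerManager:
  replicas: 2            # taken from spec.replicas in config/manager/manager.yaml
  resources:             # taken from the manager container's resources block
    limits:
      cpu: 500m
      memory: 128Mi
```

The chart templates would then reference these values (e.g. .Values.controllerManager.replicas) instead of hard-coding defaults.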
If anyone is interested in improving this behavior or has ideas on syncing Kustomize and Helm setups, feel free to open a PR or start a discussion!
/assign
Hi @sarthaksarthak9 @abhijith-darshan @FishyFishPat
See: https://github.com/kubernetes-sigs/kubebuilder/issues/4833 ^ this proposal could address this one.
Thanks @camilamacedo86
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Fixed with #5058. Closing it.