gateway-crds-helm v1.4.0 install fails – rendered release exceeds 1 MiB Secret size limit
Description: When installing gateway-crds-helm v1.4.0 we hit:
Error: create: failed to create: Secret "sh.helm.release.v1.eg-crds.v1" is invalid: data: Too long: must have at most 1048576 bytes
Helm stores the entire rendered release in a single Secret, and Kubernetes hard-limits any Secret to 1 MiB (1048576 bytes). Because this chart bundles ~40 Gateway API CRDs, even the compressed release record exceeds that limit and the API server rejects the Secret.
Expected behaviour: the chart should install cleanly, or the docs should tell users to apply the CRDs manually (helm template | kubectl apply).
Repro steps:
Install the gateway-crds-helm v1.4.0 chart with Helm (any install method). The install fails with the Secret size error above.
Environment:
- gateway-crds-helm chart: v1.4.0
- helm: v3.17.3
- kubernetes: v1.31.6-gke
Logs:
Error: create: failed to create: Secret "sh.helm.release.v1.eg-crds.v1" is invalid: data: Too long: must have at most 1048576 bytes
This is a known issue:
https://github.com/envoyproxy/gateway/pull/5616#issuecomment-2840245509
https://github.com/envoyproxy/gateway/issues/4001#issuecomment-2367361128
https://github.com/envoyproxy/gateway/issues/5940 will hopefully document this limitation and add a recommendation to use helm template instead.
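Until that lands, here is a minimal sketch of the manual route (the release name, chart version, and values mirror the ones in this thread; adjust as needed):

# Render the CRDs locally so Helm never creates a release Secret, then
# apply server-side to stay under the last-applied-configuration
# annotation size limit (see the Terraform note further down).
helm template eg-crds oci://docker.io/envoyproxy/gateway-crds-helm \
  --version v1.4.0 \
  --set crds.gatewayAPI.enabled=true \
  --set crds.envoyGateway.enabled=true \
  | kubectl apply --server-side -f -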
For those using Terraform who want to install the CRDs separately from the Envoy Gateway implementation, this is a workaround:
# Generate the CRD manifests.
data "helm_template" "envoy_gateway_crds" {
  chart   = "oci://docker.io/envoyproxy/gateway-crds-helm"
  name    = "envoy-gateway-crds-helm"
  version = local.envoy_gateway_version

  values = [
    yamlencode({
      crds = {
        gatewayAPI = {
          enabled = true
        }
        envoyGateway = {
          enabled = true
        }
      }
    })
  ]
}

# The above 'helm template' can generate multi-document YAML files; we
# need to split those, because kubectl_manifest only applies the first
# document of each file.
data "kubectl_file_documents" "envoy_gateway_crds" {
  for_each = data.helm_template.envoy_gateway_crds.manifests
  content  = each.value
}

resource "kubectl_manifest" "envoy_gateway_crds" {
  # Form a flattened map of every YAML document (not file!) with a
  # unique key so that we can apply them one by one.
  for_each = merge([
    for key, docs in data.kubectl_file_documents.envoy_gateway_crds : {
      for idx, doc in docs.documents : "${key}-${idx}.yaml" => doc
    }
  ]...)

  # Required: a client-side `kubectl apply` would add a
  # kubectl.kubernetes.io/last-applied-configuration annotation that
  # exceeds the annotation max-size limit on the largest CRDs.
  # Server-side apply is considered good practice anyway.
  server_side_apply = true
  yaml_body         = each.value
}
Note: I'm using this provider for the kubectl_manifest resource, but it probably works with the Hashicorp Kubernetes one as well 🤞 .
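For reference, a hedged sketch of the provider requirements the snippet assumes; I'm taking the provider to be gavinbunney/kubectl, which ships both kubectl_manifest and kubectl_file_documents (the HashiCorp Kubernetes provider's closest equivalent is the kubernetes_manifest resource):

terraform {
  required_providers {
    helm = {
      source = "hashicorp/helm" # provides the helm_template data source
    }
    kubectl = {
      # Assumption: the provider linked above; it defines both
      # kubectl_manifest and kubectl_file_documents.
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}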
@arkodg may I suggest splitting the CRDs into multiple Helm charts so that each release stays below this size constraint of Secrets/ConfigMaps?
There is no way to do templating with FluxCD! This is a major problem right now.
does FluxCD support server side apply?
I think so. But how does the normal Helm chart work when this one doesn't?
That's our bad for putting too many things into one single CRD chart; the bad part is that we cannot delete them.
Sorry to ask a stupid question, but why can't you delete them?
You cannot simply delete an API without breaking compatibility.
But you can, though; just call it a breaking change.
When using FluxCD v2 with HelmRelease to install CRDs (e.g., the Gateway API CRDs and Envoy CRDs), the process fails because the generated Kubernetes Secret exceeds the 1 MiB size limit. This happens even when the CRDs are split into separate Helm charts or HelmRelease resources.
Interestingly, the default Traefik CRDs shipped with K3s — which are also distributed as a standalone, CRD-only Helm chart and include the Gateway API CRDs — can be successfully deployed via HelmRelease without hitting this limit.
I haven’t investigated the root cause in depth; this is based purely on observed behavior.
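Worth noting: a HelmRelease goes through helm-controller, which stores a Helm release Secret just like the CLI does, so server-side apply alone does not help there. One way around it with Flux is to skip Helm's release storage entirely: commit the helm template output to Git and let kustomize-controller apply it (kustomize-controller uses server-side apply, so the annotation limit is not a problem either). A minimal sketch, with a hypothetical source name and path:

# A sketch, not a verified setup: applies pre-rendered CRD manifests
# through Flux's kustomize-controller instead of a HelmRelease, so no
# Helm release Secret is ever created. 'infra-repo' and the path are
# hypothetical placeholders.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: envoy-gateway-crds
  namespace: flux-system
spec:
  interval: 1h
  sourceRef:
    kind: GitRepository
    name: infra-repo           # hypothetical Git source holding the rendered CRDs
  path: ./crds/envoy-gateway   # hypothetical path to the helm template output
  prune: false                 # never garbage-collect CRDs automatically
  wait: true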