helm_remote: Error applying large CRD
I created this issue on the tilt-dev/tilt repo here as well.
Expected Behavior
When applying the kube-prometheus-stack Helm chart, I expect all resources to be created.
Current Behavior
The deployment fails because the CRDs cannot be created, which prevents the pods from being created.
The log output for kube-prometheus-stack-crds-install is:
Warning: resource customresourcedefinitions/prometheuses.monitoring.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
It looks like the code tries to apply each CRD, when perhaps a replace would be better. I think Helm handles this error when applying CRDs, but that handling is missing from helm_remote:
local_resource(name+'-install', cmd='kubectl apply -f %s' % " -f ".join(files), deps=files)
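As a purely illustrative sketch (not what helm_remote currently does): besides the replace suggested above, another common way around this limit is server-side apply, which does not write the kubectl.kubernetes.io/last-applied-configuration annotation that overflows the 262144-byte cap on these large CRDs:

# Hypothetical variant of the install resource above. Server-side apply skips the
# last-applied-configuration annotation, so oversized CRDs are accepted;
# a kubectl create / kubectl replace fallback would be another option.
local_resource(
    name + '-install',
    cmd='kubectl apply --server-side --force-conflicts -f %s' % " -f ".join(files),
    deps=files,
)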
Steps to Reproduce
Create the following files
values.yaml
prometheus:
  enabled: true
  prometheusSpec:
    resources:
      limits:
        cpu: 500m
        memory: 2Gi
      requests:
        cpu: 20m
        memory: 500Mi
alertmanager:
  enabled: false
grafana:
  enabled: true
  resources:
    limits:
      cpu: 200m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
Tiltfile
helm_remote(
    "kube-prometheus-stack",
    repo_name="prometheus-community",
    repo_url="https://prometheus-community.github.io/helm-charts",
    values="values.yaml",
)
Run tilt up
Context
tilt doctor Output
Tilt: v0.30.1, built 2022-05-20
System: linux-amd64
---
Docker
- Host: unix:///var/run/docker.sock
- Server Version: 20.10.14
- API Version: 1.41
- Builder: 2
- Compose Version: v2.6.0
---
Kubernetes
- Env: kind
- Context: kind-kind
- Cluster Name: kind-kind
- Namespace: default
- Container Runtime: containerd
- Version: v1.24.0
- Cluster Local Registry: &RegistryHosting{Host:localhost:37573,HostFromClusterNetwork:ctlptl-registry:5000,HostFromContainerRuntime:,Help:https://github.com/tilt-dev/ctlptl,SingleName:,}
Have you considered using helm_resource? https://github.com/tilt-dev/tilt-extensions/tree/master/helm_resource
helm_resource uses helm as the deploy manager and can be a better fit in a lot of cases.
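For example, a minimal sketch of what that could look like for the kube-prometheus-stack repro above (resource names here are illustrative, not a drop-in fix):

load('ext://helm_resource', 'helm_resource', 'helm_repo')

# Register the chart repository as a Tilt resource.
helm_repo(
    'prometheus-community',
    'https://prometheus-community.github.io/helm-charts',
    resource_name='helm-repo-prometheus-community',
)

# Let helm itself install/upgrade the chart, passing the same values file.
# Note that each flag and its value are separate list elements.
helm_resource(
    'kube-prometheus-stack',
    'prometheus-community/kube-prometheus-stack',
    resource_deps=['helm-repo-prometheus-community'],
    flags=['-f', 'values.yaml'],
)

Because helm performs the deploy, CRD installation is handled by helm rather than by a raw kubectl apply.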
Hey @nicks, I am trying out your suggestion with a different chart that also has a dependency on prometheuses.monitoring.coreos.com. This issue has been widely discussed: https://github.com/prometheus-community/helm-charts/issues/1500
So basically I want to see if your Tilt extension could help us solve this. My Tiltfile:
load('ext://helm_resource', 'helm_resource', 'helm_repo')
helm_repo('apache', 'https://pulsar.apache.org/charts', resource_name='helm-repo-pulsar')
helm_resource('pulsar', 'apache/pulsar', resource_deps=['helm-repo-pulsar'], flags=['-f ./values.yaml'])
My values:
# deployed with emptyDir
volumes:
  persistence: false

# disabled AntiAffinity
affinity:
  anti_affinity: false

# disable auto recovery
components:
  autorecovery: false
  functions: false
  proxy: false
  pulsar_manager: false
  toolset: false

zookeeper:
  podMonitor:
    enabled: false
  replicaCount: 1
  resources:
    requests:
      memory: 128Mi
      cpu: 0.1
  # limits:
  #   memory: 128Mi
  configData:
    PULSAR_MEM: >
      -Xms32m -Xmx64m
      -XX:MaxDirectMemorySize=128m

bookkeeper:
  podMonitor:
    enabled: false
  replicaCount: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 0.2
  # limits:
  #   memory: 256Mi
  configData:
    # we use `bin/pulsar` for starting bookie daemons
    PULSAR_MEM: >
      -Xms64m -Xmx128m -XX:MaxDirectMemorySize=128m

broker:
  podMonitor:
    enabled: false
  replicaCount: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 0.2
  # limits:
  #   memory: 512Mi
  configData:
    # Enable `autoSkipNonRecoverableData` since bookkeeper is running
    # without persistence
    autoSkipNonRecoverableData: "true"
    # storage settings
    managedLedgerDefaultEnsembleSize: "1"
    managedLedgerDefaultWriteQuorum: "1"
    managedLedgerDefaultAckQuorum: "1"
    PULSAR_MEM: >
      -Xms64m
      -Xmx128m
      -XX:MaxDirectMemorySize=128m

proxy:
  podMonitor:
    enabled: false
  replicaCount: 1
  resources:
    requests:
      memory: 64Mi
      cpu: 0.2
  configData:
    PULSAR_MEM: >
      -Xms32m -Xmx32m -XX:MaxDirectMemorySize=32m

toolset:
  podMonitor:
    enabled: false
  useProxy: false
  replicaCount: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 0.1
  configData:
    PULSAR_MEM: >
      -Xms64M
      -Xmx128M
      -XX:MaxDirectMemorySize=128M

pulsar_manager:
  podMonitor:
    enabled: false
  replicaCount: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 0.1
  admin:
    user: admin
    password: password

kube-prometheus-stack:
  serviceMonitor:
    enabled: false
  enabled: false
  prometheusOperator:
    enabled: false
  grafana:
    enabled: false
  alertmanager:
    enabled: false
  prometheus:
    enabled: false

autorecovery:
  # Disable pod monitor since we're disabling CRD installation
  podMonitor:
    enabled: false
The end result of this should be that the CRD installation is skipped entirely (the CRDs are only installed on the condition kube-prometheus-stack.enabled == true; you can see this here).
For some reason, however, the helm upgrade command is not happy.
# Output with -f ./values.yaml:
Running cmd: python3 "/Users/ivan/Library/Application Support/tilt-dev/tilt_modules/github.com/tilt-dev/tilt-extensions/helm_resource/helm-apply-helper.py" "-f ./values.yaml"
Running cmd: ['helm', 'upgrade', '--install', '-f ./values.yaml', 'pulsar', 'apache/pulsar']
Release "pulsar" does not exist. Installing it now.
Error: open ./values.yaml: no such file or directory
Traceback (most recent call last):
File "/Users/ivan/Library/Application Support/tilt-dev/tilt_modules/github.com/tilt-dev/tilt-extensions/helm_resource/helm-apply-helper.py", line 66, in <module>
subprocess.check_call(install_cmd, stdout=sys.stderr)
File "/Users/ivan/.pyenv/versions/3.10.5/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['helm', 'upgrade', '--install', '-f ./values.yaml', 'pulsar', 'apache/pulsar']' returned non-zero exit status 1.
## If I try a URL:
Running cmd: python3 "/Users/ivan/Library/Application Support/tilt-dev/tilt_modules/github.com/tilt-dev/tilt-extensions/helm_resource/helm-apply-helper.py" "-f https://raw.githubusercontent.com/apache/pulsar-helm-chart/master/examples/values-cs.yaml"
Running cmd: ['helm', 'upgrade', '--install', '-f https://raw.githubusercontent.com/apache/pulsar-helm-chart/master/examples/values-cs.yaml', 'pulsar', 'apache/pulsar']
Release "pulsar" does not exist. Installing it now.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x10598ad84]
goroutine 1 [running]:
helm.sh/helm/v3/pkg/cli/values.readFile({0x14000051440, 0x5a}, {0x1400068d300, 0x2, 0x2})
helm.sh/helm/v3/pkg/cli/values/options.go:118 +0x74
helm.sh/helm/v3/pkg/cli/values.(*Options).MergeValues(0x140002c2f00, {0x1400068d300?, 0x2, 0x2})
helm.sh/helm/v3/pkg/cli/values/options.go:48 +0x308
main.runInstall({0x140001321c0?, 0x2, 0x4}, 0x1400031a000, 0x1400091fcf0?, {0x106162d80?, 0x1400000e018})
helm.sh/helm/v3/cmd/helm/install.go:198 +0x34c
main.newUpgradeCmd.func2(0x14000178a00?, {0x140001321c0?, 0x2, 0x4})
helm.sh/helm/v3/cmd/helm/upgrade.go:121 +0x340
github.com/spf13/cobra.(*Command).execute(0x14000178a00, {0x14000132180, 0x4, 0x4})
github.com/spf13/[email protected]/command.go:856 +0x4c4
github.com/spf13/cobra.(*Command).ExecuteC(0x1400079c280)
github.com/spf13/[email protected]/command.go:974 +0x354
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:902
main.main()
helm.sh/helm/v3/cmd/helm/helm.go:83 +0x278
Traceback (most recent call last):
File "/Users/ivan/Library/Application Support/tilt-dev/tilt_modules/github.com/tilt-dev/tilt-extensions/helm_resource/helm-apply-helper.py", line 66, in <module>
subprocess.check_call(install_cmd, stdout=sys.stderr)
File "/Users/ivan/.pyenv/versions/3.10.5/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['helm', 'upgrade', '--install', '-f https://raw.githubusercontent.com/apache/pulsar-helm-chart/master/examples/values-cs.yaml', 'pulsar', 'apache/pulsar']' returned non-zero exit status 2.
So in one case the file does not exist, because I assume it takes the directory of the Python script as ./, and in the other case the URL is not valid, even though Helm supports it?
Would appreciate some help here @nicks :) Thanks
I'm going to close this issue. As suggested in the comment above (https://github.com/tilt-dev/tilt-extensions/issues/407#issuecomment-1154393677), you should be able to address this by moving to helm_resource. I think it's unlikely that it's feasible to fix helm_remote for this use case.
@ivan-penchev I'm not sure I see how your issue is related to this one. You should report the Helm crash to the Helm team. Usually you should set values like:
helm_resource('pulsar', 'apache/pulsar', resource_deps=['helm-repo-pulsar'], flags=['-f', './values.yaml'])
rather than passing an argument with a space in it.