kube-prometheus
Setup Error for custom-metrics.libsonnet
Discussed in https://github.com/prometheus-operator/kube-prometheus/discussions/1585
Originally posted by jnccneto on January 17, 2022:

Hello,
I have set up the bundle following the repo instructions: https://github.com/prometheus-operator/kube-prometheus
The standard setup is OK, and I have CPU/memory pod metrics. Now I need custom metrics, but I get an error when building with jsonnet once custom-metrics.libsonnet is enabled. Any ideas on how to fix it?
error output:

```
RUNTIME ERROR: max stack frames exceeded.
vendor/kube-prometheus/addons/custom-metrics.libsonnet:7:18-54 object
```
my jsonnet file:
```jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
```
This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.
I have the same problem!
I ended up using the Helm chart installation. It works better.
I got the same issue (as discussed in #1585), and the problem was in custom-metrics.libsonnet. The offending lines are:

```jsonnet
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.prometheusAdapter.namespace,
```
I think (not really sure) that this loops back into the local variable initialization in example.jsonnet (and into custom-metrics.libsonnet), which then loops back to this line and then back to the original file. When I changed that value from `$.values.prometheusAdapter.namespace` to my namespace, say `monitoring`, the build worked properly and I was able to get the manifests.

I haven't really worked out how to separate the variables included in example.jsonnet so that this won't loop again, but I thought I'd share in case anyone faces this.
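A minimal sketch of the self-reference described above, using simplified stand-ins for main.libsonnet and the addon (the names `base` and `addon` are purely illustrative): because the addon's `namespace:` ends up being the last definition of that field after the overlays are merged, `$.values.prometheusAdapter.namespace` resolves to the very field being evaluated, and jsonnet recurses until it hits the stack limit.

```jsonnet
// Stand-in for main.libsonnet: the adapter namespace is derived from common.
local base = {
  values:: {
    common: { namespace: 'default' },
    prometheusAdapter: { namespace: $.values.common.namespace },
  },
};

// Stand-in for custom-metrics.libsonnet: it re-derives the namespace from
// $.values.prometheusAdapter.namespace, which after the merge is this very field.
local addon = {
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.prometheusAdapter.namespace,
    },
  },
};

// Evaluating the merged field never terminates:
//   RUNTIME ERROR: max stack frames exceeded.
(base + addon).values.prometheusAdapter.namespace
```

Replacing the self-reference with a literal namespace, as described above, removes the cycle, which is why the build then succeeds.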
Did what @rajasaur proposed and it worked, but the prometheus-adapter pod keeps restarting...
This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.
This issue was closed because it has not had any activity in the last 120 days. Please reopen if you feel this is still valid.
We're experiencing this issue on the release-0.11 branch (commit e3066575dc8b)
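For reference, a possible way to apply the same fix without editing the vendored addon; this is only a sketch, assuming the overlay is merged after the addon import, as in the example.jsonnet above: pin the adapter namespace to a literal in your own values block, so the addon's self-referencing default is no longer the final definition of the field.

```jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
  {
    values+:: {
      common+: { namespace: 'monitoring' },
      prometheusAdapter+: {
        // This later, literal definition wins the merge, so the addon's
        // `$.values.prometheusAdapter.namespace` self-reference is never evaluated.
        namespace: 'monitoring',
      },
    },
  };

// The manifest listing from example.jsonnet stays unchanged, e.g.:
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace }
```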