helm-charts
Prometheus Operator CRDs
Hello. I have victoria-metrics-k8s-stack deployed in my cluster. I'm trying to install some other helm charts, like nginx with the serviceMonitor option enabled, which makes the chart create a ServiceMonitor resource.
And I'm getting this error:
Error: UPGRADE FAILED: resource mapping not found for name: "nginx-ingress-nginx-controller" namespace: "" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
ensure CRDs are installed first
Why doesn't victoria-metrics-k8s-stack include Prometheus Operator CRDs like ServiceMonitor?
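For context, the values that trigger this look roughly like the following (the ingress-nginx key names are written from memory, so treat them as an approximation):
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true   # makes the chart render a ServiceMonitor object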
@k1rk WDYT, should we embed it into k8s-stack?
I'm curious about this too. I'm running the vm k8s stack instead of prometheus, tried to add openebs-monitoring, and ran into the same issue. With vm offering feature parity in metrics collection and querying plus a bunch of other stuff, one could imagine wanting to maintain operator feature parity by supporting the monitoring.coreos.com CRDs, or at least some of them, such as PodMonitor, PrometheusRule, ServiceMonitor, and Probe.
But I have no idea how much work this would be or if operator-level compatibility is something the VM team wants to chase.
As a workaround, I'd recommend deploying kube-prometheus-stack-crds from https://wiremind.github.io/wiremind-helm-charts. Works pretty well for us :)
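A rough sketch of that workaround (the repo alias and release name below are placeholders):
# add the wiremind repo and install only the Prometheus Operator CRDs
# (ServiceMonitor, PodMonitor, PrometheusRule, Probe, ...)
helm repo add wiremind https://wiremind.github.io/wiremind-helm-charts
helm install prometheus-operator-crds wiremind/kube-prometheus-stack-crds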
Sure. I can add the CRDs to get the other monitoring chart to load. But I still need to either translate its CRDs into VM operator CRDs, or run a kube-prometheus-stack configured to forward data to vm and manually copy any dashboards it creates into my victoria-metrics-k8s-stack grafana.
The VM operator can do the translation for you. Just put
operator:
  disable_prometheus_converter: false
  enable_converter_ownership: true
into your values.yaml. (No clue why the converter is disabled by default, tbh)
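If you configure this through the victoria-metrics-k8s-stack umbrella chart, I believe those keys have to be nested under the victoria-metrics-operator subchart key, roughly like this:
victoria-metrics-operator:
  operator:
    disable_prometheus_converter: false
    enable_converter_ownership: true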
Oh, nice! I hadn't waded through the vm stack chart enough to spot this. Thank you!
I was under the impression that victoriametrics-operator installs those CRDs (as well as vm-k8s-stack), but apparently that's not yet true.
Please implement some workaround for this issue. I see that the operator already ships all those CRDs in one file, operator/hack/prom_crd/prometheus_operator_crd.yaml, so it just needs to install them when enabled.
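Until then, a manual workaround sketch, assuming you have the operator repository checked out locally:
# apply the bundled Prometheus Operator CRDs by hand;
# --server-side may be needed because some of these CRDs are quite large
kubectl apply --server-side -f operator/hack/prom_crd/prometheus_operator_crd.yaml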
I have a very similar issue when installing Helm chart applications that use ServiceMonitor via ArgoCD. Those applications hang during installation, waiting until the ServiceMonitor CRD becomes available. There are some workarounds to make ArgoCD ignore missing CRDs for the initial processing, but without the CRDs the applications are still shown as OutOfSync.
For me, things accidentally started working, but only because Loki's Helm chart brings grafana-agent with it for self-monitoring, and that, in turn, installs the ServiceMonitor and PodScrape CRDs by itself.
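For the ArgoCD side, one workaround I'm aware of (it comes from the Argo CD docs, not from this thread, so take it as a hint) is the SkipDryRunOnMissingResource sync option on resources that reference a not-yet-installed CRD:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example   # hypothetical resource
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true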
+1
I thought that vm has those definitions, maybe behind a variable in case of conflicts if you already have the definitions in your cluster. But for now I need to install the CRDs from prometheus-operator and then install vm-stack, and I have to track updates for both charts.
On the one hand this is quite logical, because there is a very high chance you already have the prometheus CRDs installed, since they are the de facto standard. On the other hand, I want to bootstrap a clean cluster, install vm-stack, and start working. And as many others said earlier, many applications require the prometheus-operator CRDs.
I have an issue installing victoria-metrics-k8s-stack using Terraform because of the missing CRDs.
Should we first install victoria-metrics-operator?
Values like the following do not help:
victoria-metrics-operator:
  createCRD: true
  operator:
    disable_prometheus_converter: false
Planning failed. Terraform encountered an error while generating this plan.
Error: unable to build kubernetes objects from release manifest: [
  resource mapping not found for name: "vm-stack-victoria-metrics-k8s-stack" namespace: "monitoring" from "": no matches for kind "VMAgent" in version "operator.victoriametrics.com/v1beta1", ensure CRDs are installed first,
  resource mapping not found for name: "vm-stack-victoria-metrics-k8s-stack" namespace: "monitoring" from "": no matches for kind "VMAlert" in version "operator.victoriametrics.com/v1beta1", ensure CRDs are installed first,
  resource mapping not found for name: "vm-stack-victoria-metrics-k8s-stack" namespace: "monitoring" from "": no matches for kind "VMAlertmanager" in version "operator.victoriametrics.com/v1beta1", ensure CRDs are installed first,
  resource mapping not found for name: "vm-stack-victoria-metrics-k8s-stack-cadvisor" namespace: "monitoring" from "": no matches for kind "VMNodeScrape" in version "operator.victoriametrics.com/v1beta1"
@air3ijai this is a different issue, unrelated to this one (which is about the Prometheus Operator CRDs). But if you want to deploy something through Terraform's kubernetes provider, you have to make sure the ordering is right, because Terraform can't check a resource's state unless its CRD is present in the cluster, which makes it really hard and cumbersome to use. Just use the helm provider instead.
Yes, I'm using a helm_release
resource "helm_release" "vm-stack" {
  name       = "vm-stack"
  repository = "https://victoriametrics.github.io/helm-charts"
  chart      = "victoria-metrics-k8s-stack"
  version    = "0.19.4"
  namespace  = "monitoring"
  timeout    = 600
  values     = [file("${path.module}/values/vm-stack.yaml")]
}
Moved here - #924.