No data shown in Grafana Dashboards
I noticed that I don't see any data in my Grafana dashboards. I hoped this would be fixed after updating to the latest version of the chart (3.0.0 -> 3.3.0), but it has persisted ever since. All dashboards show no data.
I checked Grafana's settings and see that a Prometheus datasource is configured (http://pulsar-kube-prometheus-sta-prometheus.default:9090). If I click "Test" to check the connection, I receive "Successfully queried the Prometheus API".
After that I opened the Prometheus UI and checked the configuration under http://prometheus-address:9090/config. In it I see a bunch of jobs related to Pulsar:
job_name: podMonitor/default/pulsar-zookeeper/0
job_name: podMonitor/default/pulsar-proxy/0
job_name: podMonitor/default/pulsar-broker/0
job_name: podMonitor/default/pulsar-bookie/0
Looking at the Metrics Explorer, however, I can't see any Pulsar-related metrics.
Here is my values.yaml:
clusterName: cluster-a
namespace: pulsar
namespaceCreate: false
initialize: false
auth:
  authentication:
    enabled: true
    jwt:
      usingSecretKey: false
    provider: jwt
  authorization:
    enabled: true
  superUsers:
    broker: broker-admin
    client: admin
    proxy: proxy-admin
broker:
  configData:
    proxyRoles: proxy-admin
certs:
  internal_issuer:
    enabled: true
    type: selfsigning
components:
  pulsar_manager: false
tls:
  broker:
    enabled: true
  enabled: true
  proxy:
    enabled: true
  zookeeper:
    enabled: true
I installed the Pulsar Helm chart into the "pulsar" namespace and noticed that all Grafana-stack-related components were installed into the "default" namespace.
Could this be the issue? I also enabled authentication/authorization, so maybe the problem is related to that?
This might be caused by the configured authentication. I guess the metrics endpoint currently requires a token.
For the broker, authenticateMetricsEndpoint defaults to false, so it might be something else.
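In case it helps rule that out, the setting can be pinned explicitly through broker.configData. This is just a sketch; "false" is already the stated default, so it only makes the current behaviour explicit:
broker:
  configData:
    # Assumption: keep the metrics endpoint unauthenticated, matching the
    # stated default, so auth can be ruled out as the cause of missing metrics.
    authenticateMetricsEndpoint: "false"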
I was having the same issue. Could resolve it by installing the helm chart into 'default' namespace rather than 'pulsar'
thanks, @lerodu. Most likely this could be resolved by configuring kube-prometheus-stack.prometheus.prometheusSpec.podMonitorNamespaceSelector (docs) in values.yaml.
Something like this
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      podMonitorNamespaceSelector:
        matchLabels: {}
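A narrower variant (assuming your cluster adds the standard kubernetes.io/metadata.name label to namespaces, which recent Kubernetes versions do automatically) would be to select only the pulsar namespace:
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      podMonitorNamespaceSelector:
        # Assumption: the target namespace is "pulsar", as in the reporter's values.yaml
        matchLabels:
          kubernetes.io/metadata.name: pulsar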
It might actually be related to this: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md#monitoring-additional-namespaces
In order to monitor additional namespaces, the Prometheus server requires the appropriate Role and RoleBinding to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via $.values.namespace.
Also mentioned at https://prometheus-operator.dev/kube-prometheus/kube/monitoring-other-namespaces/
It seems that this problem will be solved in Helm chart release 4.0.0, where #555 is also addressed. If you'd like to use an existing Prometheus instance that isn't deployed with the Pulsar Helm chart, it will be necessary to configure RBAC so that Prometheus has sufficient access to the namespace where Pulsar is deployed.
The Claude-generated RBAC is something like the following; however, it might not be correct. The official docs are at https://prometheus-operator.dev/kube-prometheus/kube/monitoring-other-namespaces/.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: foo # Replace with your target namespace
rules:
  # Core Kubernetes resources
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  # Prometheus Operator CRDs
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
      - podmonitors
      - prometheusrules
      - probes
      - alertmanagers
      - prometheuses
      - thanosrulers
    verbs:
      - get
      - list
      - watch
  # For metric delegation and federation
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors/finalizers
      - podmonitors/finalizers
      - probes/finalizers
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: foo # Replace with your target namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
  - kind: ServiceAccount
    name: prometheus-kube-prometheus-stack-prometheus # This name depends on your release name
    namespace: monitoring # This should be the namespace where Prometheus is deployed
One detail is that Helm's --namespace/-n option should be used to set the namespace for the Pulsar deployment, including the kube-prometheus-stack deployment. The Pulsar Helm chart currently only supports a deployment where kube-prometheus-stack is deployed in the same namespace as Pulsar.
Reopening this issue since the dashboards don't connect to the datasource and that causes a problem.
Looks like this part should take care of adjusting the datasource https://github.com/grafana/helm-charts/blob/41e990ef5498feb7fd79c49d319675463c4a0f9f/charts/grafana/templates/_config.tpl#L122-L130
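For reference, the values that template consumes would look roughly like the sketch below. This follows the grafana chart's documented datasources format; the nesting under kube-prometheus-stack and the service URL (copied from the datasource reported at the top of this issue) are assumptions and may differ depending on release name and namespace:
# Hypothetical values for the Grafana subchart; adjust names/URL to your deployment
kube-prometheus-stack:
  grafana:
    datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
          - name: Prometheus
            type: prometheus
            access: proxy
            url: http://pulsar-kube-prometheus-sta-prometheus.default:9090
            isDefault: true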