Expose promExporter via the Service so Prometheus can scrape it
What motivated this proposal?
The promExporter port 7777 should be exposed on the nats Service so Prometheus can scrape it.
What is the proposed change?
The chart currently only exposes the monitor port on the Service:

service:
  ports:
    monitor:
      enabled: true

A promExporter entry with enabled: true should be added here as well (see the sketch below).
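A minimal sketch of what that could look like in values.yaml, assuming the chart wires a promExporter entry under service.ports the same way as monitor (the key name is part of the proposal, not something the chart currently documents):

```yaml
service:
  ports:
    monitor:
      enabled: true
    # proposed addition: expose the exporter port on the Service
    promExporter:
      enabled: true
```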
Who benefits from this change?
Anyone who doesn't want to install prometheus-nats-exporter from https://prometheus-community.github.io/helm-charts
and would rather use the embedded exporter by setting enabled: true.
What alternatives have you evaluated?
Without this, I must write a new Service to expose promExporter on port 7777 and update the Prometheus scrape config (roughly as sketched below).
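For reference, that manual workaround is roughly a hand-written Service pointing at the exporter port; a sketch with an example name and the chart's default labels assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-prom-exporter      # example name
spec:
  selector:
    app.kubernetes.io/name: nats   # assumes the chart's default pod labels
  ports:
    - name: prom-metrics
      port: 7777
      targetPort: 7777
```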
It's already exposed via the headless service, for example: nats-headless:7777. Also, you want to scrape each pod, not just one pod behind a ClusterIP Service.
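One way to pick up every pod behind the headless Service, rather than a single ClusterIP endpoint, is Prometheus DNS service discovery; a sketch, with the Service name and namespace as placeholders:

```yaml
scrape_configs:
  - job_name: nats                   # example job name
    dns_sd_configs:
      - names:
          - nats-headless.default.svc.cluster.local   # headless Service FQDN, adjust namespace
        type: A                      # one A record per pod
        port: 7777
```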
Thank you for the really fast reply. I ran wget from the Prometheus pod and it works.
What confuses me is that port 7777 isn't listed in the headless service's port table. But it looks like that doesn't matter; we can still connect to all ports anyway.
Ah, maybe we should go ahead and add it to the headless Service port list if it's enabled, then. If you're not using any NetworkPolicy it should still work either way, though.
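If the chart did add it, the headless Service would just gain another named port; an illustrative excerpt (not the chart's actual template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-headless          # example name
spec:
  clusterIP: None              # headless: DNS resolves to the pod IPs directly
  ports:
    - name: prom-metrics
      port: 7777
      targetPort: 7777
```

Because the Service is headless, the listed ports mainly drive DNS/SRV records and endpoint port names; traffic to unlisted pod ports still works unless a NetworkPolicy blocks it, which is consistent with the observation above.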
I have one more question about promExporter. Is it compatible with the Grafana NATS dashboard? https://grafana.com/grafana/dashboards/2279-nats-servers/
The thing is, when I install a fresh chart from https://artifacthub.io/packages/helm/prometheus-community/prometheus-nats-exporter, Grafana works fine.
But if I use the embedded promExporter, Grafana always shows no data.
More information: the Prometheus scrape works fine.
Did you figure out how to use the embedded promExporter with Grafana? It shows no data for me as well, even using the Helm dashboard from https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-jetstream-dash-helm.json
Hi, is it possible to scrape all 3 replicas of the StatefulSet pods?

job_name: 'nats-jet'
static_configs:
  - targets:
      - nats-stgfra-nats-helm-headless.nats-fra.svc.cluster.local:7777

Presently, with this config, I see only one replica being scraped.
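With static_configs the hostname counts as a single target, so only one resolved pod gets scraped at a time. Switching that job to DNS service discovery enumerates all the A records behind the headless Service; a sketch using the same names as above:

```yaml
scrape_configs:
  - job_name: 'nats-jet'
    dns_sd_configs:
      - names:
          - nats-stgfra-nats-helm-headless.nats-fra.svc.cluster.local
        type: A        # one target per pod IP
        port: 7777
```

If you run the Prometheus Operator (or a compatible stack), the chart's promExporter.podMonitor option, as used in the GKE config below, is another way to get per-pod targets.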
I was able to get this working on GKE Autopilot with Managed Prometheus. With GKE Autopilot you need to use pod anti-affinity to distribute the replicas across nodes.
Here is my config:
# Deploy NATS JetStream using Helm
resource "helm_release" "nats" {
  name       = "nats"
  namespace  = kubernetes_namespace.nats.metadata[0].name
  repository = "https://nats-io.github.io/k8s/helm/charts/"
  chart      = "nats"
  version    = "1.2.10" # Using the latest version
  wait       = true

  values = [
    <<-EOT
    config:
      cluster:
        enabled: true
        replicas: ${var.nats_replicas}
      jetstream:
        enabled: true
        fileStore:
          pvc:
            size: ${var.nats_pvc_size}
    service:
      enabled: true
      type: ClusterIP
      ports:
        nats:
          enabled: true
          port: 4222
        cluster:
          enabled: true
          port: 6222
        monitoring:
          enabled: true
          port: 8222
    promExporter:
      enabled: true
      port: 7777
      podMonitor:
        enabled: true
        merge:
          apiVersion: "monitoring.googleapis.com/v1"
          kind: "PodMonitoring"
          spec:
            endpoints:
              - port: "prom-metrics"
                interval: "30s"
                path: "/metrics"
                scheme: "HTTP"
                timeout: 10s
    podTemplate:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: "kubernetes.io/hostname"
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: nats
    EOT
  ]
}