
Can't Access Prometheus/Grafana UI via NodePort

oneslideicywater opened this issue 3 years ago • 10 comments

version

downloaded the latest zip of this repo: https://github.com/prometheus-operator/kube-prometheus.git

  • docker: v1.21.0
  • kubernetes: v1.23.0

description

I can't reach the monitoring pods (e.g. prometheus-k8s-0) from a custom pod created in the same namespace, and exposing the prometheus-k8s service as a NodePort doesn't work either.

From inside prometheus-k8s-0 itself, the grafana service is reachable:

[root@localhost ingress]# kubectl exec -it prometheus-k8s-0 -n monitoring -- /bin/sh
/prometheus $ wget grafana:3000
Connecting to grafana:3000 (10.97.45.181:3000)
saving to 'index.html'

But from a custom nginx pod, the same Grafana service is unreachable:

[root@localhost ingress]# kubectl exec -it nginx -n monitoring -- /bin/bash
root@nginx:/# curl http://10.97.45.181:3000
curl: (28) Failed to connect to 10.97.45.181 port 3000: Connection timed out

I exposed the prometheus svc as a NodePort, but my browser can't access the UI:

# ref by step 3
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.35.0
  name: prometheus-k8s-nodeport
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
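
For reference, once this Service exists (and nothing blocks the traffic), the port Kubernetes assigned can be looked up and tested like this; 30900 is only an example of an auto-assigned value:

kubectl -n monitoring get svc prometheus-k8s-nodeport
# then, from outside the cluster (replace <node-ip> with any node's address)
curl http://<node-ip>:30900/graph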

how to reproduce

  1. download the main-branch zip of this repo: https://github.com/prometheus-operator/kube-prometheus.git
  2. follow the quick start, but don't port-forward the services
  3. apply the prometheus svc YAML above
  4. try to access the Prometheus UI in the browser

oneslideicywater avatar May 21 '22 11:05 oneslideicywater

I have the same problem. I normally access prometheus with kubectl proxy and localhost:8001/api/v1/namespaces/monitoring/services/prometheus-k8s:9090/proxy/graph

this also doesn't work right now

edit: It's because there are now NetworkPolicies :) After kubectl -n monitoring delete networkpolicies.networking.k8s.io --all it works as before.

An example of how to disable them can be found here: https://github.com/prometheus-operator/kube-prometheus/commit/030dec7656f9dfc62f39c931a0e0c0133bee259e
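
Before deleting anything, it's worth confirming the policies really are the blocker; a quick sketch (the policy name prometheus-k8s matches the current manifests but may differ per release):

kubectl -n monitoring get networkpolicies
kubectl -n monitoring describe networkpolicy prometheus-k8s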

pschulten avatar May 27 '22 12:05 pschulten

You'd probably want to use the node-ports addon for that. See the example using minikube here.
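
In jsonnet terms that's one extra import on top of main.libsonnet; a minimal sketch (see the fuller example later in this thread):

local kp =
  (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/node-ports.libsonnet') +
  { values+:: { common+: { namespace: 'monitoring' } } };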

slashpai avatar Jun 14 '22 14:06 slashpai

After spending way too long trying to troubleshoot a fresh installation that I was unable to access (and, of course, never thinking of network policies, because I knew that I never created any on the cluster....), I have two comments:

  1. Thank you, @pschulten , for pointing me in the right direction!
  2. Shouldn't such a change be CLEARLY documented somewhere??? It is not in the quickstart, and not in the "Access UIs" part of the docs.

bogd avatar Jul 15 '22 14:07 bogd

@bogd I agree.
I think "Access UIs" only for checking (not production). In my case, Really confused because Document below, https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/node-ports.md

I made it, but I couldn't connect with node ip:node port, so I thought it was a problem with another network.

cateto avatar Oct 17 '22 01:10 cateto

As far as I can see, there's no reason to use NodePort together with a NetworkPolicy, so maybe "node-ports.md" should mention that if the user chooses to install kube-prometheus using the jsonnet builder, they should also add another import, as mentioned earlier by @pschulten: "(import 'kube-prometheus/addons/networkpolicies-disabled.libsonnet') +"

My example.jsonnet looks like the one below (for learning purposes only!)

  local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  // (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/pyrra.libsonnet') +
  (import 'kube-prometheus/addons/networkpolicies-disabled.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// { 'setup/pyrra-slo-CustomResourceDefinition': kp.pyrra.crd } +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
// { ['pyrra-' + name]: kp.pyrra[name] for name in std.objectFields(kp.pyrra) if name != 'crd' } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
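
For anyone following along: compiling and applying a jsonnet file like this follows the usual kube-prometheus flow (a sketch based on the repo README; adjust paths to your checkout):

./build.sh example.jsonnet
kubectl apply --server-side -f manifests/setup
kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring
kubectl apply -f manifests/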

Devorkin avatar Oct 17 '22 19:10 Devorkin

Just for the record: I think restricting access with NetworkPolicies is a good idea that I will definitely pick up when I grow up.


pschulten avatar Oct 17 '22 19:10 pschulten

I faced the same issue when deploying the Grafana Helm chart, even though the default NetworkPolicy for prometheus-k8s allows ingress traffic that has the app.kubernetes.io/app=grafana label, which is present on the Grafana deployment. Deleting the NetworkPolicy fixed this.
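
A quick way to compare what a policy expects against what the pods actually carry (a sketch; the policy name and label are the kube-prometheus defaults and may differ for a Helm-deployed Grafana):

kubectl -n monitoring describe networkpolicy prometheus-k8s
kubectl -n monitoring get pods -l app.kubernetes.io/name=grafana --show-labels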

tobiasmuehl avatar Nov 03 '22 11:11 tobiasmuehl

Shouldn't default network policies at least allow health checks? I had to delete the network policies just to get the grafana and prometheus pods healthy.
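
Whether kubelet health probes are subject to NetworkPolicy depends on the CNI. Where they are, one option short of deleting the policies is to allow ingress from the node subnet; a sketch of an extra ingress rule, with 10.0.0.0/24 standing in for your node CIDR:

ingress:
- from:
  - ipBlock:
      cidr: 10.0.0.0/24  # assumption: the subnet your nodes (and kubelet probes) originate from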

kskalski avatar Jan 08 '23 09:01 kskalski

Just wanted to post this in case someone else was stuck on the same problem.

If you are trying to use a NodePort service with kube-prometheus, have deleted all the network policies in monitoring (if required), and still can't get the health checks to pass, take a look at the selector on your custom grafana type: NodePort service. We were upgrading from release-0.7, which used

  selector:
    app: grafana

and it seems release-0.12 has changed to:

  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
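
For anyone updating such a custom service, a minimal sketch using the release-0.12 labels (the name grafana-nodeport and the 30902 value are arbitrary examples):

apiVersion: v1
kind: Service
metadata:
  name: grafana-nodeport
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30902  # any free port in the default 30000-32767 range
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus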

natehudson avatar Apr 04 '23 17:04 natehudson

For those like me who are not comfortable with deleting network policies, I'd suggest a simple workaround to make NodePort work: add another rule to the network policy that allows you to access the pod. The rule I added below allows the pod to be accessed on port 3000 (the Grafana client port) from my home router's IP address (x.x.x.x/32). In my network setup (weave), no source-IP masquerading is applied, so packets coming into the pod keep my home router's source IP. If your network setup does some sort of masquerading, correct for this. On Linux, run "curl ifconfig.me" to get your router IP. New Grafana network policy rules:

ingress:
- from:
  - podSelector:
      matchLabels:
        app.kubernetes.io/name: prometheus
  ports:
  - port: 3000
    protocol: TCP
- from:
  - ipBlock:
      cidr: x.x.x.x/32  # ${your home net router ip address}/32
  ports:
  - port: 3000
    protocol: TCP
podSelector:
  matchLabels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
policyTypes:
- Egress
- Ingress
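
If you'd rather patch the generated policy in place than re-render the manifests, editing it directly works too (assuming the default policy name, grafana):

kubectl -n monitoring edit networkpolicy grafana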

AlessandroFazio avatar Sep 18 '23 08:09 AlessandroFazio