Include PodSpec.nodeName as a label in goldpinger_peers_response_time_s_* metrics
Hey guys!
Happy New Year!!! 🎉
Super cool project, we've recently deployed this in our clusters and it's proving to be super useful.
I was wondering if it would be possible to add a `nodeName` label to these (`goldpinger_peers_response_time_s_*`) metrics, so it would be easier to identify the target nodes being probed.
The node name should be available in `PodSpec.nodeName`.
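For illustration, on the metrics side it could look roughly like this. This is just a sketch - the label names, function, and package layout are made up rather than taken from the actual goldpinger code - but the node name would come from `PodSpec.nodeName` in the pod list goldpinger already fetches from the Kubernetes API:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical sketch: a response-time histogram that carries the target's
// node name alongside the existing host_ip / pod_ip labels.
var peersResponseTime = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "goldpinger_peers_response_time_s",
		Help: "Response time of other goldpinger pods, in seconds.",
	},
	// host_name is the new label; the others mirror what the metrics expose today.
	[]string{"host_ip", "pod_ip", "host_name", "call_type"},
)

func init() {
	prometheus.MustRegister(peersResponseTime)
}

// RecordPing would be called wherever a ping result is handled today.
// nodeName is the target pod's PodSpec.nodeName, which goldpinger should
// already have from the pod listing it uses for peer discovery.
func RecordPing(hostIP, podIP, nodeName string, seconds float64) {
	peersResponseTime.
		WithLabelValues(hostIP, podIP, nodeName, "ping").
		Observe(seconds)
}
```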
> Super cool project, we've recently deployed this in our clusters and it's proving to be super useful.

Glad we could help!
I don't see why not - it would put some extra strain on Prometheus with the extra time series, but it would indeed make for better graphs and alerts.
👍
👍
Looking at the source, it seems the code only cares about the HostIP of ping targets (at least in the Prometheus metrics). I believe it'd be more useful when debugging issues if we could somehow give more context about the target nodes.
In the monitoring dashboard/alerting, instead of this:

> "node A and B are reporting that some other unknown nodes are down"

this would be better:

> "node X and Y are reported down by 2 nodes"
Hey @seeker89 :smile:
Do you think this is something you would consider implementing in the near future? We particularly feel this is missing for alerting, where we know some node(s) are unhealthy but can't pinpoint which ones from the metrics.
Hey @dannyk81 !
Now that we've merged https://github.com/bloomberg/goldpinger/pull/53 in, I think we can add some more context to the metrics.
So, just to make sure I understand what your use case is - what extra labels on which metrics do you need?
Great news!
I'll try and summarize our current experience:
- `goldpinger_peers_response_time_*` - these metrics have the `pod_ip` and `host_ip` labels of the pod being probed, but not the `host_name`.
- `goldpinger_nodes_health_total` - this metric is useful, but currently very difficult to use (for alerting and troubleshooting): it provides a way to alert when nodes are misbehaving, but the challenge is identifying which node. Right now we have an alert firing in one of our clusters, yet we can't deduce from the metrics which node is actually misbehaving; we just know that one node is unhealthy (this relates to @akhy's comment, I believe).
/edit: I think it would also be great if the host name could be used in Goldpinger's UI and logs instead of the host IP; it would make things much easier when troubleshooting.
wdyt @seeker89?
I tried going through the code yesterday to see if I could work out how to implement this, but to be honest I'm getting a bit lost... it seems like the pod IPs (and host IPs) are used as primary identifiers almost everywhere and passed around in ad-hoc maps. I suppose we'd need to define a struct for that and use it throughout the code?
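Something along these lines, maybe (a rough sketch with made-up names, not the actual code layout):

```go
// Rough sketch only - the names here are hypothetical, not taken from goldpinger.
package peers

// Peer identifies a ping target by name first, keeping the IPs as
// secondary information.
type Peer struct {
	PodName  string // metadata.name of the target pod
	PodIP    string // status.podIP
	HostName string // spec.nodeName of the node running the pod
	HostIP   string // status.hostIP
}

// Peers could replace the ad-hoc maps currently keyed by pod IP,
// keyed by pod name instead.
type Peers map[string]Peer
```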
Generally, I think it would be great if we could substitute pod IPs with pod names and host IPs with host names (probably keeping the IP data as well, but making it secondary), because having just the IP addresses displayed is rather confusing.
It would be (at least from my point of view) so much more human-friendly if the metrics and the UI (graph, heatmap, etc.) referenced the pod names + host (node) names instead of plain IPs.
Sorry if I'm going out of scope here.
Some good points here. I'll have a look when I get some free bandwidth. 👍
@seeker89 just curious if you had a chance to look into this? :pray:
Sorry, not yet. I did set some time for goldpinger next week, might be able to get into this.
An example latency check that works well!
```yaml
- alert: goldpinger-node-latency
  expr: |
    sum(rate(goldpinger_peers_response_time_s_sum{call_type="ping"}[1m]))
      by (goldpinger_instance, host_ip, pod) > 0.040
  for: 1m
  annotations:
    description: |
      Goldpinger pod {{ $labels.pod }} on node {{ $labels.goldpinger_instance }} cannot reach remote node at IP {{ $labels.host_ip }} in less than 40 ms!
    summary: Node {{ $labels.host_ip }} likely fubar. Overlay network latency should be less than 40ms!
  labels:
    severity: critical
```