[🐛 Bug]: Forceful kube-prometheus-stack chart installation even when disabled
What happened?
When trying to deploy the monitoring exporter for the Selenium Grid, I found that the Helm chart insists on installing the kube-prometheus-stack chart dependency even when its enabled flag is set to false.
This is because the dependency condition in the chart's Chart.yaml behaves as an OR rather than an AND: https://github.com/SeleniumHQ/docker-selenium/blob/d9f2777614a95bca9b9660911315d5cd96677070/charts/selenium-grid/Chart.yaml#L24
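For reference, the dependency entry looks roughly like this (a sketch, not the verbatim file; the version field is omitted). Helm evaluates only the first valid path in a comma-separated condition list, so monitoring.enabled: true wins regardless of kube-prometheus-stack.enabled: false:

```yaml
# charts/selenium-grid/Chart.yaml (sketch)
dependencies:
  - name: kube-prometheus-stack
    repository: https://prometheus-community.github.io/helm-charts
    # Helm resolves a comma-separated condition by taking the first path
    # that exists in the parent values, so monitoring.enabled: true
    # enables the sub-chart even when kube-prometheus-stack.enabled is false.
    condition: monitoring.enabled,kube-prometheus-stack.enabled
```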
Command used to start Selenium Grid with Docker (or Kubernetes)
```yaml
selenium-operator:
  global:
    seleniumGrid:
      imageRegistry: XXXXX
      imageTag: 4.35.0-20250808
      nodesImageTag: 4.35.0-20250808
      kubectlImage: XXXXX
  ingress:
    enabled: false
  basicAuth:
    enabled: false
  autoscaling:
    enableWithExistingKEDA: true
    scalingType: deployment
    scaledOptions:
      minReplicaCount: 1
      maxReplicaCount: 100
      pollingInterval: 5
    # scaledJobOptions:
    #   scalingStrategy:
    #     strategy: default
  # Configuration for isolated components (applied only if `isolateComponents: true`)
  components:
    router:
      nodeSelector:
        grid: "true"
    # Configuration for distributor component
    distributor:
      nodeSelector:
        grid: "true"
    # Configuration for Event Bus component
    eventBus:
      nodeSelector:
        grid: "true"
    # Configuration for Session Map component
    sessionMap:
      nodeSelector:
        grid: "true"
    # Configuration for Session Queue component
    sessionQueue:
      nodeSelector:
        grid: "true"
  # Configuration for selenium hub deployment (applied only if `isolateComponents: false`)
  hub:
    imageTag: 4.35.0-20250808
    imageRegistry: XXXXXXXX
    nameOverride: selenium-operator-hub
    resources:
      requests:
        memory: "8Gi"
        cpu: "1"
      limits:
        memory: "12Gi"
        cpu: "3"
    nodeSelector:
      fanduel.com/spot: utils
    tolerations:
      - key: "fanduel.com/spot"
        operator: "Equal"
        value: "utils"
        effect: "NoSchedule"
  # Configuration for chrome nodes
  chromeNode:
    enabled: true
    deploymentEnabled: true
    replicas: 2
    imageRegistry: XXXXX
    imageTag: 138.0-20250808
    nameOverride: selenium-operator-chrome-node
    resources:
      requests:
        memory: "2Gi"
        cpu: "1"
      limits:
        memory: "3Gi"
        cpu: "1500m"
    nodeSelector:
      fanduel.com/spot: utils
    tolerations:
      - key: "fanduel.com/spot"
        operator: "Equal"
        value: "utils"
        effect: "NoSchedule"
    extraEnvironmentVariables:
      - name: SE_NODE_GRID_URL
        value: ""
      - name: SCREEN_WIDTH
        value: "1600"
      - name: SCREEN_HEIGHT
        value: "900"
    dshmVolumeSizeLimit: 2Gi
    hostAliases:
      - ip: "127.0.0.1"
        hostnames:
          - "api.lab.amplitude.com"
  # Configuration for firefox nodes
  firefoxNode:
    enabled: false
  # Configuration for edge nodes
  edgeNode:
    enabled: false
  kube-prometheus-stack:
    enabled: false
  jaeger:
    enabled: false
  monitoring:
    enabled: true
    exporter:
      replicas: 1
      imageRegistry: 077700697743.dkr.ecr.us-east-1.amazonaws.com/docker-hub/ricardbejarano
      imageName: "graphql_exporter"
      imageTag: "v1.2.4"
      port: 9199
      service:
        enabled: true
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "9199"
```
Relevant log output
No logs; kube-prometheus-stack is installed anyway.
Operating System
EKS
Docker Selenium version (image tag)
4.35.0-20250808
Selenium Grid chart version (chart version)
0.43.2
@miguel-cardoso-mindera, thank you for creating this issue. We will troubleshoot it as soon as we can.
Info for maintainers
Triage this issue by using labels.
- If information is missing, add a helpful comment and then the I-issue-template label.
- If the issue is a question, add the I-question label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the help wanted label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable G-* label, and it will provide the correct link and auto-close the issue.
- After troubleshooting the issue, please add the R-awaiting answer label.
Thank you!
Hi, monitoring.enabled will enable both the exporter resources and the sub-chart stack.
To deploy the resources only, try using monitoring.enabledWithExistingAgent:
https://github.com/SeleniumHQ/docker-selenium/blob/trunk/charts/selenium-grid/CONFIGURATION.md
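Something like this (a sketch based on the configuration doc above; set one mode at a time):

```yaml
# Option 1: deploy the exporter resources AND the kube-prometheus-stack sub-chart
monitoring:
  enabled: true
---
# Option 2: deploy only the exporter resources, relying on an existing agent
monitoring:
  enabledWithExistingAgent: true
```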
Hey @VietND96, thanks for the response.
That does not seem to have worked; after changing my values.yaml to:
```yaml
kube-prometheus-stack:
  enabled: false
jaeger:
  enabled: false
monitoring:
  enabled: true
  enabledWithExistingAgent: true
  exporter:
    replicas: 1
    imageRegistry: quay.io/ricardbejarano
    imageName: "graphql_exporter"
    imageTag: "v1.2.4"
    port: 9199
    service:
      enabled: true
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9199"
```
It still wants to create the prometheus-node-exporter.
No, you should not put the two keys together; use only one at a time:
```yaml
monitoring:
  enabledWithExistingAgent: true
  exporter:
    replicas: 1
    imageRegistry: quay.io/ricardbejarano
    imageName: "graphql_exporter"
    imageTag: "v1.2.4"
    port: 9199
    service:
      enabled: true
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9199"
```
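Here, monitoring.enabled: true is what pulls in the kube-prometheus-stack dependency (per the condition above), so leaving it unset while using enabledWithExistingAgent avoids the sub-chart entirely.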
Thanks, that worked.
Yes. Does the scraped data reach your Prometheus endpoint?
We are using Datadog, but yes, I can scrape those using the annotations.
However, these are the metrics the exporter shows:
```
# HELP go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.47e-05
go_gc_duration_seconds{quantile="0.25"} 3.47e-05
go_gc_duration_seconds{quantile="0.5"} 9.43e-05
go_gc_duration_seconds{quantile="0.75"} 0.000216101
go_gc_duration_seconds{quantile="1"} 0.000216101
go_gc_duration_seconds_sum 0.000345101
go_gc_duration_seconds_count 3
# HELP go_gc_gogc_percent Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent.
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. Sourced from /gc/gomemlimit:bytes.
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 7
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.24.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.61384e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 6.355584e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 3792
# HELP go_memstats_frees_total Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 15665
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.631248e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.61384e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.260416e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.34176e+06
# HELP go_memstats_heap_objects Number of currently allocated objects. Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 3104
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 2.072576e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.602176e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7574260479488373e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 18769
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 19328
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31408
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 116000
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 130560
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.825458e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.617472e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 786432
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 786432
# HELP go_memstats_sys_bytes Number of bytes obtained from system. Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.2803088e+07
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads.
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 16
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 10
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.03
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048575e+06
# HELP process_network_receive_bytes_total Number of bytes received by the process over the network.
# TYPE process_network_receive_bytes_total counter
process_network_receive_bytes_total 7883
# HELP process_network_transmit_bytes_total Number of bytes sent by the process over the network.
# TYPE process_network_transmit_bytes_total counter
process_network_transmit_bytes_total 34640
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.5044608e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.75742587002e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.266413568e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 14
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
```
I'm a bit confused, since these look like default Go/Prometheus process metrics rather than metrics related to the Selenium Grid.
It took me all day, but I finally got metrics showing up in Datadog with this:
```yaml
kube-prometheus-stack:
  enabled: false
jaeger:
  enabled: false
monitoring:
  enabledWithExistingAgent: true
  exporter:
    replicas: 1
    imageRegistry: quay.io/ricardbejarano
    imageName: "graphql_exporter"
    imageTag: "v1.2.4"
    port: 9199
    service:
      enabled: true
      annotations:
        ad.datadoghq.com/graphql-exporter.checks: |
          {
            "openmetrics": {
              "init_config": {},
              "instances": [
                {
                  "openmetrics_endpoint": "http://%%host%%:%%port%%/query?endpoint=http://selenium-operator-hub.selenium:4444/graphql&query=%7Bgrid%7BsessionCount%20maxSession%20totalSlots%20nodeCount%20sessionQueueSize%7D%20nodesInfo%7Bnodes%7BsessionCount%20maxSession%20slotCount%7D%7D%20sessionsInfo%7BsessionQueueRequests%7D%7D",
                  "namespace": "selenium_grid",
                  "tls_verify": false,
                  "metrics": [
                    { "query_grid_sessionCount": "session_count" },
                    { "query_grid_maxSession": "max_sessions" },
                    { "query_grid_totalSlots": "total_slots" },
                    { "query_grid_nodeCount": "node_count" },
                    { "query_grid_sessionQueueSize": "session_queue_size" },
                    { "query_nodesInfo_nodes_sessionCount": "node_session_count" },
                    { "query_nodesInfo_nodes_maxSession": "node_max_sessions" },
                    { "query_nodesInfo_nodes_slotCount": "node_slot_count" },
                    { "query_sessionsInfo_sessionQueueRequests": "session_queue_requests" }
                  ]
                }
              ]
            }
          }
```
Unfortunately, I cannot leverage the chart's scraping config, which leaves things a bit hardcoded, but this is probably a limitation on the Datadog side and not yours.
Yes, since I have not tested it for compatibility with Datadog yet. Here is the scrape config that is used: https://github.com/SeleniumHQ/docker-selenium/blob/trunk/charts/selenium-grid/configs/scrape/selenium-grid.yaml
quay.io/ricardbejarano/graphql_exporter is a proxy that exposes the /query endpoint for scraping. It runs the query statement from the config file against the Hub GraphQL and converts the GraphQL JSON response to metrics automatically.
Can you try updating the scraping endpoint to <RELEASENAME>-metrics-exporter:9199 to see if you can do something similar to the Prometheus scrape config?
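For comparison, a Prometheus-style scrape job against the exporter would look something like this sketch (the hub service name and GraphQL query are taken from this thread, not verified against the shipped config):

```yaml
# Sketch only: scrape the exporter's /query endpoint, which proxies the
# Hub GraphQL and converts the JSON response to metrics.
scrape_configs:
  - job_name: selenium-grid
    metrics_path: /query
    params:
      endpoint: ["http://selenium-operator-hub.selenium:4444/graphql"]
      query: ["{grid{sessionCount maxSession totalSlots nodeCount sessionQueueSize}}"]
    static_configs:
      - targets: ["<RELEASENAME>-metrics-exporter:9199"]
```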
Not sure I follow; that's what I already did, right?
I don't think there's a way to make the Datadog agent use the Prometheus scraping config, which is why I added the annotation above pointing at quay.io/ricardbejarano/graphql_exporter.
A lot of metrics have "node=0" labels instead of using an already-existing nodeId. The same thing happens for sessions. Is this by design?
Running in autoscaling mode, isn't it expected that this label will keep the same value across different nodes and sessions and will therefore be misleading?
For example, there can only be one session per node, so all sessions end up labeled "session 0". If I then query session duration by node, I get a graph that makes no sense: the duration goes down, which should be impossible. We also can't tell when a session ends, since I can't group by session, nor do I know which node it belongs to: "node-0" actually corresponds to multiple pods that have been scaled up and down.
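For illustration, this is the kind of series I mean (metric names from the mapping above; the label name and values are approximated, not actual output):

```
query_nodesInfo_nodes_sessionCount{node="0"} 1
query_nodesInfo_nodes_maxSession{node="0"} 1
```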