But a selector with a role different from the job role can't be used to filter objects; it only determines whether extra labels are attached to targets. For example:
- job_name: "job1"
kubernetes_sd_configs:
- role: endpoints
selectors:
- role: "endpoints"
label: "app.kubernetes.io/component=metrics"
- role: "pod"
label: "app.kubernetes.io/component=metrics"
- job_name: "job2"
kubernetes_sd_configs:
- role: endpoints
selectors:
- role: "endpoints"
label: "app.kubernetes.io/component=metrics"
- role: "pod"
label: "app.kubernetes.io/component=unexisted-label"
The targets discovered by both jobs are the same, but job1's targets will have the matched pod labels `__meta_kubernetes_pod_*` attached.
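For illustration, the extra metadata on one of job1's targets might look roughly like this (pod and node names are made-up placeholders; the exact set of `__meta_kubernetes_pod_*` labels depends on the pod):

```yaml
__meta_kubernetes_pod_name: "metrics-5f7c9d-abc12"
__meta_kubernetes_pod_node_name: "node-1"
__meta_kubernetes_pod_label_app_kubernetes_io_component: "metrics"
```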
So I think the doc may need some clarification as well, or is this a bug which needs to be fixed?
The endpoints role supports pod, service and endpoints selectors. The pod role supports node selectors when configured with attach_metadata: {node: true}.
@fpetkovski you added that line in #10080; can you comment on this?
I looked at the code change in this PR, and I can't see how it could make things work. You changed one test file, and I think avoided an error being generated, but you didn't do anything to change how selectors are applied.
Thanks for the heads up, I will take a look by tomorrow.
But a selector with a role different from the job role can't be used to filter objects; it only determines whether extra labels are attached to targets.
I think this is the right answer here, and I suggest we add it to the docs. I would assume that filtering by node labels can still be done with a relabel config on attached node labels.
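As a rough sketch of that workaround (the job name is hypothetical, and the label/value are taken from the example later in this thread), filtering on an attached node label with relabeling could look something like:

```yaml
- job_name: "pods-on-selected-nodes"   # hypothetical job name
  kubernetes_sd_configs:
    - role: pod
      attach_metadata:
        node: true   # needed so node labels are attached as __meta_kubernetes_node_label_*
  relabel_configs:
    # keep only targets whose node has node.kubernetes.io/instance-type=eklet
    - source_labels: [__meta_kubernetes_node_label_node_kubernetes_io_instance_type]
      regex: eklet
      action: keep
```

This still discovers every pod in the cluster before dropping most of them, which is exactly the CPU cost described further down in this thread.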
I looked at the code change in this PR, and I can't see how it could make things work. You changed one test file, and I think avoided an error being generated, but you didn't do anything to change how selectors are applied.
The trick is that the node selectors are used by the node informer, which is started only when "attach_metadata: {node: true}" is defined. The code already does the right thing; it's only because the config validation rejects node selectors for the pod role that users can't benefit from it.
IIUC one use case would be to discover pods from 1 specific node and attach the node metadata:
- job_name: "xxx"
kubernetes_sd_configs:
- role: pod
selectors:
- role: "pod"
field: "spec.nodeName==foo"
- role: "node"
field: "metadata.name==foo"
Not sure if I could find other use cases.
In a large k8s cluster, we want to monitor pods running on a subset of nodes. Relabeling is costly: most pods are dropped by relabeling, and the process causes high CPU consumption.
```yaml
job_name: test
kubernetes_sd_configs:
  - role: pod
    selectors:
      - role: pod
        field: status.phase=Running
      - role: node
        label: node.kubernetes.io/instance-type=eklet
    node: true
```
We met two problems here:
- The configuration above is currently invalid
- From the current code, the node selector cannot filter out undesired pods

The `selectors` configuration is confusing here.
I assume you're putting `node: true` under `attach_metadata` (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). The pod selector you're using will not restrict discovery to certain nodes; you'll need to use something like `field: "spec.nodeName==foo"` as in https://github.com/prometheus/prometheus/pull/13423#issuecomment-2117784292.
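Putting those two suggestions together, a minimal sketch of the corrected config might look like this (the node name `foo` is a placeholder, and the node selector line still depends on the validation change discussed above being accepted):

```yaml
job_name: test
kubernetes_sd_configs:
  - role: pod
    # node metadata is attached only when this is set
    attach_metadata:
      node: true
    selectors:
      # pod field selectors are what restrict which pods are discovered;
      # multiple requirements can be combined with a comma
      - role: pod
        field: status.phase=Running,spec.nodeName=foo
      # only limits what the node informer watches; rejected by config
      # validation for the pod role at the time of this discussion
      - role: node
        label: node.kubernetes.io/instance-type=eklet
```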
Hello from the bug-scrub! @Haleygo do you think you will come back to this?