
drivers: adc: use K_KERNEL_STACK_SIZEOF()

Open henrikbrixandersen opened this issue 1 year ago • 7 comments

Use K_KERNEL_STACK_SIZEOF() for calculating thread stack size, as this takes K_KERNEL_STACK_RESERVED into account.

Fixes: #69129 Fixes: #69130 Fixes: #69131 Fixes: #69132 Fixes: #69133

henrikbrixandersen avatar Feb 17 '24 15:02 henrikbrixandersen

But a selector whose role differs from the job's role can't be used to filter objects; it only determines whether extra labels are attached to targets. For example,

  - job_name: "job1"
    kubernetes_sd_configs:
      - role: endpoints
        selectors:
          - role: "endpoints"
            label: "app.kubernetes.io/component=metrics"
          - role: "pod"
            label: "app.kubernetes.io/component=metrics"
  - job_name: "job2"
    kubernetes_sd_configs:
      - role: endpoints
        selectors:
          - role: "endpoints"
            label: "app.kubernetes.io/component=metrics"
          - role: "pod"
            label: "app.kubernetes.io/component=unexisted-label"

The targets discovered by both jobs are the same, but job1's targets will have the matched pod labels __meta_kubernetes_pod_*.

So I think the docs may need some clarification as well, or is this a bug which needs to be fixed?

The endpoints role supports pod, service and endpoints selectors. The pod role supports node selectors when configured with attach_metadata: {node: true}.
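For illustration, a minimal sketch of one of those combinations, a service selector on an endpoints-role job (the job name and label value are placeholders; per the observation above, the non-endpoints selector influences which service metadata is attached rather than which targets are discovered):

  - job_name: "endpoints-with-service-selector"
    kubernetes_sd_configs:
      - role: endpoints
        selectors:
          - role: "service"
            label: "app.kubernetes.io/component=metrics"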

Haleygo avatar Jan 18 '24 15:01 Haleygo

The pod role supports node selectors when configured with attach_metadata: {node: true}.

@fpetkovski you added that line in #10080; can you comment on this?

I looked at the code change in this PR, and I can't see how it could make things work. You changed one test file, which I think avoided an error being generated, but you didn't do anything to change how selectors are applied.

bboreham avatar Feb 07 '24 11:02 bboreham

Thanks for the heads up, I will take a look by tomorrow.

fpetkovski avatar Feb 07 '24 12:02 fpetkovski

But a selector whose role differs from the job's role can't be used to filter objects; it only determines whether extra labels are attached to targets.

I think this is the right answer here, and I suggest we add it to the docs. I would assume that filtering by node labels can still be done with a relabel config on attached node labels.
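For concreteness, a rough sketch of that relabeling approach (the job name and label value are placeholders; with attach_metadata: {node: true}, the pod role exposes node labels as __meta_kubernetes_node_label_*):

  - job_name: "pods-on-selected-nodes"
    kubernetes_sd_configs:
      - role: pod
        attach_metadata:
          node: true
    relabel_configs:
      # keep only pods whose node carries the desired label; this filters
      # after discovery, so every pod in the cluster is still listed first
      - source_labels: [__meta_kubernetes_node_label_node_kubernetes_io_instance_type]
        regex: eklet
        action: keep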

fpetkovski avatar Feb 09 '24 07:02 fpetkovski

I looked at the code change in this PR, and I can't see how it could make things work. You changed one test file, which I think avoided an error being generated, but you didn't do anything to change how selectors are applied.

The trick is that the node selectors are used by the node informer, which is started only when "attach_metadata: {node: true}" is defined. The code already does the right thing; it's only because the config validation rejects node selectors for the pod role that users can't benefit from it.

IIUC, one use case would be to discover pods from one specific node and attach the node metadata:

  - job_name: "xxx"
    kubernetes_sd_configs:
      - role: pod
        selectors:
          - role: "pod"
            field: "spec.nodeName==foo"
          - role: "node"
            field: "metadata.name==foo"

Not sure if I could find other use cases.

simonpasquier avatar May 17 '24 14:05 simonpasquier

In a large k8s cluster, we want to monitor pods running on a subset of nodes. Relabeling is costly: most pods are dropped by relabeling, and the process causes high CPU consumption.

job_name: test
kubernetes_sd_configs:
  - role: pod
    selectors:
      - role: pod
        field: status.phase=Running
      - role: node
        label: node.kubernetes.io/instance-type=eklet
    node: true

We met two problems here:

  1. The configuration above is currently invalid
  2. From the current code, the node selector cannot filter out undesired pods

The selectors configuration is confusing here.

Sniper91 avatar Jun 04 '24 06:06 Sniper91

In a large k8s cluster, we want to monitor pods running on a subset of nodes. Relabeling is costly: most pods are dropped by relabeling, and the process causes high CPU consumption.

job_name: test
kubernetes_sd_configs:
  - role: pod
    selectors:
      - role: pod
        field: status.phase=Running
      - role: node
        label: node.kubernetes.io/instance-type=eklet
    node: true

We met two problems here:

1. The configuration above is currently invalid

2. From the current code, the node selector cannot filter out undesired pods

The selectors configuration is confusing here.

I assume you're putting node: true under attach_metadata (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).

The pod selector you're using will not restrict discovery to certain nodes; you'll need to use something like field: "spec.nodeName==foo" as in https://github.com/prometheus/prometheus/pull/13423#issuecomment-2117784292.
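For concreteness, a rough sketch of that suggestion applied to the config above (foo is a placeholder node name; chained field selectors filter server-side at discovery time, but they cannot match node labels, so this pins discovery to one named node rather than a label-selected subset):

job_name: test
kubernetes_sd_configs:
  - role: pod
    attach_metadata:
      node: true
    selectors:
      - role: pod
        field: "status.phase=Running,spec.nodeName=foo"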

machine424 avatar Jun 25 '24 16:06 machine424

Hello from the bug-scrub! @Haleygo do you think you will come back to this?

bboreham avatar Aug 13 '24 11:08 bboreham