Export swap behavior via label
This patch exports a label indicating the kubelet's configured swap behavior. By default, it reads the kubelet configuration from /var/lib/kubelet/config.yaml; if a custom kubelet config path is in use, it can be set via the -kubelet-config-path flag.
If the swap behavior is not specified, NoSwap is assumed (matching kubelet's default).
Note: feature.node.kubernetes.io/memory-swap.behavior (kubelet's memorySwap.swapBehavior, more generally) reflects whether Kubernetes workloads are allowed to use swap. A node may have swap enabled, but if the kubelet is set to NoSwap, pods cannot use it. As such, the behavior label is exported only if node-level swap is enabled:
feature.node.kubernetes.io/memory-swap: "true"
feature.node.kubernetes.io/memory-swap.behavior: LimitedSwap
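For illustration, a minimal sketch of the labeling rule described above. The label keys are the ones introduced by this patch; the swapLabels helper is made up for this sketch, not the patch's actual code:

```go
package main

import "fmt"

// swapLabels sketches the rule from the description: the memory-swap
// labels are exported only when node-level swap is enabled, and an
// unset swapBehavior falls back to kubelet's default, NoSwap.
func swapLabels(nodeSwapEnabled bool, swapBehavior string) map[string]string {
	labels := map[string]string{}
	if !nodeSwapEnabled {
		// No node-level swap: neither label is exported.
		return labels
	}
	labels["feature.node.kubernetes.io/memory-swap"] = "true"
	if swapBehavior == "" {
		swapBehavior = "NoSwap" // kubelet's default
	}
	labels["feature.node.kubernetes.io/memory-swap.behavior"] = swapBehavior
	return labels
}

func main() {
	fmt.Println(swapLabels(true, "LimitedSwap"))
}
```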
Fixes: https://github.com/kubernetes-sigs/node-feature-discovery/issues/2178
Deploy Preview for kubernetes-sigs-nfd ready!
| Name | Link |
|---|---|
| Latest commit | f5adaa8675b34309e718e3835bc54b58c6e6d6d8 |
| Latest deploy log | https://app.netlify.com/projects/kubernetes-sigs-nfd/deploys/68cc00c97fb88a00086f5eeb |
| Deploy Preview | https://deploy-preview-2192--kubernetes-sigs-nfd.netlify.app |
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: fmuyassarov. Once this PR has been reviewed and has the lgtm label, please assign marquiz for approval. For more information see the Code Review Process.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test pull-node-feature-discovery-build-image-cross-generic
@fmuyassarov thank you for the patch and for working on this. Let's get this finalized.
My main comment is that in topology-updater we already read the kubelet config (from the kubelet configz endpoint by default); the helpers are in pkg/utils/kubeconf/. Could we do the same in nfd-worker?
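(For reference, a minimal sketch of the configz-based approach: fetching the kubelet's effective configuration through the API server's node proxy. The wiring is simplified, the NODE_NAME env var is an assumption, and the caller needs RBAC permission for nodes/proxy; this is not the actual pkg/utils/kubeconf/ code.)

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// The kubelet's /configz endpoint wraps the effective KubeletConfiguration
// under a top-level "kubeletconfig" key; we only pull out the swap behavior.
type configzResponse struct {
	KubeletConfig struct {
		MemorySwap struct {
			SwapBehavior string `json:"swapBehavior"`
		} `json:"memorySwap"`
	} `json:"kubeletconfig"`
}

// swapBehaviorFromConfigz fetches the kubelet config via the API server's
// node proxy and returns memorySwap.swapBehavior (empty string if unset).
func swapBehaviorFromConfigz(ctx context.Context, cs kubernetes.Interface, nodeName string) (string, error) {
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(nodeName).
		SubResource("proxy").
		Suffix("configz").
		DoRaw(ctx)
	if err != nil {
		return "", err
	}
	var resp configzResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return "", err
	}
	return resp.KubeletConfig.MemorySwap.SwapBehavior, nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes running in-cluster
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// NODE_NAME is assumed to be injected via the downward API.
	behavior, err := swapBehaviorFromConfigz(context.Background(), cs, os.Getenv("NODE_NAME"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("swapBehavior:", behavior)
}
```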
I will check that.
Hi @marquiz. I've reworked the patch to utilize configz. The current state is missing the documentation update, which I thought of adding once I get a green light that this is the right approach. Please take a look.
/retest
@fmuyassarov independent of where we logically put the new feature, I think the detection should be outside nfd-worker core package. If we keep it in memory.swap, then probably the feature name should somehow reflect that it's kubelet behavior.
The reason for splitting the detection between the memory and worker packages is that the memory package doesn't have access to the kubeconfig and the kubelet. Technically speaking, there is no need to do anything within the memory package, but I used it because we are detecting a memory-related feature. Sure, the detection goes via the kubelet, but the main subject is not the kubelet but the memory. Since we already had logic for detecting swap memory within the memory package, I thought of extending it to cover this swap feature too.
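(For context, node-level swap detection of the kind the memory package already does can be sketched as below. Reading /proc/swaps is one common approach; this is illustrative, not the package's actual code.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// swapDevices returns the swap devices listed in /proc/swaps.
// The first line of the file is a column header, so any additional
// line means node-level swap is enabled.
func swapDevices() ([]string, error) {
	data, err := os.ReadFile("/proc/swaps")
	if err != nil {
		return nil, err
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	var devs []string
	for _, line := range lines[1:] { // skip the header line
		if fields := strings.Fields(line); len(fields) > 0 {
			devs = append(devs, fields[0])
		}
	}
	return devs, nil
}

func main() {
	devs, err := swapDevices()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("swap enabled: %v (devices: %v)\n", len(devs) > 0, devs)
}
```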
@fmuyassarov: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-node-feature-discovery-verify-master | f5adaa8675b34309e718e3835bc54b58c6e6d6d8 | link | true | /test pull-node-feature-discovery-verify-master |
| pull-node-feature-discovery-e2e-test-master | f5adaa8675b34309e718e3835bc54b58c6e6d6d8 | link | true | /test pull-node-feature-discovery-e2e-test-master |
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale