is privileged really required for device plugins?
On the device plugins page, it says
The canonical directory /var/lib/kubelet/device-plugins requires privileged access, so a device plugin must run in a privileged security context.
However, several device plugin implementations don't do this, such as smarter-device-manager and Nvidia's k8s device plugin.
How are these plugins working? Is privileged not really required?
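For context, everything the plugin does with that directory is gRPC over Unix sockets: it registers itself with the kubelet's kubelet.sock and then serves its own socket alongside it. Here is a minimal sketch of the registration step, assuming the Pod mounts /var/lib/kubelet/device-plugins via a hostPath volume (the endpoint and resource names are made up):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Dial the kubelet's registration socket. The path is reachable from the
	// plugin Pod because the host directory is mounted in, not because of a
	// privileged securityContext.
	conn, err := grpc.DialContext(ctx, "unix://"+pluginapi.KubeletSocket,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial kubelet: %v", err)
	}
	defer conn.Close()

	// Tell the kubelet which socket the plugin serves (relative to the same
	// directory) and which extended resource it advertises.
	_, err = pluginapi.NewRegistrationClient(conn).Register(ctx, &pluginapi.RegisterRequest{
		Version:      pluginapi.Version,
		Endpoint:     "example-device.sock", // hypothetical plugin socket name
		ResourceName: "example.com/foo",     // hypothetical resource name
	})
	if err != nil {
		log.Fatalf("register with kubelet: %v", err)
	}
	log.Println("registered with kubelet")
}
```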
The docs are potentially misleading; they are talking about the security context of the Pod generally and not the securityContext field specifically.
We could try to reword to avoid confusion. It will be a bit tricky to come up with a good phrasing.
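To make the distinction concrete, here is a hedged sketch using the k8s.io/api/core/v1 types: what a device plugin Pod needs is a hostPath volume for the device-plugin directory (which is "privileged access" in the broad, pod-security sense), while the container's securityContext can leave privileged unset or false. The names and image below are placeholders:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// devicePluginPodSpec sketches the parts of a device-plugin Pod that matter
// here: the hostPath volume is what grants access to the kubelet's socket
// directory; securityContext.privileged plays no role in that.
func devicePluginPodSpec() corev1.PodSpec {
	hostPathDir := corev1.HostPathDirectory
	notPrivileged := false

	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "example-device-plugin",                     // placeholder
			Image: "registry.example.com/device-plugin:latest", // placeholder
			SecurityContext: &corev1.SecurityContext{
				Privileged: &notPrivileged, // explicitly not privileged
			},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "device-plugins",
				MountPath: "/var/lib/kubelet/device-plugins",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "device-plugins",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					Path: "/var/lib/kubelet/device-plugins",
					Type: &hostPathDir,
				},
			},
		}},
	}
}

func main() { _ = devicePluginPodSpec() }
```

Note that under the baseline Pod Security Standard it is the hostPath volume that gets flagged, not the securityContext, which is one way to read "requires privileged access".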
/language en
Some GPU experts may be able to answer this question and/or provide insight into how things actually work. We came across this question too when evaluating the impact on GPU resources of running Pods in rootless mode. In the case of the Nvidia device plugin, a special container runtime is required, which could be an indicator.
@sftim I suspect others have had the same misunderstanding as me. For example, generic-device-plugin and kubevirt's device plugins all run (unnecessarily?) as privileged.
@tengqm I think the issue is whether the device plugin needs to be privileged in order to use the socket in /var/lib/kubelet/device-plugins. It usually doesn't need special privileges to look around in /dev/ to determine which devices are available, though the details may differ depending on the specific device (such as nvidia).
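As a hedged illustration of that second point: device discovery is often just reading the filesystem, which works in an unprivileged container as long as the relevant path is visible (for example via a hostPath mount of /dev or a narrower path). The prefix below is only an example:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// listDevices returns the paths under /dev matching a prefix (for example
// "video" or "nvidia"). Plain directory reads like this need no privileged
// securityContext; they only need the path to be visible inside the container.
func listDevices(prefix string) ([]string, error) {
	entries, err := os.ReadDir("/dev")
	if err != nil {
		return nil, err
	}
	var names []string
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), prefix) {
			names = append(names, "/dev/"+e.Name())
		}
	}
	return names, nil
}

func main() {
	devs, err := listDevices("video") // hypothetical prefix; varies by device type
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading /dev:", err)
		os.Exit(1)
	}
	fmt.Println(devs)
}
```

Actually opening or issuing ioctls against a device node may still require the node to be exposed to the container with suitable permissions, but that is separate from privileged: true.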
/triage accepted
Sounds like something we need to clarify in the docs.
/sig node
/sig security
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.