sig-storage-local-static-provisioner
custom SecurityContextConstraint to run sig-storage/local-volume-provisioner in a DaemonSet?
Hi,
Is there any documentation on which custom SecurityContextConstraint can be used to run registry.k8s.io/sig-storage/local-volume-provisioner in a DaemonSet without "privileged: true"? That setting is no longer allowed in OKD/OpenShift.
Proper documentation around this would be very useful. (Using an operator is not possible here, by the way.)
I am using the following SCC, which appears to work for now, but the provisioner gets a "permission denied" on the volume directory:
scc.yaml:
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
- CHOWN
- FSETID
- SETGID
- SETUID
- NET_BIND_SERVICE
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
  name: scc-local-provisioner
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- DAC_OVERRIDE
- FOWNER
- SETPCAP
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- runtime/default
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- hostPath
- projected
- secret
clusterrole.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:scc-local-provisioner
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - scc-local-provisioner
  resources:
  - securitycontextconstraints
  verbs:
  - use
rolebinding_scc.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:scc-local-provisioner
  # A RoleBinding is namespaced; it must live in the same namespace
  # as the ServiceAccount it binds.
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:scc-local-provisioner
subjects:
- kind: ServiceAccount
  name: local-static-provisioner
  namespace: kube-system
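For reference, here is a minimal sketch of the security-relevant fields the DaemonSet pod template would need so that this SCC is the one admitting it. The container name, image tag, and mount path are assumptions based on the manifests and log above, not a verified configuration:

# Sketch only: security-relevant parts of the provisioner DaemonSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-static-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: local-static-provisioner
  template:
    metadata:
      labels:
        app: local-static-provisioner
    spec:
      # Must match the ServiceAccount bound to the SCC above.
      serviceAccountName: local-static-provisioner
      containers:
      - name: provisioner   # illustrative name
        image: registry.k8s.io/sig-storage/local-volume-provisioner:v2.5.0   # tag illustrative
        securityContext:
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault   # matches seccompProfiles: runtime/default
          capabilities:
            add: [CHOWN, FSETID, SETGID, SETUID]          # subset of allowedCapabilities
            drop: [KILL, DAC_OVERRIDE, FOWNER, SETPCAP]   # requiredDropCapabilities
        volumeMounts:
        - name: local-disks
          mountPath: /mnt/local-disks
          mountPropagation: HostToContainer
      volumes:
      - name: local-disks
        hostPath:
          path: /mnt/local-disks

Note that dropping DAC_OVERRIDE means even a root container no longer bypasses ordinary file permission checks, so the container user needs plain Unix access to /mnt/local-disks. That may be consistent with the "permission denied" in the log below.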
oc logs -f local-static-provisioner:
E0514 17:31:15.318990 1 discovery.go:221] Failed to discover local volumes: error reading directory: open /mnt/local-disks: permission denied
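To narrow down whether this is a plain file-permission problem or something else (for example an SELinux denial), two quick checks can help. This is a sketch; the pod name is a placeholder:

# On the node: inspect owner, mode, and SELinux label of the discovery directory.
ls -ldZ /mnt/local-disks

# Against the cluster: confirm which SCC actually admitted the pod.
oc get pod local-static-provisioner-xxxxx -n kube-system \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}'

If the annotation names a different SCC than scc-local-provisioner, a higher-priority SCC matched first and the custom one is not being applied at all.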
@jonasbartho What I noticed in scc.yaml is that your current SCC lacks certain capabilities that might be required. Specifically, the SYS_ADMIN capability is often needed for operations involving hostPath volumes. So try using
allowedCapabilities:
- CHOWN
- FSETID
- SETGID
- SETUID
- NET_BIND_SERVICE
- SYS_ADMIN
and check whether it runs fine or not.
One more thing I want to ask: have you given the appropriate permissions on the directory /mnt/local-disks so that the container can access it? If you have not, you can run
sudo chmod -R 777 /mnt/local-disks
to make the directory accessible to the container.
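If making the tree world-writable is a concern, a narrower grant is possible. The 1000:1000 owner below is an assumption; first check which UID/GID the container actually runs as (for example with oc exec <pod> -- id):

# Grant only the provisioner's user (assumed UID:GID 1000:1000) access,
# instead of opening /mnt/local-disks to everyone.
sudo chown -R 1000:1000 /mnt/local-disks
sudo chmod -R u+rwX,g+rX,o-rwx /mnt/local-disks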
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.