Support for "restricted" pod security policy
Describe the feature
With Kubernetes 1.25, Pod Security Admission becomes the standard mechanism to enforce a particular Pod Security Standard. To support the `restricted` policy, specific fields need to be set in a Pod's `.spec` (like dropping capabilities). Knative needs to add these fields in:
- The Knative control- and data-plane
- User-deployed workloads, like pods created on behalf of KServices or Eventing sources (such as the apiserversource deployment).
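As a sketch of what "specific fields" means in practice, a pod spec that passes the `restricted` profile would look roughly like this (pod name, container name, and image are illustrative, not from our manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    # Pod-level: can also be set per container instead
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: controller
      image: example/controller
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
```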
When warnings are switched on, we currently see the following 16 violations:
Admission violations
* would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "gather", "copy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "gather", "copy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "gather", "copy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "gather", "copy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "activator", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "activator", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "autoscaler", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "autoscaler", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "autoscaler-hpa", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "autoscaler-hpa", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "controller", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "controller", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "dispatcher", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "dispatcher", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "domain-mapping", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "domain-mapping", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "domainmapping-webhook", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "domainmapping-webhook", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "eventing-controller", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "eventing-controller", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "eventing-webhook", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "eventing-webhook", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "filter", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "filter", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "ingress", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "ingress", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "mt-broker-controller", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "mt-broker-controller", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "webhook", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "webhook", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": unrestricted capabilities (container "dispatcher" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "dispatcher" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
* would violate PodSecurity "restricted:v1.24": unrestricted capabilities (container "migrate" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "migrate" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
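For context, warnings like the above are emitted by Pod Security Admission when a namespace carries the corresponding labels; a minimal sketch (namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
  labels:
    # Warn (but don't block) on violations of the restricted profile:
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
```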
This issue is not specific to Serving (it also applies to Eventing), but I'm starting a tracking umbrella issue here nevertheless.
@rhuss I guess it is not just serving
I see a lot of violations for kube-rbac-proxy, which is not in our manifests, IIRC?
With that said, Kubernetes has annoying defaults, which means we need to explicitly fill in the following fields to be more secure than the default:
- `securityContext.capabilities.drop`
- `securityContext.seccompProfile` needs to be `RuntimeDefault` (not the default?!) or `Localhost`
- `securityContext.runAsNonRoot` (I can sort of understand this one, as the docker `USER` metadata is not available at the apiserver)
Amusingly, I think even if you enable the use of `RuntimeDefault` as the default seccomp profile for all workloads, you still need to select it explicitly...
Also, that last default is finally beta in 1.25.
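(For reference, that cluster-level default is the kubelet's `SeccompDefault` feature; a sketch of the `KubeletConfiguration` fields involved, assuming a 1.25 cluster where the admin controls kubelet config:)

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Use RuntimeDefault as the seccomp profile for workloads that
# don't specify one (beta in 1.25, gated by SeccompDefault):
featureGates:
  SeccompDefault: true
seccompDefault: true
```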
And:
> It is possible that the default profiles differ between container runtimes and their release versions, for example when comparing those from CRI-O and containerd.
> I see a lot of violations for kube-rbac-proxy, which is not in our manifests, IIRC?
Yeah, sorry. I picked the log files from our midstream CI, but I think the point is clear, and you already identified the three fields we probably all have to add manually to every Pod we are creating, directly or indirectly.
We've added runAsNonRoot to most of our containers already, but not the other two, which I think were introduced more recently.
seccompProfile is the one that really kills me, but /shrug.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
/remove-lifecycle stale