terraform-provider-kubernetes
default_allow_privilege_escalation should not be written to manifest if not specified
If default_allow_privilege_escalation is not specified in a kubernetes_pod_security_policy resource, it should not be written to the PodSecurityPolicy manifest at all. With it set to true, the admission controller explicitly sets allowPrivilegeEscalation to true on any pod that leaves it unspecified, which breaks scheduling on Fargate (Pod not supported on Fargate: invalid SecurityContext fields: AllowPrivilegeEscalation). With it set to false, it clashes with pods that are declared privileged but neglect to include allowPrivilegeEscalation. Omitting the field entirely falls back to Kubernetes' implicit behaviour, which is the desired outcome.
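To illustrate the Fargate case (a hand-written sketch with placeholder pod and image names, not output captured from a cluster): under a policy with defaultAllowPrivilegeEscalation: true, a pod that leaves allowPrivilegeEscalation unset is mutated on admission to roughly the following, and Fargate then rejects it with the error quoted above.

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      # injected by the PodSecurityPolicy admission plugin because the pod
      # spec left it unset and the policy defines a default
      allowPrivilegeEscalation: true

The point is only that the field is written into the pod spec instead of being left for Kubernetes to infer.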
Terraform Version, Provider Version and Kubernetes Version
Terraform version: v0.14.11
Kubernetes provider version: v2.1.0
Kubernetes version: v1.19.8
Affected Resource(s)
- kubernetes_pod_security_policy
Terraform Configuration Files
resource "kubernetes_pod_security_policy" "privileged" {
metadata {
name = "privileged"
annotations = {
"seccomp.security.alpha.kubernetes.io/allowedProfileNames" = "*"
}
labels = {
"app.kubernetes.io/managed-by" = "terraform"
"app.mintel.com/terraform-module" = "control-plane"
}
}
spec {
privileged = true
allow_privilege_escalation = true
allowed_capabilities = ["*"]
volumes = ["*"]
host_network = true
host_ports {
min = 0
max = 65535
}
host_ipc = true
host_pid = true
run_as_user {
rule = "RunAsAny"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "RunAsAny"
}
fs_group {
rule = "RunAsAny"
}
}
}
Steps to Reproduce
terraform apply
Expected Behavior
A Pod Security Policy should be applied to the cluster without the defaultAllowPrivilegeEscalation field set:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    app.kubernetes.io/managed-by: terraform
    app.mintel.com/terraform-module: control-plane
  name: privileged
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
Actual Behavior
A Pod Security Policy is created with the defaultAllowPrivilegeEscalation field set to false:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    app.kubernetes.io/managed-by: terraform
    app.mintel.com/terraform-module: control-plane
  name: privileged
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
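To illustrate the clash this causes (again a hand-written sketch with placeholder names, not output from a cluster): with defaultAllowPrivilegeEscalation: false in the policy above, a pod that declares privileged: true but omits allowPrivilegeEscalation has the field defaulted to false by the admission plugin, and the API server rejects the combination of privileged: true with allowPrivilegeEscalation: false.

apiVersion: v1
kind: Pod
metadata:
  name: privileged-example
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
      # injected from the policy's defaultAllowPrivilegeEscalation: false;
      # validation does not allow a privileged container with privilege
      # escalation disabled, so the pod is rejected
      allowPrivilegeEscalation: false

Dropping defaultAllowPrivilegeEscalation from the policy avoids both mutations.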
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Might not be a priority now with PSPs going away in Kubernetes 1.25.