Add pod security context from func.yaml
It would be nice to be able to set the pod security context in func.yaml. I ran into this when I was trying to mount a PVC and could not write to it. After a lot of digging, I found that I needed to set fsGroup like this:
kubectl patch services.serving/<name> --type merge \
-p '{"spec": {"template": {"spec": {"securityContext": {"fsGroup":1000}}}}}'
This is because the default group is 1000 (I used Go's os/user package to find it). I would prefer to be able to set that from func.yaml, or have it set automatically to some sane default.
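For reference, this is a minimal sketch (standard library only, my own snippet rather than anything from func) of how the uid/gid can be checked from inside the running container; in this case the reported gid was 1000, which is why fsGroup: 1000 was needed:

package main

import (
	"fmt"
	"os/user"
)

func main() {
	// Look up the user the process is running as and print its numeric uid/gid.
	u, err := user.Current()
	if err != nil {
		panic(err)
	}
	fmt.Printf("uid=%s gid=%s\n", u.Uid, u.Gid)
}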
Current func.yaml:
specVersion: 0.35.0
name: consumer
runtime: go
registry: <redacted>
image: <redacted>
imageDigest: <redacted>
created: 2023-12-13T00:39:05.888786906-05:00
build:
  builder: pack
run:
  volumes:
  - persistentVolumeClaim:
      claimName: knative-pc-cephfs
    path: /files
deploy:
  namespace: default
Here's the currently generated service. You can see that there is no securityContext at the pod level in the podspec:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    dapr.io/app-id: consumer
    dapr.io/app-port: "8080"
    dapr.io/enable-api-logging: "true"
    dapr.io/enabled: "true"
    dapr.io/metrics-port: "9092"
    serving.knative.dev/creator: kubernetes-admin
    serving.knative.dev/lastModifier: kubernetes-admin
  creationTimestamp: "2023-12-20T05:20:10Z"
  generation: 1
  labels:
    boson.dev/function: "true"
    boson.dev/runtime: go
    function.knative.dev: "true"
    function.knative.dev/name: consumer
    function.knative.dev/runtime: go
  name: consumer
  namespace: default
  resourceVersion: "11510806"
  uid: ...
spec:
  template:
    metadata:
      annotations:
        dapr.io/app-id: consumer
        dapr.io/app-port: "8080"
        dapr.io/enable-api-logging: "true"
        dapr.io/enabled: "true"
        dapr.io/metrics-port: "9092"
      creationTimestamp: null
      labels:
        boson.dev/function: "true"
        boson.dev/runtime: go
        function.knative.dev: "true"
        function.knative.dev/name: consumer
        function.knative.dev/runtime: go
    spec:
      containerConcurrency: 0
      containers:
      - env:
        - name: BUILT
          value: 20231220T002010
        - name: ADDRESS
          value: 0.0.0.0
        image: <redacted>
        livenessProbe:
          httpGet:
            path: /health/liveness
            port: 0
        name: user-container
        readinessProbe:
          httpGet:
            path: /health/readiness
            port: 0
          successThreshold: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /files
          name: pvc-knative-pc-cephfs
      enableServiceLinks: false
      timeoutSeconds: 300
      volumes:
      - name: pvc-knative-pc-cephfs
        persistentVolumeClaim:
          claimName: knative-pc-cephfs
  traffic:
  - latestRevision: true
    percent: 100
Agreed on both suggestions. I'll add this to our roadmap as a "ready to work" item.
Hello @zalsader! I'd like to work on this issue. Could you share some more context about how the func.yaml file is leveraged to generate the service definition? Perhaps pointing to the source code files would help too! This is my first issue and would love to contribute here!
@gouthamhusky Sure, yeah. Looking at existing closed PRs is a good first step. For example, my first contribution to the library was to add PVCs to the allowed volume options. I am happy to review any PRs you create. I'm not a maintainer, though, so the maintainers can provide more guidance.
Hey @zalsader, after digging into this issue I found that the podSecurityContext is set from here - https://github.com/knative/func/blob/d2fb76c39d5db6301a04ad78108c61e7da949a4b/pkg/k8s/persistent_volumes.go#L121
and defaultPodSecurityContext is defined here - https://github.com/knative/func/blob/d2fb76c39d5db6301a04ad78108c61e7da949a4b/pkg/k8s/security_context.go#L11
func defaultPodSecurityContext() *corev1.PodSecurityContext {
	// change ownership of the mounted volume to the first non-root user uid=1000
	if IsOpenShift() {
		return nil
	}
	runAsUser := int64(1001)
	runAsGroup := int64(1002)
	return &corev1.PodSecurityContext{
		RunAsUser:  &runAsUser,
		RunAsGroup: &runAsGroup,
		FSGroup:    &runAsGroup,
	}
}
If we have defined a podSecurityContext, why is it not reflected in the service yaml? For the container-level securityContext we do get the set fields, like:
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
which are set by this function:
func defaultSecurityContext(client *kubernetes.Clientset) *corev1.SecurityContext {
	runAsNonRoot := true
	sc := &corev1.SecurityContext{
		Privileged:               new(bool),
		AllowPrivilegeEscalation: new(bool),
		RunAsNonRoot:             &runAsNonRoot,
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		SeccompProfile:           nil,
	}
	if info, err := client.ServerVersion(); err == nil {
		var v *semver.Version
		v, err = semver.NewVersion(info.String())
		if err == nil && v.Compare(oneTwentyFour) >= 0 {
			sc.SeccompProfile = &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault}
		}
	}
	return sc
}
Still, if we want to set the podSecurityContext explicitly, we will have to define a new schema in func.yaml, create a new struct, set the value of FSGroup using some function, and pass it here - https://github.com/knative/func/blob/d2fb76c39d5db6301a04ad78108c61e7da949a4b/pkg/k8s/persistent_volumes.go#L121
Am I on the right track? If yes, it would be nice if you could assign this issue to me.
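To make the "some function" part concrete, here is a rough sketch, assuming the same imports and helpers (corev1, IsOpenShift) as the snippet quoted above; the function name, the fsGroup parameter, and its wiring are my own assumptions, not existing func code:

// Sketch only: same defaults as defaultPodSecurityContext, but honoring an
// explicit fsGroup if the user set one in func.yaml.
func podSecurityContextWithOverride(fsGroup *int64) *corev1.PodSecurityContext {
	if IsOpenShift() {
		return nil
	}
	runAsUser := int64(1001)
	runAsGroup := int64(1002)
	sc := &corev1.PodSecurityContext{
		RunAsUser:  &runAsUser,
		RunAsGroup: &runAsGroup,
		FSGroup:    &runAsGroup,
	}
	if fsGroup != nil {
		// User-supplied override wins over the built-in default.
		sc.FSGroup = fsGroup
	}
	return sc
}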
Let's say I define the property PodSecurityContext in func.yaml - should it be in RunSpec or in DeploySpec? I have a question about that.
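For illustration, one possible shape for that schema addition; the field name, yaml tag, and the choice of DeploySpec are assumptions on my part, not the project's decision:

// Hypothetical schema addition to the function config (illustrative only).
type DeploySpec struct {
	Namespace string `yaml:"namespace,omitempty"`
	// ...existing fields...

	// FSGroup, if set, would be copied into the generated Service's pod-level
	// securityContext so that mounted volumes are writable by the function.
	FSGroup *int64 `yaml:"fsGroup,omitempty"`
}

A user could then set, for example, fsGroup: 1000 under deploy: in func.yaml, matching the value that had to be patched manually above.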
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.