Workingdir does not give write access to the group
Expected Behavior
Workingdir should be created with write access for the group. Images that run as a nonroot user should be able to run commands that require write access in the workingdir.
Actual Behavior
Workingdir is created with no write access for the group, even though the group has write access for the workspace itself.
(For images with a nonroot user) Running a command like `mkdir` in the workingdir returns a `Permission denied` error.
Steps to Reproduce the Problem
- TaskRun

  ```yaml
  apiVersion: tekton.dev/v1beta1
  kind: TaskRun
  metadata:
    name: workingdir-example
  spec:
    podTemplate:
      securityContext:
        fsGroup: 65532
    workspaces:
      - name: output
        emptyDir: {}
    params:
      - name: subDirectory
        value: /testDir/
    taskSpec:
      # params and workspaces used by the step must also be declared here
      params:
        - name: subDirectory
          type: string
      workspaces:
        - name: output
      steps:
        - name: run-command
          image: urvip/workingdir-example:latest
          workingDir: $(workspaces.output.path)/$(params.subDirectory)
          script: |
            #!/bin/sh
            set -eu
            ls -l ../
            ls -l ../../
  ```
- Output

  ```
  # ls -l ../
  total 4
  drwxr-sr-x 2 root nonroot 4096 Mar 25 16:04 testDir
  # ls -l ../../
  total 4
  drwxrwsrwx 3 root nonroot 4096 Mar 25 16:04 output
  ```
Additional Info
- Kubernetes version (output of `kubectl version`):

  ```
  Client Version: v1.28.7
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.28.3-gke.1286000
  ```

- Tekton Pipeline version (output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`):

  ```
  Client version: 0.32.0
  Pipeline version: v0.52.0
  ```
Hey @urvi-p, thanks for the issue. This is the "normal" Kubernetes behavior; tektoncd/pipeline doesn't do any magic with workingDir (and does very little to none with workspaces). When you specify a workingDir that doesn't exist (which is the case here: $(workspaces.output.path)/$(params.subDirectory) doesn't exist yet), it's Kubernetes/the kubelet/the container runtime that creates that folder, and there is little to nothing Tekton can do to control it.
@vdemeester I assumed that it was the workingdirinit binary that created the subpath: https://github.com/tektoncd/pipeline/blob/main/cmd/workingdirinit/main.go
Could we update the permissions there to something more permissive than 755, I wonder?
But my understanding might be wrong here.
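For context, a minimal sketch of roughly what that binary does, as described in this thread; this is not the upstream code verbatim, and the /workspace fallback for relative paths is an assumption based on the discussion below. The key point is the hard-coded 0755, which matches the drwxr-xr-x listings in this issue:

```go
// Simplified sketch of the workingdirinit behavior described above
// (not the upstream code verbatim): each workingDir passed as an
// argument is created with a fixed 0755, so the group never gets
// write access regardless of fsGroup.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	for _, d := range os.Args[1:] {
		p := d
		if !filepath.IsAbs(p) {
			// Relative workingDirs are resolved against /workspace.
			p = filepath.Join("/workspace", p)
		}
		if err := os.MkdirAll(p, 0o755); err != nil {
			log.Fatalf("creating workingDir %q: %v", p, err)
		}
	}
}
```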
As @chitrangpatel noted, it looks like Tekton, not Kubernetes, is creating the workingdir in cases where it's a relative path or starts with /workspace.
For example, if I look at the permissions on both the workspace (using an emptyDir) and the directory created by Tekton, they are different:
```
drwxrwxrwx 3 root root 4096 Apr 25 21:38 /workspace/<some_workspace>
drwxr-xr-x 2 root root 4096 Apr 25 21:38 /workspace/<some_workspace>/<some_directory>
```
Should the created directories just have the same permissions as their 'parent' workspace?
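One way to implement that, as a hedged sketch: mkdirLikeParent is a hypothetical helper (not upstream code) that stats the parent workspace and reuses its permission bits, carrying over the setgid bit that fsGroup sets (the s in drwxrwsrwx above):

```go
package workingdirinit

import (
	"fmt"
	"os"
	"path/filepath"
)

// mkdirLikeParent is a hypothetical helper: it creates dir with the
// same permission bits as its parent directory (the workspace), and
// carries over the setgid bit so fsGroup ownership keeps propagating
// to newly created files.
func mkdirLikeParent(dir string) error {
	parent, err := os.Stat(filepath.Dir(dir))
	if err != nil {
		return fmt.Errorf("stat parent of %s: %w", dir, err)
	}
	mode := parent.Mode().Perm() | (parent.Mode() & os.ModeSetgid)
	if err := os.MkdirAll(dir, mode); err != nil {
		return err
	}
	// MkdirAll is subject to the process umask, so apply the mode again.
	return os.Chmod(dir, mode)
}
```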
I looked into this. Turns out that we have two volumeMounts: an implicit volume mount that Tekton provides at /workspace and, unfortunately, the user workspaces, which are mounted under the same path (/workspace/<user-workspace>), which is what leads to this confusion.
The code at https://github.com/tektoncd/pipeline/blob/main/cmd/workingdirinit/main.go is creating and updating directories under the implicit volume mount, because only that volumeMount is mounted in the workingDirInit container; the user's workspace is not. So while we could update the code there, it actually wouldn't affect the user's workspace as Vincent suggested above.
We could do the following though:
- Also mount the user workspaces into the workingDirInit containers. That way, when this logic runs (i.e. for anything under /workspace, create a directory), we would be creating directories directly under the user's workspace, as Kubernetes does. However, we can take more control here and create the directory with the same permissions as the parent directory. Obviously, if the directory already exists (e.g. from a PVC), we don't do anything. I think that provides an easier and more intuitive user experience (see the sketch after this comment). WDYT?
cc @tektoncd/core-maintainers looking for feedback here.
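To illustrate the mounting half of that proposal, a hedged sketch of a pod-construction helper. The function shape, container name, and the idea of passing the steps' volumeMounts through are assumptions about how pkg/pod could be changed, not the actual code:

```go
package pod

import corev1 "k8s.io/api/core/v1"

// workingDirInitContainer is a hypothetical constructor: today only
// the implicit /workspace mount is attached to the working-dir init
// container; passing along the steps' volumeMounts would let it create
// directories inside the user workspaces as well.
func workingDirInitContainer(image string, workingDirs []string, stepMounts []corev1.VolumeMount) corev1.Container {
	return corev1.Container{
		Name:         "working-dir-initializer",
		Image:        image,
		Args:         workingDirs,
		VolumeMounts: stepMounts, // previously just the implicit /workspace mount
	}
}
```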
@vdemeester I'm bringing this back on our radar. Please see my comment above. I think we can fix this and we probably should. WDYT?
> Also mount the user workspaces into the workingDirInit containers. That way, when this logic runs (i.e. for anything under /workspace, create a directory), we would be creating directories directly under the user's workspace, as Kubernetes does. However, we can take more control here and create the directory with the same permissions as the parent directory. Obviously, if the directory already exists (e.g. from a PVC), we don't do anything. I think that provides an easier and more intuitive user experience. WDYT?
What happens if the workingDir is not under /workspace? Would we always have a "volume" for the workingDir? It might have weird side effects, I think.
> What happens if the workingDir is not under /workspace? Would we always have a "volume" for the workingDir? It might have weird side effects, I think.
I think what happens even now is that the workingDir init tries to create the path within /workspace. Coincidentally, the user-defined workspaces are also mounted by default under /workspace/<ws-name>. So the workingDir init creates the subfolders under /workspace/<ws-name>/...; however, it is only creating the path in the implicit volume at /workspace, because the path /workspace/<ws-name> is shadowed by the user-defined workspace's volume mount. My point is that it is a bit broken right now anyway. We should clean it up.
I think if we mount all the volumes that the Task uses into the workingDirInit containers, and don't limit it to just /workspace paths, then this will always work. The only time this will not work is when the workingDir is not on a volume mount at all; in that case, the behavior will be exactly what it is today (i.e. Kubernetes handles it). I think inheriting the permissions of the parent directory should be the default behavior wherever possible.
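Putting the two halves together, a small hypothetical follow-up to the mkdirLikeParent sketch above (same assumed package): no /workspace special-casing, and pre-existing directories, e.g. on a PVC, are left untouched:

```go
package workingdirinit

import "os"

// initWorkingDir combines the two ideas in this thread: skip directories
// that already exist (e.g. data on a PVC) and otherwise create the path
// with the parent's mode via mkdirLikeParent from the earlier sketch.
func initWorkingDir(dir string) error {
	if _, err := os.Stat(dir); err == nil {
		return nil // pre-existing directory: leave it untouched
	}
	return mkdirLikeParent(dir)
}
```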