Container cannot start when running with uid/gid set (when using a local db with init_if_not_exists?)
To be able to mount the DLS filesystem at Diamond from the cluster, a pod must run as non-root, with a uid/gid mapped to a user that has the appropriate permissions to read and write files on that filesystem.
Deploying a known-working configuration of tiled into the Kubernetes cluster fails after changing who the pod runs as:
podSecurityContext:
  runAsUser: <uid>
  runAsGroup: <gid>
securityContext:
  runAsUser: <uid>
  runAsGroup: <gid>
Attached log: logs-from-tiled-in-tiled-7d7fbccd59-hnfbn.log
I'm assuming it has something to do with creating the SQLite database? With init_if_not_exists: true, it would need to create /storage/catalog.db on first start.
This is otherwise using the default Helm configuration:
authentication:
  allow_anonymous_access: false
trees:
  - path: /
    tree: catalog
    args:
      uri: "sqlite+aiosqlite:////storage/catalog.db"
      writable_storage: "/storage/data"
      init_if_not_exists: true
I believe the solution is to let the Dockerfile accept UID/GID build args and create a user with those IDs, but I am not sure whether that is required for all stages of the Dockerfile?
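A minimal sketch of that idea, assuming a Debian-based final stage (the base image, user name, and paths here are placeholders, not anything taken from tiled's actual Dockerfile); only the final runtime stage should need the user, since earlier build stages run at image-build time rather than in the cluster:

# Hypothetical final stage only; earlier build stages can stay as root.
FROM python:3.12-slim

# Accept uid/gid at build time so the image user matches the cluster user.
ARG UID=1000
ARG GID=1000

# Create a matching group and user (the name "tiled" is arbitrary).
RUN groupadd --gid "${GID}" tiled \
    && useradd --uid "${UID}" --gid "${GID}" --create-home tiled

# Make sure the storage mount point exists and is owned by that user.
RUN mkdir -p /storage && chown "${UID}:${GID}" /storage

# Run as the numeric uid/gid rather than root.
USER ${UID}:${GID}

Building with docker build --build-arg UID=<uid> --build-arg GID=<gid> . would bake the user in, although ownership set in the image does not carry over to a volume mounted on /storage at runtime, so the volume itself would still need suitable permissions.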
Are there any requirements for containerised tiled to be running as root?
We run it as non-root in production. I think your theory is plausible.
It's definitely the permissions on the /storage directory that are the issue for startup: I've added an emptyDir overriding that directory while I'm still fiddling with a temporary database.
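For reference, the workaround looks roughly like this in the Deployment's pod spec (the volume and container names are placeholders, not the chart's actual values):

volumes:
  - name: storage
    # emptyDir sidesteps the permission problem for now,
    # but the catalog is lost when the pod is deleted
    emptyDir: {}
containers:
  - name: tiled
    volumeMounts:
      - name: storage
        mountPath: /storage

Something with persistence and the right ownership would be needed once the catalog is meant to outlive the pod.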
It would be nice to have clearer logging when startup fails because the database could not be created.