Update to run as dspace user
The current Dockerfile for dspace-backend runs the backend as root. The `USER` instruction is not carried over to the last stage, which starts from `FROM tomcat:9-jdk${JDK_VERSION}`.
If I log in with `docker exec -it ... /bin/bash` and run

```shell
ps aux | grep java
```

I see that the owner of the running process is currently root.
Update Dockerfile.dependencies
- create the user with a specific UID, to reuse across stages
- use the `-m` flag on `useradd` to create the home folder, instead of separate `mkdir` and `chown`
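A minimal sketch of what the changed Dockerfile.dependencies stage could look like; the base image and stage name here are illustrative assumptions, while the fixed UID 10001 and the `dspace` username come from the PR:

```dockerfile
# Illustrative base image; the real Dockerfile.dependencies may differ
ARG JDK_VERSION=17
FROM eclipse-temurin:${JDK_VERSION} AS dependencies

# -m creates /home/dspace, replacing separate mkdir and chown steps;
# a fixed UID above 10000 lets later stages recreate the same user
RUN useradd -m -u 10001 dspace
USER dspace
```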
Update Dockerfile
- create the dspace user in the last stage
- add a `USER 10001 # dspace uid` instruction to the last stage
- fix warning about inconsistent casing (`as` -> `AS`, consistent with `FROM`)
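Because `USER` does not carry over across stages, the final stage has to declare the user again. A sketch assuming the tomcat base image from the description; the copy path is a placeholder, not the actual layout:

```dockerfile
ARG JDK_VERSION=17
FROM tomcat:9-jdk${JDK_VERSION} AS final

# copy artifacts built in an earlier stage (path is illustrative)
# COPY --from=dependencies /app/webapps/ /usr/local/tomcat/webapps/

# recreate the same fixed UID in this stage, then switch to it;
# a numeric USER lets Kubernetes verify the container is non-root
RUN useradd -m -u 10001 dspace
USER 10001
```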
Updated the merge request after attending a workshop with our IT department on kubernetes deployment. We are testing a kubernetes platform https://elastisys.io/compliantkubernetes which has been set up for us.
Some additional changes and their rationale:

- fixed user id with a number above 10000: a UID above 10000 was also mentioned during the course. See for example https://github.com/hexops-graveyard/dockerfile?tab=readme-ov-file#do-not-use-a-uid-below-10000
- using a numeric user, to allow the platform to check that the image is actually not running as root:

  > This means that your Dockerfile uses a non-numeric user and Kubernetes cannot validate whether the image truly runs as non-root.

  https://elastisys.io/compliantkubernetes/user-guide/safeguards/enforce-no-root/
The docker-compose setup worked on a fresh start (if the volumes hadn't been created yet), but failed to start if I had already created volumes for the assetstore and logs using a previous image. The ownership of these needs to be changed from root to the new user.
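One way to repair ownership on pre-existing volumes is a one-off root shell against them; the service name and mount paths below are assumptions, not taken from the compose file:

```shell
# run a one-off container as root so chown is permitted,
# then fix ownership of the volume-mounted directories
docker compose run --rm --user root dspace \
  chown -R 10001:10001 /dspace/assetstore /dspace/log
```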
Should maybe also update the CLI images to run as dspace instead of root, so that permissions don't break when running maintenance tasks.
The changes look reasonable. @OyvindLGjesdal it's marked as draft, anything else you plan to add, or is that because of the needed ownership change?
I think the PR is ready for review. I forgot to change its state and hadn't noticed the mention. Sorry!
I don't currently have a working instance running, so I haven't confirmed that everything works after the last force-push.
Tried to fix the failing build by adding `-m` to Dockerfile.dependencies.