Issue with Jenkins Kubernetes Plugin
Hello, after some testing with the Fabric8 Jenkins we ran into a rather strange issue with the Jenkins Kubernetes Plugin. It seems that the plugin always mounts the path /home/jenkins/workspace from the host into the jnlp-client container. To me it would make more sense if the $JENKINS_HOME directory of the Jenkins container were mounted there. It also caused a problem for us, since the host's home directory drive is very small and the builds quickly fill it up, so it would be good if the same physical storage that backs $JENKINS_HOME in the Jenkins container were used. Is there a reason why a new path on the host system is mounted at all?

Best regards
Eric
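For reference, the behaviour described above corresponds roughly to the following pod volume definition. This is a hedged reconstruction of what the plugin-generated agent pod appears to look like, not the actual template; the pod and image names are placeholders:

```yaml
# Rough sketch of the current behaviour: the jnlp agent's workspace is a
# hostPath volume, so everything under /home/jenkins/workspace ends up on
# the node's local disk rather than on the Jenkins master's $JENKINS_HOME PV.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-jnlp-agent        # placeholder name
spec:
  containers:
  - name: jnlp
    image: jenkinsci/jnlp-slave   # placeholder image
    volumeMounts:
    - name: workspace
      mountPath: /home/jenkins/workspace
  volumes:
  - name: workspace
    hostPath:
      path: /home/jenkins/workspace   # node-local path, not the Jenkins PV
```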
so this issue looks to be related to the jenkins-docker image and the use of the Kubernetes Plugin: https://github.com/fabric8io/jenkins-docker/blob/master/config/config.xml#L27
we can probably disable that now right?
Wouldn't it be possible to use a volume container for /home/jenkins in this case? Our problem is also that all the build data stays on the host system after the build, which is a potential security threat. One Kubernetes-native way to do that is sketched below.
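In Kubernetes terms, the "volume container" idea could be covered by a dedicated PersistentVolumeClaim instead of host storage. A minimal sketch, with the claim name, size, and access mode all assumptions rather than anything from the fabric8 setup:

```yaml
# Hypothetical claim that could back /home/jenkins in the agent pods instead
# of a hostPath mount, so build data is removed together with the claim
# rather than lingering on the node's filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-agent-home        # hypothetical name
spec:
  accessModes:
  - ReadWriteMany                 # needed if several agent pods/nodes share it
  resources:
    requests:
      storage: 10Gi               # assumed size
```

Deleting the claim would then also remove the build data, instead of leaving it behind on the node.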
I have just noticed this issue in our Fabric8 cluster.
This seems to me to be quite a high-priority bug, since the Jenkins workspace volume (PV) currently does not work as intended (I believe).
To better illustrate the current state as far as I could trace it:
- Jenkins pod mounts the workspace PV at /var/jenkins_home/workspace (the same as $JENKINS_HOME/workspace)
- this PV is only used for the checkout of Jenkinsfile projects
- the Jenkins Kubernetes plugin (and the build pod) use /home/jenkins/workspace on the local host as the mount for the workspace in use. The build log shows:
Running on 60594bc7b77 in /home/jenkins/workspace/workspace/my-test-project
- strangely enough, the workspace directory even appears twice in the path
- this node-local directory leads to errors and issues, for example when a project is deleted and recreated, or when the host simply does not have enough disk space
- the workspace is also not shared between builds
- I am not even sure it works when building on multiple nodes, as the checkout stage runs outside of the build pod, so it could happen on another host in that host's local directory and then be missing in the build pod...
- in any case it should be using the workspace PV here (see the sketch below)
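A minimal sketch of that last point, assuming the PV behind /var/jenkins_home/workspace is exposed through a claim named jenkins-workspace (an assumption); sharing it between the master pod and build pods on different nodes would also require a ReadWriteMany-capable storage backend:

```yaml
# Hypothetical build/agent pod that mounts the same workspace claim the
# Jenkins master uses for $JENKINS_HOME/workspace, instead of a hostPath
# on whichever node the pod happens to be scheduled to.
apiVersion: v1
kind: Pod
metadata:
  name: build-pod-workspace-pv    # hypothetical name
spec:
  containers:
  - name: jnlp
    image: jenkinsci/jnlp-slave   # placeholder image
    volumeMounts:
    - name: workspace
      mountPath: /home/jenkins/workspace
  volumes:
  - name: workspace
    persistentVolumeClaim:
      claimName: jenkins-workspace   # assumed claim backing the workspace PV
```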