travis-cookbooks
[Discussion] Use Dockerfiles to build Xenial images
@soulshake came up with this idea a while ago, and I think it's worth discussing, especially since we've just started work on Xenial. Trying this out now would mean less duplicated work should we decide to move to Dockerfiles in the future.
Here are the bullet points we came up with:
Pros:
- perfect moment for transition since we're just starting Xenial work
- simpler image baking process
- easier image testing process
- higher visibility into what gets included in each image
- less knowledge overhead, since we'd no longer depend on Chef at all, and would depend on Packer to a lesser extent
- more consistency between AWS/GCE builds
- better docker image caching, since building with Dockerfiles results in layers, unlike building docker images via Packer
Cons:
- the files might get too big and too hard to manage
  - I'm hoping this will encourage us to simplify our image baking process
- content duplication inside the docker files
  - this might be mitigated by building intermediate base images, which should also help with host creation times (see the sketch after these lists)
Risks:
- this changes the way we currently run builds on GCE
  - we can deploy one container per VM to start with (so we don't have to worry about enabling sudo)
- Ideally, I'd like to keep our current tests and run them on the new images, but I'm not sure if this is possible. Replacing the tests would be a significant overhead.
- spending too much time with this before realising it might be unfeasible
  - I would recommend dedicating a limited amount of time (~2 weeks) to try it out
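To illustrate the intermediate-base-image idea from the cons above, here's a rough sketch of what the layering could look like. The image names, file names, and package lists are all invented for illustration; the real images would live in build-env-linux.

```sh
# Hypothetical layout: one shared Xenial base image, with thin
# language-specific images built on top of it.

# Shared base: tooling common to every Xenial build image.
cat > Dockerfile.base <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
      git curl ca-certificates build-essential && \
    rm -rf /var/lib/apt/lists/*
EOF
docker build -t travis-xenial-base -f Dockerfile.base .

# Language image: only the delta on top of the shared base. The base
# layers are built once and cached, rather than duplicated per image.
cat > Dockerfile.ruby <<'EOF'
FROM travis-xenial-base
RUN apt-get update && \
    apt-get install -y --no-install-recommends ruby-full && \
    rm -rf /var/lib/apt/lists/*
EOF
docker build -t travis-xenial-ruby -f Dockerfile.ruby .
```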
We've discussed this further as part of the weekly team sync and decided to spend some time on a proof of concept. For this purpose, I've created https://github.com/travis-ci/build-env-linux to keep the new Dockerfiles.
After giving this a bit more thought, I've realized that there is at least one case where we do need to keep running fully virtualized: builds that require `service: docker`.
We could change our approach and use Dockerfiles for `sudo: false` builds, but I think that increases the chance of introducing differences between the two infras by mistake, so I'm afraid this might be it for this particular idea 💀
@bogdanap I see a couple of possible paths forward here.
Bind-mount the Docker socket
Bind-mounting `/var/run/docker.sock` lets you control Docker from within Docker, which is a great solution for most cases IMO. There's not a lot you can't do from within a container with full privileges.
But there might be some edge cases where running in a container (fully privileged, with root) just isn't enough. I haven't found any yet, but they're probably out there.
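For concreteness, here's a minimal sketch of the bind-mount approach, using the official `docker` image just so a docker CLI is available inside the container:

```sh
# Bind-mount the host's Docker socket into a container. Commands run
# inside talk to the HOST daemon -- no daemon runs in the container.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker \
  docker ps   # lists the host's containers, including this one
```

Note that anything started through the socket this way runs as a sibling of the build container on the host, not as a child inside it.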
There is one possible drawback when bind-mounting the Docker socket: a script that runs e.g. `docker ps -q | xargs docker kill` would cause the environment to kill itself (and perhaps other components that you would run next to it in containers).
That being said, this could be:
- documented (one possible convention is sketched below)
- perhaps mitigated with an authorization plugin
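For instance, one documented convention (hypothetical labels, not something we have today, reusing the made-up travis-xenial-ruby image from above) could be to label build containers and have cleanup scripts filter on that label:

```sh
# Label build containers so cleanup scripts can target them
# specifically, instead of killing everything on the host.
docker run -d --label travis.role=build travis-xenial-ruby sleep 3600

# Kill only the labelled build containers; the environment itself
# (and any sibling infra containers) survives.
docker ps -q --filter "label=travis.role=build" | xargs -r docker kill
```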
Docker image -> VM image
The idea here would be to make a Docker image bootable, `docker export` it, then import it as a VM image into GCE.
Since Docker images share the kernel with the host, you need to load a bunch of kernel modules, etc. I have `initrd` and `initramfs` in my notes from a convo about this at DockerCon, but I haven't tried it myself yet.
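For the record, here's the rough shape this could take. This is an untested sketch that glosses over exactly the kernel/initramfs/bootloader work mentioned above; GCE does expect a gzipped tarball containing a raw disk named `disk.raw`, but the bucket and image names here are made up.

```sh
# 1. Flatten the container filesystem (no layers, no kernel).
docker export "$(docker create travis-xenial-ruby)" > rootfs.tar

# 2. Build a bootable raw disk from the rootfs.
truncate -s 10G disk.raw
mkfs.ext4 -F disk.raw
sudo mount -o loop disk.raw /mnt
sudo tar -xf rootfs.tar -C /mnt
# ... install a kernel, initramfs and bootloader into /mnt here:
#     this is the hard, unsolved part referenced above ...
sudo umount /mnt

# 3. GCE expects a gzipped tarball containing a file named disk.raw.
tar -czf xenial-image.tar.gz disk.raw
gsutil cp xenial-image.tar.gz gs://my-images-bucket/
gcloud compute images create travis-xenial \
  --source-uri gs://my-images-bucket/xenial-image.tar.gz
```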
chroot tricks
"Another approach would be, instead of turning a container image into a VM image, to inject the container image into a pre-existing (minimal) VM image and use some chroot trick to hand off control to the container image.
I believe that Debian even has some decent tools to manage chroots (i.e. mount the right pseudofs in the right place etc)."
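A minimal sketch of that handoff, assuming the flattened rootfs from `docker export` is already on the VM (Debian's schroot is one of the tools that automates the pseudo-filesystem mounts; paths here are arbitrary):

```sh
# Unpack the flattened container image into the existing VM.
mkdir -p /srv/build-rootfs
tar -xf rootfs.tar -C /srv/build-rootfs

# Mount the pseudo-filesystems the container image expects.
mount -t proc  proc /srv/build-rootfs/proc
mount -t sysfs sys  /srv/build-rootfs/sys
mount --rbind  /dev /srv/build-rootfs/dev

# Hand off control to the container image.
chroot /srv/build-rootfs /bin/bash -l
```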
Sources
Thanks @jpetazzo and @ewindisch for input!