docker-compose-buildkite-plugin
Image cleanup after successful build
Hi all,
We have been running Buildkite with this docker-compose plugin for a while now and really like it. However, we keep running into disk-space issues. After some research we found that every build creates a number of 'buildkite_buildhash_app' images. We currently work around this by running `docker image prune --all`,
but that also removes the last image, which makes the next build step take much longer. It would be really great if we could clean up all images except the most recent one (to keep the cache and save build time) at the end of our pipeline. Is there already a way to do this, or should this be implemented?
I would like to hear whether more people have this problem and how we could solve it without removing all images.
The docker-compose plugin cleans up containers, networks and volumes in its pre-exit
hook, but we don't do anything about image cleanup because generally folks want warm caches.
In our AWS Elastic Stack we have some hooks that clean up images when disk is running low, which works pretty well:
https://github.com/buildkite/elastic-ci-stack-for-aws/blob/master/packer/conf/buildkite-agent/hooks/environment#L19-L30
https://github.com/buildkite/elastic-ci-stack-for-aws/blob/master/packer/conf/bin/bk-check-disk-space.sh
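The idea behind those hooks is simple: before a job starts, check free disk space, and only prune Docker images when it drops below a threshold, so healthy hosts keep their caches warm. A minimal sketch of that pattern (the threshold, paths, and function name here are illustrative, not the stack's actual script):

```shell
#!/bin/bash
set -euo pipefail

# Minimum free space (in KB) before pruning kicks in -- illustrative default of 5 GB.
DISK_MIN_AVAILABLE=${DISK_MIN_AVAILABLE:-5242880}

disk_is_low() {
  # $1 = available kilobytes on the filesystem holding Docker's data
  [ "$1" -lt "$DISK_MIN_AVAILABLE" ]
}

# POSIX-portable df: column 4 of the second output line is available KB.
available_kb=$(df -Pk / | awk 'NR==2 {print $4}')

if disk_is_low "$available_kb"; then
  echo "Free space is ${available_kb}KB, pruning unused Docker images"
  if command -v docker >/dev/null; then
    # '|| true' so a stopped daemon doesn't fail the whole build.
    docker image prune --all --force || true
  fi
else
  echo "Free space is ${available_kb}KB, keeping the image cache warm"
fi
```

Running this from an agent `environment` hook means pruning only happens on hosts that are actually short on space, instead of after every build.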
Hope that helps!
Thanks for your fast response. This could work, but in that case it is still possible that some caches we still need get removed. Of course we want to keep the caches warm, but for that we only need the last image of each pipeline and each branch. We don't need the old images, right? So in this example:
- Project A
  - master
    - build D
    - build A
  - some_feature
    - build C
    - build B
We only need to keep the images from builds D and C, since builds A and B are older and will never be used for caching. Or am I misinterpreting the Docker caching system?
Sorry for the delay here, but the layered nature of Docker means that, as long as you are making good use of it, successive images only add small amounts of disk usage, because most of the layers that make up an image are shared. So the amount of space saved shouldn't be that big (in an idealized, and probably non-existent, scenario).
That said, you should be able to clean up images as often as you'd like on your agents and use the `cache-from` option in builds, which makes the agent do a `docker pull` of a series of images so that their layers are pre-populated and steps can be skipped when building new images.
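With this plugin, that pull-then-build behaviour is driven from the pipeline configuration. A sketch of what that could look like (the plugin version, registry, and image names are placeholders, not values from this thread):

```yaml
steps:
  - label: ":docker: build"
    plugins:
      - docker-compose#v3.9.0:  # version is illustrative
          build: app
          # Pull this image first and reuse any matching layers as a build cache.
          cache-from: app:index.docker.io/myorg/myapp:latest
          # Push the fresh build back so the next agent can cache from it.
          push: app:index.docker.io/myorg/myapp:latest
```

On a fresh agent the pull simply misses and the build starts from scratch; on later builds only the changed layers are rebuilt, even after the local images have been pruned.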
If that is not exactly your use case, please re-open this ticket so that we can investigate how this plugin can best accommodate your scenario.