Please provide an example of how to save and load Docker images to/from the cache
Code of Conduct
- [X] I have read and agree to the GitHub Docs project's Code of Conduct
What article on docs.github.com is affected?
https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows
What part(s) of the article would you like to see updated?
In the examples, there is only one example of how to use the cache, and it uses npm; another example lists cache keys. Can you please add an example of how to use docker save and docker load to cache large Docker images? This hugely reduces network overhead.
Additional information
To boost the performance of an action, we need to pull Docker images. Those images are big, sometimes 2 or 3 GB, and we run our actions many times. This means 20 or 30 GB of data needs to be exchanged per day.
We can't write the YAML necessary to use cache with docker.
Thanks for opening this issue. A GitHub docs team member should be by to give feedback soon. In the meantime, please check out the contributing guidelines.
@Nefcanto Thank you for opening this issue! I'll get this triaged for review ✨
Hey @Nefcanto, thank you so much for opening this issue and providing helpful context!
You or anyone else is welcome to open an issue to add a Docker example to the doc. It is helpful if you can also link to a successful workflow run that uses the example code when you submit the PR.
Thank you! ⚡
@SiaraMist, thanks for replying. If I could do it, I would not ask for it in the docs. The reason I asked is that I cannot do it and I cannot find a successful example online. I even asked ChatGPT and Gemini and they could not help. Some people even argue that it's intentional and GitHub does not want images to be cached.
@Nefcanto it's quite doable, although I'm not at all sure caches are the right tool for your use case. docker save dumps a container image to a tarball, and docker load --input=foo.tar loads an image from such a tarball. GitHub Actions cache can cache any data you like, but is limited to 10 GB per repo. For larger container images, you might be better off using GitHub Container Registry.
Here's the proof-of-concept that I threw together for using caches:
- docker-save.yml
on:
  workflow_dispatch:
jobs:
  save_image:
    runs-on: ubuntu-latest
    steps:
      - name: Pull the busybox image
        run: docker pull busybox
      - name: Save the busybox image
        run: |
          docker save busybox > busybox.tar
          gzip busybox.tar
      - name: Cache the tarball
        uses: actions/cache@v4
        with:
          path: busybox.tar.gz
          key: my_static_key
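One caveat with a static key like my_static_key: the cache entry never invalidates, so a stale tarball will keep being restored even after the image changes upstream. A sketch of the caching step that works around this by putting the pinned image tag into the key (the tag shown is a placeholder, not from the thread):

```yaml
      # Hypothetical variant: include the pinned image reference in the key
      # so that bumping the tag produces a fresh cache entry.
      - name: Cache the tarball
        uses: actions/cache@v4
        with:
          path: busybox.tar.gz
          key: docker-busybox-1.36   # change when you change the pinned tag
```

If you pull a mutable tag such as latest, there is no reliable string to key on, so a static key plus a periodic manual cache eviction may be the pragmatic choice.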
- docker-load.yml
on:
  workflow_dispatch:
jobs:
  load_image:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the cache
        uses: actions/cache@v4
        with:
          path: busybox.tar.gz
          key: my_static_key
      - name: Unzip the tarball
        run: gunzip busybox.tar.gz
      - name: Load the container image
        run: docker load --input=busybox.tar
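In practice you would usually combine restore and save into one job: restore the cache first, and only pull and save the image on a miss. A sketch under the same assumptions (busybox, static key), using the cache action's cache-hit output; actions/cache saves the path in its post step automatically when the primary key missed, so no explicit save step is needed:

```yaml
on:
  workflow_dispatch:
jobs:
  use_image:
    runs-on: ubuntu-latest
    steps:
      - name: Restore cached tarball
        id: image-cache
        uses: actions/cache@v4
        with:
          path: busybox.tar.gz
          key: my_static_key
      # Only hit the network when the cache missed.
      - name: Pull and save on cache miss
        if: steps.image-cache.outputs.cache-hit != 'true'
        run: |
          docker pull busybox
          docker save busybox | gzip > busybox.tar.gz
      # docker load reads a (gzipped or plain) tarball from stdin.
      - name: Load the container image
        run: gunzip -c busybox.tar.gz | docker load
```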
And here's a log from docker-load to prove that both worked:
2024-05-31T22:05:12.4919516Z Current runner version: '2.316.1'
2024-05-31T22:05:12.4942044Z ##[group]Operating System
2024-05-31T22:05:12.4942813Z Ubuntu
2024-05-31T22:05:12.4943177Z 22.04.4
2024-05-31T22:05:12.4943462Z LTS
2024-05-31T22:05:12.4943865Z ##[endgroup]
2024-05-31T22:05:12.4944253Z ##[group]Runner Image
2024-05-31T22:05:12.4944639Z Image: ubuntu-22.04
2024-05-31T22:05:12.4945108Z Version: 20240526.1.0
2024-05-31T22:05:12.4946088Z Included Software: https://github.com/actions/runner-images/blob/ubuntu22/20240526.1/images/ubuntu/Ubuntu2204-Readme.md
2024-05-31T22:05:12.4947540Z Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240526.1
2024-05-31T22:05:12.4948454Z ##[endgroup]
2024-05-31T22:05:12.4948847Z ##[group]Runner Image Provisioner
2024-05-31T22:05:12.4949319Z 2.0.370.1
2024-05-31T22:05:12.4949701Z ##[endgroup]
2024-05-31T22:05:12.4950626Z ##[group]GITHUB_TOKEN Permissions
2024-05-31T22:05:12.4952317Z Contents: read
2024-05-31T22:05:12.4952832Z Metadata: read
2024-05-31T22:05:12.4953440Z Packages: read
2024-05-31T22:05:12.4953998Z ##[endgroup]
2024-05-31T22:05:12.4956979Z Secret source: Actions
2024-05-31T22:05:12.4957675Z Prepare workflow directory
2024-05-31T22:05:12.5573115Z Prepare all required actions
2024-05-31T22:05:12.5748903Z Getting action download info
2024-05-31T22:05:12.7548943Z Download action repository 'actions/cache@v4' (SHA:0c45773b623bea8c8e75f6c82b208c3cf94ea4f9)
2024-05-31T22:05:13.0263036Z Complete job name: load_image
2024-05-31T22:05:13.1369805Z ##[group]Run actions/cache@v4
2024-05-31T22:05:13.1370938Z with:
2024-05-31T22:05:13.1371565Z path: busybox.tar.gz
2024-05-31T22:05:13.1372463Z key: my_static_key
2024-05-31T22:05:13.1373211Z enableCrossOsArchive: false
2024-05-31T22:05:13.1373924Z fail-on-cache-miss: false
2024-05-31T22:05:13.1374871Z lookup-only: false
2024-05-31T22:05:13.1375639Z save-always: false
2024-05-31T22:05:13.1376271Z ##[endgroup]
2024-05-31T22:05:15.0862596Z Received 0 of 2159992 (0.0%), 0.0 MBs/sec
2024-05-31T22:05:15.1263248Z Cache Size: ~2 MB (2159992 B)
2024-05-31T22:05:15.1265368Z [command]/usr/bin/tar -xf /home/runner/work/_temp/94aa4aae-bed8-448b-858a-fdd40951801e/cache.tzst -P -C /home/runner/work/Testy/Testy --use-compress-program unzstd
2024-05-31T22:05:15.1388801Z Cache restored successfully
2024-05-31T22:05:15.3269145Z Cache restored from key: my_static_key
2024-05-31T22:05:15.3583222Z ##[group]Run gunzip busybox.tar.gz
2024-05-31T22:05:15.3583802Z [36;1mgunzip busybox.tar.gz[0m
2024-05-31T22:05:15.3658571Z shell: /usr/bin/bash -e {0}
2024-05-31T22:05:15.3659042Z ##[endgroup]
2024-05-31T22:05:15.4233962Z ##[group]Run docker load --input=busybox.tar
2024-05-31T22:05:15.4234535Z [36;1mdocker load --input=busybox.tar[0m
2024-05-31T22:05:15.4290454Z shell: /usr/bin/bash -e {0}
2024-05-31T22:05:15.4290899Z ##[endgroup]
2024-05-31T22:05:15.5593024Z Loaded image: busybox:latest
2024-05-31T22:05:15.5714332Z Post job cleanup.
2024-05-31T22:05:15.6917970Z Cache hit occurred on the primary key my_static_key, not saving cache.
2024-05-31T22:05:15.7200824Z Cleaning up orphan processes
@APCBoston, I got it. Though you manually cached the images, this was a creative approach. In fact, you cached a bunch of compressed files, and you used the docker command to create those files.
I don't think of it as particularly creative... you asked for an example of how to use docker save and docker load with the GitHub Actions cache. And while you refer to it as "manual" caching... that's how the cache action works...
Import Saved Docker images
$ docker load < rook-ceph.tar
Getting image source signatures
Copying blob 5f91d4a491de: 829.12 MiB / 834.23 MiB
Copying blob 5f91d4a491de: 834.23 MiB / 834.23 MiB
Copying config dd85e44a0f8b: 419 B / 419 B
Writing manifest to image destination
Storing signatures
dd85e44a0f8bcf876749eabaeae5924ab6778b5ce191b37e08d4874982d8a601