build-push-action
Build cache for export step takes ~300s to complete
I followed the docs here to use caching in my actions, but my job takes an extremely long time (300s on average) to prepare and export build cache to GitHub. Is there a reason the caching takes so long?
YAML file:

```yaml
build-api:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: docker/setup-buildx-action@v1
    - name: Login to Github Packages
      uses: docker/login-action@v1
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - name: Build and push API
      uses: docker/build-push-action@v2
      with:
        context: .
        tags: |
          ghcr.io/<registry_name>/api:${{ github.event.release.tag_name }}
          ghcr.io/<registry_name>/api:latest
        build-args: |
          APP=api
        push: true
        cache-from: type=gha
        cache-to: type=gha,mode=max
```
+1 trying to have a very fast build on the CI, and currently all time is spent preparing and exporting the cache ...
@anthonyma94 @Tirke Hard to tell without a repro. Do you have a link to your repo?
@crazy-max it's a private repo unfortunately. Maybe @Tirke can help on that front?
I have the same issue, but I'm using `type=registry`; sometimes it just hangs there for over 2 hours. It commonly fluctuates between 3 and 15 minutes.
https://github.com/ZcashFoundation/zebra/runs/5032296281?check_suite_focus=true#step:6:968
Related: https://github.com/docker/build-push-action/issues/259
I have a similar issue, but it might be caused by cache size.
Repro repo here; the following jobs are clean runs with no previous cache:

- Example Server job run: actual build 1 min 20 s, cache export ~2 min 30 s (~90 s of that preparing for export). This is more or less acceptable. (dockerfile, ci workflow)
- Example Web app job run: actual build 5 min, cache export ~9 min (~8 min of that preparing for export). The cache is useless here; exporting it takes almost twice as long as an actual clean build. (dockerfile, ci workflow)

I'm using `type=gha,mode=max` because all my Dockerfiles are multi-staged. In the case of the Web app, the cause is probably a huge yarn cache dir (~1.5 GB) which is copied between stages. An ideal solution would be to mount this as an external volume, since the same cache is also used in other non-Docker workflows via actions/cache@v2, but I have no idea if and how it is possible to share a directory between GHA and Dockerfile stages.
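One technique that might help here (not confirmed by anyone in this thread) is a BuildKit cache mount, which keeps the yarn cache out of the image layers entirely, so it is never copied between stages or exported to the GHA cache. A minimal sketch, assuming a typical Node image and yarn's default cache location:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18 AS deps
WORKDIR /app
COPY package.json yarn.lock ./
# Hypothetical sketch: the cache mount lives on the builder, not in a layer,
# so it isn't part of the exported cache. Adjust the target path if you
# configure a custom yarn cache-folder.
RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
    yarn install --frozen-lockfile
```

Caveat: on ephemeral GitHub-hosted runners the builder's cache mounts start empty each run unless the buildx state itself is persisted, so this mainly helps on self-hosted or long-lived builders.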
@barthap This was my thinking too (external volume mount), theoretically that shouldn't be a problem or even difficult, but the question is how to turn off the cache export step.
ETA: We use self-hosted Github Actions runners.
Bumping this into 2023: not caching anything sucks and makes GHA much slower than just building locally.
By switching to the S3 cache backend, I solved all my performance issues and cache limitations with GitHub Actions.
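For reference, the S3 backend is configured the same way as `type=gha`, just with different attributes. A hedged sketch (the bucket name, region, and secret names below are placeholders, not from this thread):

```yaml
- name: Build and push API
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    # Hypothetical bucket/region; credentials are read from the environment
    cache-from: type=s3,region=us-east-1,bucket=my-buildx-cache
    cache-to: type=s3,region=us-east-1,bucket=my-buildx-cache,mode=max
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

This trades the GHA cache service's size limits and rate limits for a bucket you control, at the cost of managing credentials yourself.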
In my case, it worked well to switch between `mode=min` and `mode=max` depending on the cache hit rate. Since `mode=max` is not always better, I recommend experimenting with both and choosing the faster one. Options such as `compression=zstd` also saved a little time.
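As a sketch, the two variants only differ in the `cache-to` attributes (values here are illustrative):

```yaml
# mode=min (default): caches only layers of the final stage; smaller export
cache-to: type=gha,mode=min,compression=zstd

# mode=max: caches layers of all intermediate stages; bigger export,
# better hit rate for multi-stage builds
cache-to: type=gha,mode=max,compression=zstd
```

Which one is faster overall depends on whether the extra hits from `mode=max` outweigh its larger export time.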
Yes, you might hit GitHub's rate limit when pushing cache blobs to their backend, which we don't control. You might also be interested in the `timeout` attribute (https://docs.docker.com/build/cache/backends/gha/#synopsis) in case the service gets rate-limited.
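Per the linked docs, `timeout` caps how long a cache export or import may take before it is abandoned. A hedged sketch (the 5m value is illustrative; the documented default is 10m):

```yaml
with:
  cache-from: type=gha
  # Give up on the cache export after 5 minutes instead of hanging
  cache-to: type=gha,mode=max,timeout=5m
```

A timed-out export means that run's cache is lost, but the job itself keeps going instead of stalling for hours.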