Cache Not Found After Previous Successful Caching: Windows (latest)
The YAML file is fairly straightforward.
...
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - uses: bazelbuild/setup-bazelisk@v1
      - name: Cache Build
        uses: actions/cache@v2
        env:
          cache-name: build-cache
        with:
          path: |
            ~/_bazel_runneradmin/*/action_cache
            ~/_bazel_runneradmin/*/execroot
            ~/_bazel_runneradmin/*/external
            ~/_bazel_runneradmin/*/server
          key: ${{ runner.os }}-${{ env.cache-name }}
...
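For reference, a common variant of this step adds a restore-keys fallback, so that when the exact key misses, the most recent cache with a matching key prefix is restored instead. A sketch only; the github.sha suffix is an assumption to make the key vary per commit, and this does not address the branch-scoping issue discussed later in this thread:

      - name: Cache Build
        uses: actions/cache@v2
        env:
          cache-name: build-cache
        with:
          path: |
            ~/_bazel_runneradmin/*/action_cache
            ~/_bazel_runneradmin/*/execroot
            ~/_bazel_runneradmin/*/external
            ~/_bazel_runneradmin/*/server
          # Exact key varies per commit; restore-keys falls back to the
          # newest cache whose key starts with the given prefix.
          key: ${{ runner.os }}-${{ env.cache-name }}-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-${{ env.cache-name }}-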
In one repository, the cache works just fine, albeit slowly (~4.5 minutes to restore ~1 GB). I set up a second repository with the exact same code and got the following output:
Post job cleanup.
C:\Windows\System32\tar.exe --posix -z -cf cache.tgz -P -C D:/a/[private_repository]/[private_repository] --files-from manifest.txt
Cache Size: ~952 MB (998425734 B)
Cache saved successfully
Cache saved with key: Windows-build-cache
The next pull request rolls in:
Run actions/cache@v2
with:
  path: ~/_bazel_runneradmin/*/action_cache
  ~/_bazel_runneradmin/*/execroot
  ~/_bazel_runneradmin/*/external
  ~/_bazel_runneradmin/*/server
  key: Windows-build-cache
env:
  cache-name: build-cache
Cache not found for input keys: Windows-build-cache
The only thing I can think of is that during the setup of the repository, the default branch was set to "main". I manually reverted this back to the traditional "master". All other repository settings are identical.
Any help would be greatly appreciated; I can also share the rather long "Run actions/cache@v2" step output if it helps.
I am experiencing this same problem on Linux. It is a mix of GitHub-hosted and self-hosted runners. The first job is GitHub-hosted:
Post job cleanup.
/usr/bin/tar --posix --use-compress-program zstd -T0 -cf cache.tzst -P -C /home/runner/work/devops/devops --files-from manifest.txt
Cache Size: ~47 MB (48897358 B)
Cache saved successfully
Cache saved with key: 1-devops:4e26e8165ed562806cc4090908f37d84cb50ca58
The next job runs on a self-hosted runner:
Run actions/cache@v2
with:
  key: 1-devops:4e26e8165ed562806cc4090908f37d84cb50ca58
  path: /opt/actions-runner/_work/devops/devops/production.tar.gz
env:
  DOCKER_BUILDKIT: 1
Cache not found for input keys: 1-devops:4e26e8165ed562806cc4090908f37d84cb50ca58
We have another repo that uses this same workflow (cache on GitHub hosted, restore on self-hosted) and it is working without trouble.
I found a fix that worked for me.
In my workflow, I was using ${{ github.workspace }}/prod.tar.gz as the cache path. I used the exact same string on the GitHub-hosted runner (where we create the artifact and cache it) and on the self-hosted runner (where we restore the cache to deploy on an internal server). On the GitHub-hosted runner this resolves to something like /home/runner/work/<REPO>/<REPO>/prod.tar.gz, while on the self-hosted runner it resolves to /opt/actions-runner/_work/<REPO>/<REPO>/prod.tar.gz.
I changed the path to simply prod.tar.gz in all my uses, and it resolved the issue. Apparently the string you provide for the cache path feeds into how the cache is identified.
Note: The cache key did not need changing. It is dynamically based on the commit SHA, and has been the whole time. Only the path value needed to match.
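To make the pattern concrete, a minimal sketch of a save-on-hosted / restore-on-self-hosted pair using a relative path. The job names and the build command are illustrative, and the key mirrors the SHA-based scheme described above:

  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Relative path: the string is identical on every runner, so the
      # cache matches even when the workspace root differs.
      - uses: actions/cache@v2
        with:
          path: prod.tar.gz
          key: 1-devops:${{ github.sha }}
      - run: tar -czf prod.tar.gz build/    # illustrative artifact creation

  deploy:
    needs: build
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      # Same relative path string, so the same cache is found here even
      # though the workspace lives under /opt/actions-runner/_work/...
      - uses: actions/cache@v2
        with:
          path: prod.tar.gz
          key: 1-devops:${{ github.sha }}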
I'm running into a similar issue on Linux.
In the first run the cache is loaded correctly:
Run actions/cache@v2
with:
  path: /ccache
  key: ccache-build-firmware
env:
  CMAKE_C_COMPILER_LAUNCHER: ccache
  CMAKE_CXX_COMPILER_LAUNCHER: ccache
  CCACHE_DIR: /ccache
/usr/bin/docker exec ec98d0e948a95a1e266358a4fdfb7c7a81c61ded5bb5cfe455bc06909cd1ffbc sh -c "cat /etc/*release | grep ^ID"
Received 27393810 of 27393810 (100.0%), 75.7 MBs/sec
Cache Size: ~26 MB (27393810 B)
/usr/bin/tar -z -xf /__w/_temp/60932023-7d89-4e55-8ef0-b7c9b1230154/cache.tgz -P -C /__w/ShrapnelMonorepo/ShrapnelMonorepo
Cache restored successfully
Cache restored from key: ccache-build-firmware
Post job cleanup.
/usr/bin/docker exec ec98d0e948a95a1e266358a4fdfb7c7a81c61ded5bb5cfe455bc06909cd1ffbc sh -c "cat /etc/*release | grep ^ID"
Cache hit occurred on the primary key ccache-build-firmware, not saving cache.
https://github.com/ShrapnelDSP/ShrapnelMonorepo/actions/runs/1833708418
In the second run it is not found:
Run actions/cache@v2
/usr/bin/docker exec 3fe703ec11a78a0701ab3998372d4d4f910b6e95b1e5bd3c70d149bdef9c95b5 sh -c "cat /etc/*release | grep ^ID"
Cache not found for input keys: ccache-build-firmware
Post job cleanup.
/usr/bin/docker exec 3fe703ec11a78a0701ab3998372d4d4f910b6e95b1e5bd3c70d149bdef9c95b5 sh -c "cat /etc/*release | grep ^ID"
/usr/bin/tar --posix -z -cf cache.tgz -P -C /__w/ShrapnelMonorepo/ShrapnelMonorepo --files-from manifest.txt
Cache Size: ~26 MB (27484560 B)
Cache saved successfully
Cache saved with key: ccache-build-firmware
https://github.com/ShrapnelDSP/ShrapnelMonorepo/actions/runs/1833751801
My issue is probably due to this restriction: https://docs.github.com/en/actions/advanced-guides/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache
Yes @Barabas5532, yours seems to be that the first run was on the ccache branch and the second run was on the master branch. master doesn't get access to the ccache branch's cache, but ccache does get access to master's cache (unless ccache already has its own cache, which it does in your case). So it is behaving as described in the docs you linked.
@J-B-Blankenship not sure from the issue description, but if you are trying to use the same cache across different PR branches, that is not allowed.
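For anyone hitting that restriction: the documented way around it is to create the cache from the PR's base branch (or the default branch), since PR runs may restore caches created there. A minimal sketch, with an illustrative branch name, path, and build step:

on:
  push:
    branches: [ master ]    # base branch of the PRs

jobs:
  seed-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # A cache saved on the base branch is restorable by PR runs
      # targeting that branch.
      - uses: actions/cache@v2
        with:
          path: ~/.gradle/caches
          key: ${{ runner.os }}-build-cache
      - run: ./gradlew assemble    # illustrative step that populates the cached path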
I have not touched the project for a while, but both projects are set up the same way. This was a pull request into master, not a commit on a separate branch trying to use master's cache. This week I will fiddle with it to try to identify the issue. Everything runs on GitHub's servers, nothing local.
https://github.com/Goooler/DemoApp/runs/5330136144?check_suite_focus=true
Same issue, my config is:
  build:
    name: Build
    strategy:
      matrix:
        os: [ ubuntu-latest, windows-latest, macos-latest ]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: 'zulu'
          java-version: 17
      - uses: actions/cache@v2
        with:
          path: |
            ~/.gradle/caches
            ~/.gradle/wrapper
          key: ${{ runner.os }}-gradle-${{ hashFiles('**/**.gradle', '**/**.gradle.kts', '**/gradle/wrapper/gradle-wrapper.properties', '**/buildSrc/src/main/kotlin/**.kt') }}
      - name: Build
        run: ./gradlew app:assemble
      - uses: actions/upload-artifact@v2
        if: matrix.os == 'ubuntu-latest'
        with:
          path: app/build/*.apk
I get the same issue both on self-hosted and GitHub-hosted runners (windows-latest in both cases).
What happens for me: the cache is restored perfectly fine if I use the same tag name (delete the old tag, make a new tag with the same name) or re-run the same job. In those cases everything works and the cache is restored.
But as soon as I make a new release with a different tag (bumping the version), even with the same cache keys, it fails to find the cache. I have a feeling the tag is somehow related, but I did not find any documentation stating this is a limitation.
Same here.
      - name: Enable Cache
        id: cache_step
        uses: actions/cache@v3
        with:
          key: ${{ runner.os }}-${{ hashFiles('yarn.lock') }}
          path: node_modules
Previous action
Post job cleanup.
/usr/bin/tar --posix --use-compress-program zstd -T0 -cf cache.tzst --exclude cache.tzst -P -C /home/runner/work/raketa-service-boilerplate/raketa-service-boilerplate --files-from manifest.txt
Cache Size: ~61 MB (63713304 B)
Cache saved successfully
Cache saved with key: Linux-ca27a667ffbf385fda6d162a93a030f18c2ee0431cd095ad8dfd1083d2a72b35
and in the next action
Run actions/cache@v3
with:
key: Linux-ca27a667ffbf385fda6d162a93a030f18c2ee0431cd095ad8dfd1083d2a72b35
path: node_modules
Cache not found for input keys: Linux-ca27a667ffbf385fda6d162a93a030f18c2ee0431cd095ad8dfd1083d2a72b35
Version 2 is working fine!
@J-B-Blankenship @budarin @Goooler if these workflows are running on the pull_request trigger, then they run on the merge branch and hence won't be able to access caches created by other PRs. Can you please check if that is the issue here?
In my case the workflow is running on pull request
on:
  pull_request:
but in my case the problem was solved by simplifying the key calculation from:
key: ${{ runner.os }}-${{ hashFiles('yarn.lock') }}
to:
key: ${{ hashFiles('yarn.lock') }}
Now it works fine.
Running through this old thread, I can see two things which probably need to be called out clearly in the documentation.
- Cache uniqueness is also determined by the path used. The same cache cannot be used if path differs across workflows/runs (see the sketch below).
- PR check runs won't share the cache unless the cache is created by the base branch of the PR.
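A sketch of the first point, based on the ${{ github.workspace }} report earlier in the thread: the key is identical, but the expanded path strings differ per runner, so the two steps never share a cache. The paths and key here are illustrative:

      # Saved on a GitHub-hosted runner with a relative path:
      - uses: actions/cache@v2
        with:
          path: node_modules
          key: Linux-deps

      # Misses on a self-hosted runner despite the identical key:
      # ${{ github.workspace }} expands to a different absolute string
      # there, and the path input is part of what identifies the cache.
      - uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/node_modules
          key: Linux-deps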
I stumbled across this, thank you for opening it. I was banging my head on the wall for a while.
This affects the ability to use what should be shared common paths. For example, I cannot use:
path: ~/something
(or $HOME/something, etc.)
if $HOME varies across runners, because it gets resolved to the full path. The whole purpose of $HOME as a variable is to abstract that away and have a path that resolves to the right place in every environment.
In that case, how should I use the cache if I want to save ~/something across multiple runners where $HOME is different?
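One workaround consistent with the fix earlier in the thread: cache a workspace-relative directory (the same string on every runner) and sync it with $HOME around the steps that use it. A sketch for Linux runners; the directory names are illustrative:

      # Relative path, so the cache is identified the same way on every
      # runner regardless of where $HOME or the workspace actually live.
      - uses: actions/cache@v3
        with:
          path: something-cache
          key: ${{ runner.os }}-something

      # Copy the restored files into $HOME before they are needed.
      - run: |
          mkdir -p something-cache ~/something
          rsync -a something-cache/ ~/something/

      # ... steps that read and update ~/something ...

      # Copy back so the post-job cache save picks up the changes.
      - run: rsync -a ~/something/ something-cache/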
The README has been updated with more details about how the cache version is computed: https://github.com/actions/cache/pull/971