Fail to hash files
Description
Workflow fails with error:
The template is not valid. ... hashFiles('...') failed. Fail to hash files under directory '/Users/runner/work/...'
The same step works fine both on the previous version of the macos-15 runners and on ubuntu-24.04.
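For context, hashFiles() is typically used in cache keys; below is a minimal sketch of the kind of step affected (the path, key names, and glob are illustrative, not taken from the repro workflow):

```yaml
# Illustrative only: a typical cache step whose key uses hashFiles().
- name: Cache Maven repository
  uses: actions/cache@v4
  with:
    path: ~/.m2/repository
    # On the affected macOS image versions, evaluating this expression fails
    # with: "Fail to hash files under directory '/Users/runner/work/...'"
    key: maven-${{ runner.os }}-${{ hashFiles('**/pom.xml') }}
    restore-keys: maven-${{ runner.os }}-
```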
Platforms affected
- [ ] Azure DevOps
- [x] GitHub Actions - Standard Runners
- [ ] GitHub Actions - Larger Runners
Runner images affected
- [ ] Ubuntu 22.04
- [ ] Ubuntu 24.04
- [ ] Ubuntu Slim
- [ ] macOS 13
- [ ] macOS 13 Arm64
- [ ] macOS 14
- [ ] macOS 14 Arm64
- [x] macOS 15
- [x] macOS 15 Arm64
- [ ] macOS 26 Arm64
- [ ] Windows Server 2019
- [ ] Windows Server 2022
- [ ] Windows Server 2025
Image version and build link
Broken:
- macos-15-arm64, version 20251119.0020: https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56125544615
- macos-15 (Intel), version 20251120.0023: https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56124691970
Works:
- macos-15-arm64, version 20251104.0104: https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56124691976
- macos-15 (Intel), version 20251103.0112: https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56125544610
Is it regression?
yes
Expected behavior
hashFiles() calculates the hash of the matched files
- https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56124691976
- https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56125544610
Actual behavior
Error: The template is not valid. ... hashFiles('...') failed. Fail to hash files under directory '...'
- https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56125544615
- https://github.com/adoroszlai/ozone/actions/runs/19597747914/job/56124691970
Repro steps
See repro workflow: https://github.com/adoroszlai/ozone/actions/runs/19597747914/workflow
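For a self-contained reproduction sketch (not the linked workflow itself; the names and glob are assumptions), evaluating any hashFiles() expression on an affected image is enough:

```yaml
# Hypothetical minimal workflow; job/step names and the glob are illustrative.
name: hashFiles-repro
on: workflow_dispatch
jobs:
  hash:
    runs-on: macos-15
    steps:
      - uses: actions/checkout@v4
      - name: Evaluate hashFiles()
        # Fails during template evaluation on the affected image versions:
        # "The template is not valid. ... Fail to hash files under directory ..."
        run: echo "${{ hashFiles('**/*.xml') }}"
```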
Caused by duplicated lines in index.js: https://github.com/orgs/community/discussions/180160#discussioncomment-15047581
Same on Tahoe
Also affects macos-14
We’re running into this bug at Pion: https://github.com/pion/webrtc/pull/3275
As a workaround on one project, we just temporarily disabled caching for macOS:
https://github.com/babashka/fs/pull/170/files#diff-b803fcb7f17ed9235f1e5cb1fcd2f5d3b2838429d4368ae4c57ce4436577f03fR35-R38
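For anyone who wants the same workaround inline, here is a sketch of gating the cache step on the OS (the step name, path, and glob are illustrative):

```yaml
# Sketch of the "skip caching on macOS" workaround; details are illustrative.
- name: Cache dependencies
  # Temporarily skip caching on macOS until the image fix lands.
  if: runner.os != 'macOS'
  uses: actions/cache@v4
  with:
    path: ~/.cache/deps
    key: deps-${{ runner.os }}-${{ hashFiles('**/deps.edn') }}
```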
Experiencing the same issue.
@Alexey-Ayupov @erik-bershel Not sure who to ping about this problem. If it cannot be fixed soon, please roll back to the previous version, which is known to work.
I am also being affected by this: https://github.com/drupal/cms-launcher/actions/runs/19608154195/job/56149953137#step:7:1
This has also broken our CI/CD! Thank you in advance to whoever can get this resolved ASAP.
+1
same on arm64 macos-26: https://github.com/user4223/ticket-decoder/actions/runs/19601406315/job/56133538243
Note that some runner instances may not be affected (yet). My projects trigger two macos-15 builds in parallel (without pinning any image version), and sometimes only one of them fails while the other still produces a valid hash.
Same here! macOS 26
Same here
Same in https://github.com/swift-dns/swift-endpoint/
It appears that hashFiles() with multiple globs is also broken.
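For example, an expression like this (the patterns are illustrative) fails with the same error:

```yaml
# Illustrative multi-glob hashFiles() call; also fails on the affected images.
key: build-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}
```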
This appears to be related to https://github.com/actions/runner/commit/7df164d2c7c2f5f2207d4a74c273c3c1d183f831 @TingluoHuang
Seeing this as well.
Thank you all for the reports. We have confirmed the issue with the latest macOS image versions and have initiated a rollback to the previous versions.
All images have been rolled back to the previous stable version. Updates have been suspended pending the investigation. The issue will remain open until the problem in the upstream repository is fully resolved.
I've implemented a fix for what appears to be the same issue in the Jamulus macOS builds https://github.com/jamulussoftware/jamulus/pull/3565#issuecomment-3568870379
We've identified the cause of the issue on the service side, and the team responsible is working to resolve it. Additionally, a working temporary patch has been found on our side; it will be applied before the upcoming update to unblock image updates while the root cause is being fixed. A recurrence is not expected.
I'd like to point out that there's no need to apply patches to the user workflows - all affected runners have been restored to a working state.
To raise awareness, I'll keep this issue open until the problem on the service side is fully resolved.
UPD: As a separate note, self-hosted runners built using our code are not susceptible to this problem, as it resides outside the image.
I think this issue can now be closed?
It was mentioned by @erik-bershel https://github.com/actions/runner-images/issues/13341#issuecomment-3572431071 that further work was needed "on the service side" -- no one's posted to confirm that work has been completed.
We are awaiting confirmation.
The root cause has been eliminated. Closing as completed.