Turborepo Cache issue with .gitignore'd files
Verify canary release
- [X] I verified that the issue exists in the latest Turborepo canary release.
Link to code that reproduces this issue
https://github.com/bryansoftdev/turbo-gitignore-issue
What package manager are you using / does the bug impact?
pnpm
What operating system are you using?
Mac
Which canary version will you have in your reproduction?
1.13.0
Describe the Bug
"pipeline": {
"build": {
"dependsOn": ["build-config-file"],
"inputs": ["$TURBO_DEFAULT$", "config.json"]
},
"build-config-file": {
"cache": false
}
},
When executing the build task, which depends on the build-config-file task, build incorrectly gets a cache hit even though build-config-file has just generated a new and different config.json. As a result, build uses an outdated configuration and ignores the changes made by build-config-file.
The issue appears to be that Turborepo does not pick up changes to files that are listed in .gitignore, even when they are named explicitly in inputs.
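For illustration, the generated file might look something like the following (hypothetical contents; the actual reproduction repo may write different fields), where the value changes on every run:

```json
{
  "generatedAt": "2024-03-20T10:05:00Z"
}
```

Since the contents of config.json differ between runs and the file is listed in the build task's inputs, the second `pnpm exec turbo build` should produce a different hash and miss the cache.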
Expected Behavior
The expected behavior is that the dependent task gets a cache miss whenever the upstream task generates a new config.json, even if this file is listed in .gitignore.
To Reproduce
- `pnpm install`
- `pnpm exec turbo build` (config.json is first created by build-config-file)
- `rm ./apps/app-1/config.json` (simulates the repo's initial state)
- `pnpm exec turbo build` (config.json is recreated with different file contents)
- Turborepo shows the first config.json's contents even though config.json is listed in inputs and has changed; this should be a cache miss.
Additional context
For now, I removed build-config-file from the build task's dependsOn so that I can run the two tasks individually; with that change, caching works as expected:
```sh
pnpm exec turbo build-config-file
pnpm exec turbo build
```
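The pipeline then looks roughly like this (a sketch based on the config above; the actual turbo.json in the reproduction repo may differ):

```json
"pipeline": {
  "build": {
    // build-config-file removed from dependsOn; it is now run manually first
    "inputs": ["$TURBO_DEFAULT$", "config.json"]
  },
  "build-config-file": {
    "cache": false
  }
},
```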
Hi! Thanks for the report. We're having a look at this now and will report back.
Thanks!
@bryansoftdev are you seeing the following for the cache hit on build?
`cache hit (outputs already on disk)`
Also, will your build task have outputs?
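For example, declaring an output directory on the build task would look something like this (`dist/**` is only an illustrative path, not taken from the reproduction repo):

```json
"build": {
  "dependsOn": ["build-config-file"],
  "inputs": ["$TURBO_DEFAULT$", "config.json"],
  // hypothetical output path; use whatever directory the build actually writes
  "outputs": ["dist/**"]
}
```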
Closing as we haven't heard back. If this is still an issue please re-open!