# Lock issues when running unit tests with Jest
Whenever I run my unit tests with Jest (with `--coverage`) and `ts-patch/compiler` set (live compiler), I get the following errors:
```
Loading module ts-patch/compiler failed with error: Could not acquire lock to write file. If problem persists, run ts-patch clear-cache and try again.
```
or
```
Loading module ts-patch/compiler failed with error: EPERM: operation not permitted, open '{PATH}.cache\ts-patch\locks\f38788ba0e2006162426c2f9961a166c.lock'
```
I'm not sure how to solve it; sometimes `ts-patch clear-cache` works, sometimes it doesn't.
The workaround I have for now is removing `ts-patch/compiler` from the Jest configuration and running `ts-patch install` before any Jest command.
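For reference, this is roughly what that workaround looks like with a ts-jest setup (a sketch, not the reporter's actual config; exact keys depend on the project):

```ts
// jest.config.ts: sketch of the workaround. The ts-jest "compiler" option
// that pointed at ts-patch/compiler is removed, so Jest compiles with
// plain TypeScript instead of the live patched compiler.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  transform: {
    '^.+\\.tsx?$': ['ts-jest', { /* compiler: 'ts-patch/compiler' removed */ }],
  },
};

export default config;
```

The test script then patches TypeScript up front instead, e.g. `"test": "ts-patch install && jest --coverage"` in package.json.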
Hi Victor. Thanks for the report!
I'd be glad to take a look. Can you make a reproduction repository so I can see what's happening?
I see this issue intermittently in my work repository when building all our packages on Windows. We have ~160 packages and use lerna to execute `gulp build` in each package in order (with parallelization). The `build` task calls `node root\node_modules\ts-patch\bin\tspc.js --project tsconfig-esm.json --skipLibCheck --sourceRoot root\modules\package-name`.
I realized while writing this that the issue could be because we run 2 builds in parallel:
- With TypeScript's `tsc`, which produces CommonJS for Node.js
- With ts-patch's `tspc`, which produces ESM for the browser, with some transformations
Thanks for the added detail, @SlyryD.
> I realized while writing this that the issue could be because we run 2 builds in parallel:
Interestingly, that's the problem that locks are supposed to solve.
- https://github.com/nonara/ts-patch/blob/master/projects/core/src/utils/file-utils.ts
## Analysis
> Loading module ts-patch/compiler failed with error: Could not acquire lock to write file. If problem persists, run ts-patch clear-cache and try again.
This error is most likely due to one of two reasons:
1. Process terminated forcefully before the lock is released
Locks are wrapped in a method similar to "context handlers" in Python: it uses `finally` to release the file, so even Ctrl-C (SIGINT) should usually execute the `finally` clause (sketched below).
If jest somehow force-killed the process at just the right moment, it could potentially leave an artifact behind, but that seems less likely.
I mention this only because @v-beltran mentioned that sometimes clearing cache works. If you can determine that you have lockfile artifacts that persist after the application runs, that would help me get an idea of what's going on.
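For illustration, a minimal sketch of that `finally`-based pattern (`withFileLock` is a hypothetical name, not ts-patch's actual API; see file-utils.ts linked above for the real implementation):

```ts
import fs from 'fs';

// Hypothetical "context handler"-style lock wrapper: the finally block
// releases the lock even if the callback throws or SIGINT arrives
// mid-callback, so stale lock files should be rare.
function withFileLock<T>(lockFilePath: string, fn: () => T): T {
  fs.writeFileSync(lockFilePath, String(process.pid)); // acquire
  try {
    return fn();
  } finally {
    fs.rmSync(lockFilePath, { force: true }); // always release
  }
}
```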
2. The wait time isn't long enough
We have the lockfile wait time set to 2 seconds. That seems fairly high, but it may not be enough when Jest spins up many worker processes at once. It could make sense to allow this to be configured via an env var, and maybe bump the default up a little higher.
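A sketch of what that could look like (`TSP_LOCK_WAIT_MS` is an assumed name, not an existing ts-patch setting, and this polling loop is illustrative rather than the actual implementation):

```ts
import fs from 'fs';

// Hypothetical env-var override; the current default is 2 seconds.
const lockWaitMs = Number(process.env.TSP_LOCK_WAIT_MS ?? '') || 2_000;

function waitForLockRelease(lockFilePath: string): void {
  const deadline = Date.now() + lockWaitMs;
  while (fs.existsSync(lockFilePath)) {
    if (Date.now() > deadline)
      throw new Error(
        'Could not acquire lock to write file. If problem persists, run ts-patch clear-cache and try again.'
      );
    // Synchronous ~50ms sleep (Atomics.wait is permitted on Node's main thread)
    Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, 50);
  }
}
```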
> Loading module ts-patch/compiler failed with error: EPERM: operation not permitted, open '{PATH}.cache\ts-patch\locks\f38788ba0e2006162426c2f9961a166c.lock'
This one is unusual.
@v-beltran I assume you put `{PATH}` in yourself and omitted the real path, or is that the exact copy of the error?
The main thing I can think of here is that perhaps another thread collides with us right between `waitForLockRelease` and `fs.writeFileSync`.
This would normally seem a little unusual, but since jest spawns all its workers at once, I could see it happening. I originally designed the locking to mitigate another issue where collisions were much less likely. That was before we had the live compiler, which amplifies the likelihood of race issues considerably.
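To make the window concrete: two workers can both observe the lock as free before either one creates it. One way to close that window is an atomic exclusive-create write with the `wx` flag; this is an alternative sketch, not what ts-patch currently does:

```ts
import fs from 'fs';

// The race, step by step:
//   worker A: waitForLockRelease(lock)  // sees no lock file
//   worker B: waitForLockRelease(lock)  // also sees no lock file
//   worker A: fs.writeFileSync(lock)    // creates the lock
//   worker B: fs.writeFileSync(lock)    // collides (EPERM on Windows)
//
// The 'wx' flag collapses check-then-write into one atomic step: creation
// fails with EEXIST if the file already exists.
function tryAcquireLock(lockFilePath: string): boolean {
  try {
    fs.writeFileSync(lockFilePath, String(process.pid), { flag: 'wx' });
    return true; // we created the lock
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST') return false; // held elsewhere
    throw error;
  }
}
```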
## Solutions
I think what we can do for now is:
- Increase default wait time to 4_000
- Add an env var override for wait time
- Mention the env var override in the error message
- Mitigate lock file acquisition collision (between the two calls mentioned above) 👇
Example:
```ts
let attempts = 0;
const maxAttempts = 3;

// Re-attempt up to 3x if another thread wins the race between
// waitForLockRelease() and fs.writeFileSync()
while (attempts < maxAttempts) {
  try {
    waitForLockRelease(lockFilePath);
    fs.writeFileSync(lockFilePath, '');
    break; // lock acquired
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EPERM') {
      attempts++;
      if (attempts >= maxAttempts) throw error;
    } else {
      throw error;
    }
  }
}
```
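A small refinement worth considering on top of this (a suggestion, not part of the list above): a short randomized delay between attempts, so workers that collided once don't retry in lockstep:

```ts
// Hypothetical helper: synchronous sleep via Atomics.wait, which is
// permitted on Node's main thread.
function sleepSync(ms: number): void {
  Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, ms);
}

// ...in the catch block above, after attempts++:
//   sleepSync(Math.random() * 100); // 0-100ms jitter before the next attempt
```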
Short of any further detail that might help shed light on what's happening (or a repro), this is probably the best strategy for now.
The issue doesn't appear with a persistent patch, so I'll try that out.