[Bug]: Generating a coverage report with --runInBand, collectCoverageFrom, and a transformer can mask a failing exit code
Version
29.0.2
Steps to reproduce
- Create a new project that uses a jest transformer (e.g. ts-jest).
- Specify collectCoverageFrom in jest.config.js in a way that will invoke the transformer when collecting coverage:

jest.config.js
module.exports = {
  collectCoverageFrom: ["src/**/*.ts"],
  preset: 'ts-jest',
};

- Create a foo.test.js file that will fail to run (e.g. one containing a syntax error). Note, this does NOT need to pass through the transformer. e.g.:

foo.test.js
syntaxError!;

- Create a source file with a name that matches the collectCoverageFrom pattern and that will pass through the jest transformer (e.g. bar.ts), and write something in that file that will cause an error, e.g. a syntax error.

bar.ts
anotherSyntaxErrror!;

- Run npx jest --runInBand --coverage
- Run echo $? to see the exit code, which will be 0 (see the sketch after this list).
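For anyone scripting the check, here is a minimal sketch (not part of the original report) of the last two steps. The file name repro.ts is an assumption; it only shells out to Jest with the same flags and prints the status that echo $? would show.

repro.ts
// Runs the reproduction command and prints the exit code Jest hands back.
// Nothing here touches Jest internals; it just wraps the CLI invocation above.
import { spawnSync } from "node:child_process";

const result = spawnSync("npx", ["jest", "--runInBand", "--coverage"], {
  stdio: "inherit", // stream Jest's own output straight through
  shell: process.platform === "win32", // helps npx resolution on Windows
});

// With the failing suite above, a non-zero status is expected here,
// but the bug described in this report makes it print 0.
console.log(`jest exited with code ${result.status}`);
process.exitCode = result.status ?? 1;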
Expected behavior
I expect:
- the coverage report to be generated, and
- the exit code to be 1.
Actual behavior
The tests fail (due to the failure to run), but coverage report generation errors silently and the exit code for the entire process is, incorrectly, 0.
(This, for instance, means that CI marks tests as succeeding).
Additional context
Running npx jest --coverage without --runInBand does not cause this bug and instead renders output such as:
Running coverage on untested files...Failed to collect coverage from /Users/slifty/Maestral/Code/personal/jesttest/src/foo.ts
ERROR: Jest worker encountered 3 child process exceptions, exceeding retry limit
STACK: Error: Jest worker encountered 3 child process exceptions, exceeding retry limit
at ChildProcessWorker.initialize (/Users/slifty/Maestral/Code/personal/jesttest/node_modules/jest-worker/build/workers/ChildProcessWorker.js:211:21)
at ChildProcessWorker._onExit (/Users/slifty/Maestral/Code/personal/jesttest/node_modules/jest-worker/build/workers/ChildProcessWorker.js:396:12)
at ChildProcess.emit (node:events:513:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 4 s
Ran all test suites.
Running npx jest --runInBand --coverage only renders:
Running coverage on untested files...
and then the process exits with code 0.
Some other interesting "alternative outcomes":
- If collectCoverageFrom is not specified, then coverage is generated as expected and the process returns correct exit codes.
- If the tests are able to run, then coverage is STILL not generated and the exit code is always 1 regardless of test outcomes.
Environment
System:
OS: macOS 12.4
CPU: (8) x64 Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
Binaries:
Node: 18.9.0 - ~/.nvm/versions/node/v18.9.0/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 8.19.1 - ~/.nvm/versions/node/v18.9.0/bin/npm
npmPackages:
jest: ^29.0.2 => 29.0.2
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
I believe this is still an issue -- I can try to dig into the code base to evaluate what's causing the problem.
Same problem here when using --runInBand.
Does someone have a workaround? I can't run in parallel on our server :( And I don't want to downgrade because of other bugs.
Same problem here. This is still a bug.
I'm also hitting this now. (I know "+1 isn't helpful", but stalebot is watching the clock! 😉 )
Same issue (hi stalebot 🙄)
It has been a few months of reproduction across a bunch of folks and the issue is still marked as needing triage -- does anybody know if there is a mechanism to escalate?
Having looked at no code, here's my hopefully-not-red-herring pet theory in lieu of triage.
I noticed similar behavior setting --maxWorkers=1.
I believe the intention (I feel like this was implied in the docs somewhere) is for there to be the main "thread" plus one or more workers. I assume the trouble comes in when the main "thread" is the only one: it thinks it's a worker, and ends for whatever reason with no one left to do the clean-up, reporting, whatever.
If my imagination is in the ballpark, a solution might be to enforce the correct minimum worker count (guessing 2), and/or to ensure that when the main "thread" is working as a solitary worker, it's resilient enough to pick back up and do the main-thread work whether or not its worker work succeeded (think try/catch/finally, as in the sketch below).
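To make the try/catch/finally idea concrete, here is a purely illustrative sketch. None of these names (collectCoverageInBand, CoverageFailure, collectFromFile) exist in Jest; they only mirror the shape of the fix being suggested, not Jest's actual code.

in-band-coverage-sketch.ts
// Hypothetical: collect coverage per file, recording failures instead of
// letting one bad file silently end the run.
type CoverageFailure = { file: string; error: Error };

async function collectCoverageInBand(
  files: string[],
  collectFromFile: (file: string) => Promise<void>,
): Promise<CoverageFailure[]> {
  const failures: CoverageFailure[] = [];
  for (const file of files) {
    try {
      await collectFromFile(file);
    } catch (error) {
      failures.push({ file, error: error as Error });
    }
  }
  return failures;
}

async function runSolitaryWorker(
  files: string[],
  collectFromFile: (file: string) => Promise<void>,
): Promise<void> {
  let failures: CoverageFailure[] = [];
  try {
    failures = await collectCoverageInBand(files, collectFromFile);
  } finally {
    // The "main thread" clean-up and reporting still happen, and any
    // failure is reflected in the exit code rather than swallowed.
    for (const { file, error } of failures) {
      console.error(`Failed to collect coverage from ${file}: ${error.message}`);
    }
    if (failures.length > 0) {
      process.exitCode = 1;
    }
  }
}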
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
As far as i know, this is still an issue
Ran into this as well.
ran into this as well
I have the same issue. Hi stalebot! 👋
Reproducing this error is easy. Create several tests in NestJS that write and delete MongoDB data. When several tests run in parallel, they create strange database states and the tests fail all the time. This proves that in code coverage mode --runInBand and --maxWorkers=1 are being ignored: tests run in parallel.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
I suppose this warrants another bump.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
It's unclear to me if the maintainers of this project end up seeing issues like this, but in case they do I suppose I'll still bump.
I can confirm that the bug is still there with the latest version of jest and ts-jest as of May 2023. Also I found that --maxWorkers=2 (without runInBand) fixes the problem.
I have a test case covering this issue: https://github.com/handy-common-utils/dev-dependencies/blob/08fc16a45db1e22882f084f14c3be4acaca1e956/jest/test/fs-utils.spec.ts#LL51C7-L51C67
And the test case would fail in GitHub actions if I remove --maxWorkers=2: https://github.com/handy-common-utils/dev-dependencies/blob/08fc16a45db1e22882f084f14c3be4acaca1e956/jest/test/fixtures/fs-utils/package.json#L7
I suspect that GitHub gives the actions 2 virtual CPU cores, and that triggers the problem if you don't tell Jest to use 2 workers.
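For reference, a minimal sketch of that workaround, assuming a ts-jest setup like the one in the original report (maxWorkers is a standard Jest config option, so the flag does not have to be passed on every CLI invocation):

jest.config.js
module.exports = {
  preset: 'ts-jest',
  collectCoverageFrom: ['src/**/*.ts'],
  // Forcing at least two workers (instead of --runInBand / --maxWorkers=1)
  // reportedly lets the coverage error surface and keeps the exit code at 1.
  maxWorkers: 2,
};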
In my case it fails silently, as described, if there is at least one uncovered file that has compile errors. I noticed this while having an incomplete but unused file (a draft) locally. The workaround is obviously to make sure there are no compile errors, but an error message would be helpful of course :)
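As an illustration of that scenario (hypothetical file, not from the reproduction above), a draft like the following matches collectCoverageFrom, is imported by no test, and fails to compile:

src/draft.ts
// An unfinished file left lying around locally; the type error below is the
// kind of compile error that reportedly makes coverage collection fail silently.
export function notFinishedYet(): number {
  return "still a draft"; // type error: string is not assignable to number
}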
Same problem here. Setting maxWorkers to 2 did not help.
PASS src/components/SummaryApp.test.tsx
PASS src/components/SummaryApp/CallList.test.tsx
PASS src/components/AudioPlayer/AudioPlayer.test.tsx
Failed to collect coverage from /Users/dpeck/Foo/Box/applications/react/modern/call-review-flow/src/components/DetailApp/CallInfo/index.tsx
ERROR: Jest worker encountered 3 child process exceptions, exceeding retry limit
STACK: Error: Jest worker encountered 3 child process exceptions, exceeding retry limit
at ChildProcessWorker.initialize (/Users/dpeck/foo/Box/applications/react/modern/call-review-flow/node_modules/jest/node_modules/jest-worker/build/workers/ChildProcessWorker.js:170:21)
at ChildProcessWorker._onExit (/Users/dpeck/foo/Box/applications/react/modern/call-review-flow/node_modules/jest/node_modules/jest-worker/build/workers/ChildProcessWorker.js:254:12)
at ChildProcess.emit (node:events:527:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
Have same issue, any updates on this?
Any updates?
I don't believe anybody in the jest team has seen this issue, unfortunately. If anybody knows anybody that can help escalate it would probably be useful!
- Create a new project that uses a jest transformer (e.g. ts-jest).
Is the babel-jest transformer causing this problem as well?
I am asking this because babel-jest is the only transformer in the Jest repo. If only some other transformers are causing this issue, that could mean this bug is on their side.
Does not reproduce with babel-jest. For me it errors loud and clear. The exit code is 1.
So this is either a setup issue or a problem in the transformer you are using. Simply report the issue in its repo.
If someone is able to reproduce the problem using babel-jest, please provide full reproduction repo.
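For anyone attempting that comparison, a hypothetical babel-jest variant of the reproduction config might look like this (babel-jest is Jest's default transformer; a babel.config.js with @babel/preset-typescript would be needed for the .ts files):

jest.config.js
module.exports = {
  collectCoverageFrom: ['src/**/*.ts'],
  // Route the same files through babel-jest instead of ts-jest.
  transform: {
    '^.+\\.[jt]sx?$': 'babel-jest',
  },
};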
Thank you for taking a look at this! I'll see if I can create a reproduction case with babel-jest and otherwise will follow up in the appropriate places as advised! (I'll report back here either way)
I was not able to reproduce this with babel-jest so I've opened an issue in ts-jest directly at: https://github.com/kulshekhar/ts-jest/issues/4193
I have the same problem !!!
@FranciscoLagorio would you be up for also commenting at the ts-jest repository's issue? I believe that's the place where the fix would have to be made.