Timeout abort can leave process(es) running in the background
Describe the bug
Originally I was running 0.29.2 & ran into this frequently, but since upgrading to 0.29.7 I have only run into it a few times. In summary: I have a monorepo with lots of packages & lots of tests. On a rare occasion a run would get hung / stuck and then print close timed out after 10000ms, followed by Failed to terminate worker while running [PATH_TO_TEST].
When this happens I, like many, would try to abort the run using ^C. However, when doing so Vitest seems to leave the hung Node.js process running, pegged at 100% CPU usage. I mainly noticed this issue after my MacBook's battery dropped from 100% to 9% in 2 hours & I realised a handful of Node.js processes had been left running from the night before.
I assume this is either because Vitest is not correctly cleaning up processes when the CLI gets hung, or because the process is hung and Vitest is not able to kill it.
Reproduction
Reproduction has been relatively inconsistent so far, but I was able to record the issue in a few steps:
- Run my tests
- Usually a single test gets hung & blocks the run from finishing
- Logging gets staggered
- close timed out after 10000ms gets printed to the console
- Abort the run using ^C
- The CLI will exit, but a Node.js process will remain running at 100% CPU usage until the system is restarted
If you want to clone the source and run this exactly how I am, I just open sourced the project: nurodev/untypeable
https://user-images.githubusercontent.com/4991309/227647367-e83641b1-57ca-4f95-bcae-3a45b71c96b2.mp4
System Info
System:
OS: macOS 13.0.1
CPU: (10) arm64 Apple M1 Pro
Memory: 1.41 GB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 18.12.1 - ~/Library/Caches/fnm_multishells/4286_1679694469648/bin/node
Yarn: 1.22.19 - ~/Library/Caches/fnm_multishells/4286_1679694469648/bin/yarn
npm: 9.6.0 - ~/Library/Caches/fnm_multishells/4286_1679694469648/bin/npm
Browsers:
Safari: 16.1
Used Package Manager
yarn
Validations
- [X] Follow our Code of Conduct
- [X] Read the Contributing Guidelines.
- [X] Read the docs.
- [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
- [X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
- [X] The provided reproduction is a minimal reproducible example of the bug.
I also noticed something similar. I think there was a comment in another issue that the 100% CPU usage leads to the process.exit override by execa 🤔
I am currently on my phone, so I cannot provide more information right now, but we should definitely look into it.
@NuroDev as a workaround for now you can use poolMatchGlobs here. I quickly tested it in your reproduction case and it seems to work.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    poolMatchGlobs: [
      // This test prevents the worker_thread from terminating,
      // so run it in a child_process instead
      ["**/spacex/tests/starlink.test.ts", "child_process"],
    ],
  },
});
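With this in place, only the matching test file runs inside a child_process; the remaining tests keep using the default worker_threads pool.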
This issue is kind of a duplicate of https://github.com/vitest-dev/vitest/issues/2008, but the reproduction case here is the best one so far. I can reproduce the process hang here on every run, and there are not that many test cases in total. Earlier I used trpc's repository, which ran into this issue on ~70% of the runs.
I can look into this more later. But likely something in starlink.test.ts is preventing the worker_thread from terminating when worker.terminate() is called. The first thing to check is whether there are any native Node add-ons among the dependencies using N-API directly.
I was able to reproduce this issue using just Node APIs and undici: https://github.com/nodejs/undici/issues/2026
I didn't look into their codebase that much, but there seems to be some native code: https://github.com/nodejs/undici/tree/main/deps/llhttp/src
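For reference, here is a minimal sketch of the kind of standalone worker_threads repro this describes. It is not the exact script from the linked issue, and it assumes a Node version (18.x) where the global fetch is available:

import { Worker } from "node:worker_threads";

// Spawn a worker whose only job is to kick off a fetch request
const worker = new Worker(
  'fetch("https://example.com").catch(() => {});',
  { eval: true },
);

setTimeout(async () => {
  console.log("terminating worker...");
  // On affected Node versions this promise can hang forever
  await worker.terminate();
  console.log("worker terminated"); // not reached when the bug triggers
}, 100);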
Encountered the same issue on Node 20. undici seems to be the cause: stubbing the global fetch with node-fetch resolved the issue.
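If anyone wants to try the same workaround, a minimal sketch of such a stub in a Vitest setup file might look like this (the file path is an assumption; register it via test.setupFiles):

// src/test/stub-fetch.ts (hypothetical path)
import { vi } from "vitest";
import nodeFetch from "node-fetch";

// Swap the native, undici-based fetch for node-fetch in every test
vi.stubGlobal("fetch", nodeFetch);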
I seem to have the same issue on GitHub Actions. So far no problem on Windows.
Has there been any further investigation into the root cause of this? I am running into it in Azure DevOps as well, but do not use the undici library mentioned above anywhere in the project, so I think it's more generic. I'm trying the above workaround to see if it helps, but I'm curious whether there has been any further insight into what might cause this.
I've only reported the issue to the Node.js issue board, but that's it. Note that this also happens with the native fetch in Node.js; you don't need to install undici explicitly.
I've only seen reproduction setups where fetch caused the worker_threads to get stuck. Other cases have not been reported as far as I know.
I think that I am running into this issue as well. Unfortunately for my project, I am stuck between a rock and a hard place: I will either have to live with the hanging issue until it gets fixed upstream, or downgrade to MSW 1.x. It appears that MSW 2.0 doesn't support polyfilling fetch and requires the native Node fetch API, so at least in my scenario fixing it isn't as easy as replacing Node's native fetch API with node-fetch.
This is where MSW says that they will not support polyfilling fetch.
Node's fetch doesn't get stuck when run inside node:child_process. It's only node:worker_threads which has these issues.
On Vitest you can use --pool=forks to run all tests inside node:child_process. Or, if you are using [email protected] releases, use --no-threads.
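A minimal config sketch of both options, assuming a standard vitest.config.ts (pool applies to Vitest 1.x; threads: false to earlier releases):

// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Vitest 1.x: config equivalent of the --pool=forks CLI flag
    pool: "forks",
    // On pre-1.0 Vitest, use `threads: false` (the --no-threads CLI flag) instead
  },
});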
In my case, node-fetch is already stubbed, as it is for every project using happy-dom with Vitest, since happy-dom uses node-fetch under the hood. The issue, however, is still present, so node-fetch is not the solution.
// @vitest-environment happy-dom
import { test } from "vitest";
import nodeFetch from "node-fetch";

test("fetch", () => {
  // The global fetch is happy-dom's wrapper around node-fetch,
  // not the raw node-fetch export, hence false
  console.log(fetch === nodeFetch);
  // stdout | tests/happy-dom.test.ts > fetch
  // false
});
@raegen I'm happy to look into reproduction cases where using node-fetch with --pool=threads doesn't work.
I also created a simple repo to try to reproduce this issue: https://github.com/InfiniteXyy/vitest-hang-issue
In my experiment, I used the library dompurify and encountered the hanging issue. What I am sure of is that without dompurify, tests never hang in my case.
There are also some other scenarios that I have tested, but I'm not sure if they have a direct connection to the issue:
- It is more likely to hang when test cases fail.
- Node 18/19 will hang, while it seems there are no issues with version 20+.
- Setting globals: true and using the global Vitest functions increases the possibility of hanging.
- The higher the count of tests, the higher the chance of hanging.
@InfiniteXyy I don't think that is related to this issue, as it's not using fetch. You should file a separate bug with that minimal reproduction case instead.
+1, I am using the Node.js fetch API. Most cases work well, but one of them runs into this problem. It doesn't happen 100% of the time.
I ran into this issue today, and using --pool=forks helped me resolve it. I set it in the vite.config.ts file; this might be helpful for someone.
"vitest": "1.2.1"
vite.config.ts:
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: './src/test/setup.tsx',
    // Setting pool: 'forks' prevents this issue: https://github.com/vitest-dev/vitest/issues/3077
    pool: 'forks',
  },
});
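For what it's worth, the usual trade-off is that child processes are heavier to spawn than worker threads, so forks can be somewhat slower overall, but the parent process can kill them reliably.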
Some seemingly good news from upstream: https://github.com/nodejs/undici/issues/2026#issuecomment-2000017934