[Bug]: A worker process has failed to exit gracefully and has been force exited.
Describe the bug
I get intermittent failures in CI, with the message:
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.
When this happens, multiple stories from the same story file all fail.
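For anyone else hitting this: my understanding of the timer hint in that message is the pattern below (a minimal sketch, not code from my project):

```ts
// Sketch of the pattern Jest's hint describes: a pending timer keeps the
// Node event loop (and therefore the Jest worker) alive unless unref'd.
const poll = setInterval(() => {
  // periodic background work
}, 1_000);

// .unref() lets the process exit even while the interval is still scheduled.
poll.unref();
```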
To Reproduce
I wish I knew how to reproduce this. If someone can find a reliable way, that would be awesome! It seems to happen in my more complex tests, if that's any clue.
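In the meantime, the closest thing I have to a repro strategy is brute force: rerun the suite until it trips. A hypothetical sketch (the run count is arbitrary, and I'm assuming the `test-storybook` CLI is on the path via npx):

```ts
// Stress loop: run the Storybook test runner repeatedly until it fails,
// to catch the intermittent worker-exit error.
import { spawnSync } from 'node:child_process';

for (let run = 1; run <= 50; run++) {
  const result = spawnSync('npx', ['test-storybook'], { stdio: 'inherit' });
  if (result.status !== 0) {
    console.log(`Failure reproduced on run ${run}`);
    break;
  }
}
```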
System
System:
OS: macOS 14.5
CPU: (12) arm64 Apple M2 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.11.1 - ~/Library/Caches/fnm_multishells/6880_1721246554780/bin/node
npm: 10.2.4 - ~/Library/Caches/fnm_multishells/6880_1721246554780/bin/npm
pnpm: 8.14.2 - ~/Library/Caches/fnm_multishells/6880_1721246554780/bin/pnpm <----- active
Browsers:
Safari: 17.5
npmPackages:
@storybook/addon-a11y: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/addon-essentials: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/addon-interactions: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/react: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/react-vite: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/test: ^8.2.0-alpha.10 => 8.2.0-alpha.10
@storybook/test-runner: 0.19.2--canary.494.7268a7d.0 => 0.19.2--canary.494.7268a7d.0
@storybook/types: ^8.2.0-alpha.10 => 8.2.0-alpha.10
chromatic: ^11.4.0 => 11.4.0
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.2.0-alpha.10 => 8.2.0-alpha.10
Additional context
I have screenshots taken when tests fail, and when this failure happens, all the screenshots for that story file are identical. In most cases the page is blank, but I've also seen the same UI repeated across every failure. This makes me think Jest is getting disconnected from Playwright and is no longer able to change the page state.
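For context, the screenshots come from a hook in my test-runner config, roughly like this (a simplified sketch; the hook name reflects the test-runner API as I understand it, and the output path is illustrative, not my exact setup):

```ts
// .storybook/test-runner.ts — simplified sketch of how the screenshots
// are captured after each story renders.
import type { TestRunnerConfig } from '@storybook/test-runner';

const config: TestRunnerConfig = {
  async postVisit(page, context) {
    // context.id identifies the story, e.g. "button--primary"
    await page.screenshot({ path: `screenshots/${context.id}.png` });
  },
};

export default config;
```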
Same here, but mine is a relatively simple dummy story 🫠 and the failures are intermittent for me as well.
With the CI agent's resources bumped, the issue is less likely to appear, but it still shows up from time to time.
My CI node has 1 CPU and 6 GB of RAM, with export NODE_OPTIONS='--max-old-space-size=4098'.
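One way to confirm that the limit actually reaches the Jest worker processes is a quick diagnostic like this (a minimal sketch; drop it into any test or setup file):

```ts
// Diagnostic sketch: print the effective V8 old-space limit to check that
// NODE_OPTIONS=--max-old-space-size was picked up by the worker.
import { getHeapStatistics } from 'node:v8';

const limitMiB = getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log(`V8 heap size limit: ${limitMiB.toFixed(0)} MiB`);
```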