jsdom-worker
A jest worker process was terminated by another process
error detail: signal=SIGSEGV, exitCode=null. Operating system logs may contain more information on why this occurred
my test case:

```tsx
import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
import { cleanup, render } from '@testing-library/react'

import { Form } from '../index'

export const client = new QueryClient({
  defaultOptions: {
    queries: {
      refetchOnWindowFocus: false,
      retry: 1
    }
  }
})

afterEach(cleanup)

it('create test', () => {
  const { container } = render(
    <QueryClientProvider client={client}>
      <Form />
    </QueryClientProvider>
  )
  const name = container.querySelector('#name')
  expect(name).toBeInTheDocument()
})
```
Hi @nfwyst, did you ever figure it out? I have the same issue.
same issue here
Found this thread while researching the crash, but I think this is caused by Jest itself: https://github.com/jestjs/jest/issues/13976
+1. Critical error for a test framework.
Have the same problem.
Bump. OS logs ain't got anything.
Same issue.
same.
same
Same issue.
Same issue
```
PID 90503 received SIGSEGV for address: 0x0
0 segfault-handler.node 0x00000001119e1190 _ZL16segfault_handleriP9__siginfoPv + 296
1 libsystem_platform.dylib 0x000000018fa1ea24 _sigtramp + 56
2 node 0x0000000104512fcc _ZN4node6loaderL23ImportModuleDynamicallyEN2v85LocalINS1_7ContextEEENS2_INS1_4DataEEENS2_INS1_5ValueEEENS2_INS1_6StringEEENS2_INS1_10FixedArrayEEE + 232
3 node 0x00000001047e9a2c _ZN2v88internal7Isolate38RunHostImportModuleDynamicallyCallbackENS0_6HandleINS0_6ScriptEEENS2_INS0_6ObjectEEENS0_11MaybeHandleIS5_EE + 852
4 node 0x0000000104bcbd9c _ZN2v88internal25Runtime_DynamicImportCallEiPmPNS0_7IsolateE + 276
5 node 0x0000000104f152c4 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvInRegister_NoBuiltinExit + 100
6 node 0x0000000104faa1bc Builtins_CallRuntimeHandler + 92
7 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
8 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
9 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
10 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
11 ??? 0x0000000109edd0d8 0x0 + 4461547736
12 ??? 0x000000010a0393f0 0x0 + 4462973936
13 ??? 0x0000000109e9c108 0x0 + 4461281544
14 ??? 0x0000000109f88ba4 0x0 + 4462250916
15 ??? 0x0000000109ee1f8c 0x0 + 4461567884
16 ??? 0x0000000109ee23b8 0x0 + 4461568952
17 ??? 0x0000000109ed79c0 0x0 + 4461525440
18 ??? 0x0000000109fab148 0x0 + 4462391624
19 ??? 0x0000000109ede4d8 0x0 + 4461552856
20 ??? 0x0000000109f5bd90 0x0 + 4462067088
21 ??? 0x0000000109f781a0 0x0 + 4462182816
22 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
23 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
24 node 0x0000000104ea0198 Builtins_InterpreterEntryTrampoline + 248
25 node 0x0000000104ed1ef4 Builtins_AsyncFunctionAwaitResolveClosure + 84
26 node 0x0000000104f60738 Builtins_PromiseFulfillReactionJob + 56
27 node 0x0000000104ec3c4c Builtins_RunMicrotasks + 588
28 node 0x0000000104e9e3a4 Builtins_JSRunMicrotasksEntry + 164
29 node 0x00000001047cf9ac _ZN2v88internal12_GLOBAL__N_16InvokeEPNS0_7IsolateERKNS1_12InvokeParamsE + 2680
30 node 0x00000001047cfe9c _ZN2v88internal12_GLOBAL__N_118InvokeWithTryCatchEPNS0_7IsolateERKNS1_12InvokeParamsE + 88
31 node 0x00000001047d0078 _ZN2v88internal9Execution16TryRunMicrotasksEPNS0_7IsolateEPNS0_14MicrotaskQueueEPNS0_11MaybeHandleINS0_6ObjectEEE + 64
```
Same issue.
This crashes:

```shell
node --expose-gc ./node_modules/.bin/jest --config ./jest.config.json --no-cache --logHeapUsage --forceExit --maxWorkers=6
```

This works fine:

```shell
node --expose-gc ./node_modules/.bin/jest --config ./jest.config.json --runInBand --forceExit
```
Same issue.
Same issue.
Same issue
same issue
same issue
Same issue. This started happening when I changed my Docker image from Debian-based to Alpine; maybe this helps.
Same issue, any plans on this? Thanks
I have it as well. Jest version 29.6.0
Same issue, hard to debug. It's quite possible that one test is disrupting the others, but how do you isolate it?
I faced the same error message. In my case the Jest tests failed because their worker processes were killed by the OS OOM (out-of-memory) killer. With the --runInBand option the tests run in sequence in a single process; otherwise Jest spawns workers equal to the number of available cores minus one. If your tests use a lot of memory, that many workers can easily exhaust it.
In my case I was running inside an Alpine Docker container, and I needed privileged mode (docker run --privileged -e "container=docker" -it ...) to be able to see the dmesg output and confirm this.
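If memory pressure is the cause, newer Jest versions (29.4+) also have a `workerIdleMemoryLimit` option that recycles a worker once its heap grows past a threshold, which can keep long suites from ever reaching the OOM killer. A minimal sketch, assuming a `jest.config.js` at the project root; the specific numbers here are arbitrary examples to tune for your own machine or CI box:

```javascript
// jest.config.js — a sketch, not a drop-in fix.
module.exports = {
  // Cap parallelism below Jest's default of (cores - 1).
  maxWorkers: 2,
  // Restart a worker whose idle heap exceeds 512 MB (Jest >= 29.4).
  // Accepts a byte count, a string like '512MB', or a percentage like '50%'.
  workerIdleMemoryLimit: '512MB',
}
```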
It can also die because the Node.js process running it has insufficient heap space. You can try raising the limit to 4 GB as a start, for example: NODE_OPTIONS=--max_old_space_size=4096
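One way to confirm that the NODE_OPTIONS setting actually reaches the process is to ask V8 for its configured heap limit. A quick sketch, reusing the 4096 MB example value from above; V8 typically reports a limit slightly above it because the young generation is counted separately:

```shell
# Print the effective V8 heap limit in MB.
# Should output a number at or slightly above 4096.
NODE_OPTIONS=--max_old_space_size=4096 node -e \
  'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024))'
```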
If you are using yarn, you can also examine heap usage per test file with something like yarn node --expose-gc $(yarn bin jest) --ci --maxWorkers=1 --logHeapUsage, where $(yarn bin jest) resolves to the local Jest binary.
Had a similar issue:
A jest worker process (pid=X) was terminated by another process: signal=SIGBUS, exitCode=null. Operating system logs may contain more information on why this occurred.
Deleting node_modules and reinstalling fixed the issue.