Very poor performance on basic tests
🐛 Bug Report
I've created a simple test with no configuration. You can find it here. It takes 7.6 seconds just to start Jest, and then, on a file change, it takes 5 seconds to re-run. There's clearly something wrong here.
Mocha and Chai, for example, start in around 4 seconds, but afterward, the re-run is almost instant.
It's weird that Jest reports that the test takes 14ms but only shows me the result after 5 seconds.

On top of this, if you add more basic tests, the time just seems to go up.

To Reproduce
Clone my minimal repository, yarn install and yarn test.
Expected behavior
It should be faster than this.
Link to repl or repo (highly encouraged)
https://github.com/cozmy/jest-performance
Output of npx envinfo --preset jest:
System:
OS: Windows 10
CPU: (8) x64 Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
Binaries:
Node: 8.12.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.13.0 - C:\Users\CosminCristian\AppData\Roaming\npm\yarn.CMD
npm: 6.8.0 - C:\Program Files\nodejs\npm.CMD
Your repo takes way less time for me:
- 4ms test case
- 0.7s according to jest
- 1.4s according to time
Reruns are pretty much instant.
On Windows, startup performance (of any Node process, and especially if you do a lot of IO) will always be noticeably slower. You can try upgrading the Node version to 10 or 11 (8 is quite old); I'd expect a 10-20% speedup from that. I'd try to profile the performance issue a bit, but I can't reproduce the terrible performance on my systems :/
Test: 8-10ms Total: 2.4-2.8s
Reruns are very erratic if the rerun was triggered by a file change:
Test: 2-14ms Total: 0.8-3.3s (typically on the upper end though, <2.5s is uncommon).
If it was triggered manually, it is pretty consistent with the first run. Interestingly, if I run with --no-cache then the first run is similar, but reruns on file changes are 0.8-0.9s and manual reruns are <0.2s. I think this might be because it throws away the timings of each run, which causes Jest to run in-band (no extra workers). I usually run in-band on Windows because it's nearly always the fastest option, at least in watch mode. In WSL the performance is not quite as bad (though well short of native Linux), so I typically let Jest do whatever it wants.
As a side note, my runs seem to have been quite a bit faster than yours despite this machine having pretty dated hardware. That's pretty strange in itself.
System:
OS: Windows 8.1
CPU: x64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Binaries:
Node: 8.11.3
Yarn: 1.7.0
npm: 5.6.0
npmPackages:
jest:
wanted: ^24.1.0
installed: 24.1.0
To give you a bit of background, I was trying to set up the test environment on my work laptop and I noticed that my React tests were way too slow. That's when I discovered the poor performance. I've also tested this on Linux, where it's twice as fast, but still, 3 seconds on re-run for some basic tests is too much.
It's weird that Jest reports that a test runs in 10ms when the reported total time is 5 seconds.
Even on my work laptop, running the tests in create-react-app is slow on re-run:

I guess this issue is a duplicate of https://github.com/facebook/jest/issues/7619 and https://github.com/facebook/jest/issues/6694 so most likely it can be closed.
Mocha doesn't take 4 seconds to initialise in this case. I changed your test to do require('assert').equal(1, 112) and it took less than 200ms:
fabio@fabio-thinkpad ♥ time mocha index.spec.js
jest performance
1) should be fast
0 passing (6ms)
1 failing
1) jest performance
should be fast:
AssertionError [ERR_ASSERTION]: 1 == 112
+ expected - actual
-1
+112
at Context.<anonymous> (index.spec.js:3:23)
real 0m0.173s
user 0m0.153s
sys 0m0.024s
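For reference, the modified spec would have looked roughly like this (a reconstruction based on the failure output above; the exact file contents are assumed):

```js
// index.spec.js — reconstructed from the mocha output above (assumed shape)
describe('jest performance', () => {
  it('should be fast', () => {
    require('assert').equal(1, 112); // the intentionally failing check
  });
});
```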
And, by the way, if you add another test, it doesn't take 173ms × 2 to run.
I've never seen a testing framework that takes more than 500ms to perform a single equality check. There must be some configuration option we can set here, and if there is, I would like to make the case for it to be in the defaults.
Just copied the original test 50 times to make sure. Results:
real 0m0.193s
user 0m0.157s
sys 0m0.040s
Tests run fine for me.

I tried this out on my Mac and got run times of 3-4s, so a bit closer to what you are describing. There is nothing that takes way longer than expected in the profile, though; as I said, it's just that IO performance is very OS-dependent, and most of this is require loading modules:
One thing caught my eye in the flame chart, though:
Setting up a JSDOM environment is very expensive (the middle part of the chart). Setting testEnvironment to node if the tests don't need it cut the test time in half for me. I find it a bit unfortunate that the default is jsdom. @SimenB has there been any discussion on the default testEnvironment?
Also, I noticed that watch mode is not running inBand, which I think it should in this case. After setting the test environment to node and passing --watchAll --runInBand, reruns are pretty much instant. I'll try to find out why Jest doesn't decide to runInBand automatically in this case.
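For anyone who wants to try the same combination, it looks roughly like this (a minimal sketch; dropping JSDOM is only safe if your tests never touch the DOM):

```js
// jest.config.js — minimal sketch of the tweak described above
module.exports = {
  // Use the lightweight Node environment instead of the expensive JSDOM default.
  testEnvironment: 'node',
};
```

Then start the watcher in-band, e.g. yarn jest --watchAll --runInBand.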
@SimenB has there been any discussion on the default testEnvironment?
Yes, #5815. If you use jest --init, it defaults to node as well. I'd forgotten #5815 was accepted; it would be nice to get that done for 25.
After some investigation, the reason why Jest does not run the test inBand is the following:
- On the first run, we do not know whether the one test will be fast, so we decide not to runInBand in order to keep the main thread unblocked and make sure the TTY remains responsive.
- Because this first run is not inBand, the overhead causes it to take about 2.2s on my device (even without JSDOM).
- On rerun, Jest sees that the test took longer than 1s (the current SLOW_TEST_TIME threshold), so it again decides not to runInBand.
- You will never get out of this until you force runInBand once (and do one rerun, because the first runInBand execution also has to load everything, which is slow), which causes the test time to go down to <100ms.
I don't have a good solution to this other than making the threshold more generous right now.
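To make that feedback loop concrete, the decision amounts to something like this (a simplified sketch of the heuristic as described above, not Jest's actual source; the function name, signature, and constant are illustrative):

```js
// Simplified sketch of the in-band decision described above (illustrative only).
const SLOW_TEST_TIME = 1000; // ms; stand-in for the threshold mentioned above

function shouldRunInBand(tests, timings, maxWorkers) {
  if (maxWorkers <= 1) {
    // With a single worker there is nothing to parallelize anyway.
    return true;
  }
  // Only run in-band when every known timing is comfortably fast, so a slow
  // test can never block the main thread and freeze the TTY in watch mode.
  return (
    timings.length === tests.length &&
    timings.every(time => time < SLOW_TEST_TIME)
  );
}

// First run: no timings yet -> workers -> the ~2.2s worker-overhead run gets
// recorded as the timing, so every later decision also picks workers.
```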
Because this first run is not inBand, the overhead causes it to take about 2.2s on my device (even without JSDOM)
Can you test with Node worker threads? There's still some overhead, but maybe not as much.
@SimenB no noticeable change. jasmine2() has 1.5s total time when it initially requires all the modules; it's not really the worker overhead itself. In band, the modules will already be loaded after the first run, so subsequent runs are almost instant.
(jest-circus is not significantly faster to load either)
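For reference, the worker-threads toggle that was tested lives in jest-worker (a minimal sketch, assuming jest-worker >= 24; './heavy.js' and its exported run function are hypothetical):

```js
// Toggling worker_threads in jest-worker (sketch only).
const Worker = require('jest-worker').default;

const worker = new Worker(require.resolve('./heavy.js'), {
  enableWorkerThreads: true, // use worker_threads instead of child processes
});

// jest-worker exposes the worker module's exports as async methods.
worker.run().then(() => worker.end());
```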
There's no mechanism in place to reuse workers across watch runs, is there? Would it be unsafe to do so in case there is some kind of leftover state in the worker?
Workers are always reused. Do you mean the sandbox? If so, no: we want every test to run in isolation.
However, we want to lift jasmine/circus out of the sandbox and inject some integration points. Might help?
Workers are always reused
That's not what I'm seeing, though; this is the first part of a rerun:

This is (like some other issues mentioned) definitely a regression introduced in 23.0.0. In 22.4.4, jasmine2() only takes about 600ms (less than half the time).
That should be fixed - the worker pool should be reused.
Could you bisect?
Will do, but it's not what causes the performance issue here. After quite a bit of profiling I've determined the culprit to be micromatch@3. I'll open a PR where we can discuss how to handle this.
I've determined the culprit to be micromatch@3
Really? Interesting! Looking forward to the analysis 🙂
Still no conclusion? :(
See #8032 / #8046 for the latest info
Any updates now that micromatch@4 has been released?
Seeing as it's a major, we'll probably upgrade in Jest 25 to be on the safe side
@SimenB I skimmed through all the posts @jeysal wrote. Great work!!
Questions:
- Can we put this micromatch fix at the beginning of the work for jest@25, so we can play with it and see if it covers the major problem present on Windows machines or needs more investigation?
- Do you have a next branch for the future release, or will master contain the micromatch fix?
Assuming no breaking changes, you can use yarn resolutions to test it yourself.
https://yarnpkg.com/lang/en/docs/selective-version-resolutions/
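The override itself is tiny; a minimal package.json sketch (the version range is an assumption):

```json
{
  "resolutions": {
    "micromatch": "^4.0.0"
  }
}
```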
I've been using yarn resolutions + micromatch v4 with no issues.
@jeysal well done!
A little follow-up from my side with a test repo: https://github.com/nasreddineskandrani/benchmark_jest. I took the initial code of this issue and ran the test. With micromatch@4 and jest@24 it performs near jest@22, if not better, so for me it's conclusive (check the READMEs).
I'm waiting for jest@25 to try it at work with Angular and npm (no way to use yarn for now...). We have more than 2000 unit tests :) so it'll be quick to see the difference.
@SimenB, an idea: you could add an automated test that checks the timing in your CI with the previously released version vs the current branch of Jest (or one run nightly against master). Say, if it takes more than 2x the previous time, you fail it. I don't know if such a test would be stable, but Jest needs benchmark automation :)
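Something like the following could be a starting point (a purely hypothetical sketch, not anything in Jest's CI; the commands, the pinned baseline version, and the 2x threshold are all assumptions):

```js
// benchmark-check.js — hypothetical CI timing gate (sketch only; assumes both runs pass)
const { execSync } = require('child_process');

function timeRun(command) {
  const start = Date.now();
  execSync(command, { stdio: 'ignore' }); // throws if the run fails
  return Date.now() - start;
}

const baseline = timeRun('npx jest@24.1.0 --ci'); // previous release (assumed pin)
const current = timeRun('npx jest --ci'); // build from the current branch
if (current > 2 * baseline) {
  console.error(`Timing regression: ${current}ms vs ${baseline}ms baseline`);
  process.exit(1);
}
```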
It would be nice to have the ability to see how long each test takes.
Looks like micromatch was bumped to 4.x in https://github.com/facebook/jest/pull/8852