🚀 Feature: Make parallel mode smarter
I have a couple ideas for improvements here now that we're past the MVP:
- Detect if we're only running a single file, and disable parallel mode automatically. Running a single file in parallel mode will always (AFAIK) be slower than running in serial. Add a command-line option to disable this behavior (a general-purpose, contextual `--force` might be helpful).
- Automatic optimization via duration caching (a rough sketch follows this list):
  - Cache per-file durations (the cache could live in `node_modules/.cache/mocha`, which is an unofficial convention).
    - This would be timing the `run()` call in `lib/nodejs/worker.js`, from beginning to end.
    - Maybe calculate the mean over the n most recent runs?
  - On subsequent runs, execute the slowest test files first. This will help avoid the case at the end of the run where there's only a single worker process, munching on a meaty test file, and the other workers are idle. TypeScript uses a strategy like this in their custom tooling around Mocha (look at their implementation for ideas).
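
To make the caching idea concrete, here's a rough sketch of what the duration cache and slowest-first ordering might look like. Everything here is a placeholder for illustration (the `durations.json` filename, the `recordDuration`/`sortSlowestFirst` helpers, the 5-sample window), not an actual implementation; `recordDuration` would be fed by timing the `run()` call in the worker, as described above.

```js
const fs = require('fs');
const path = require('path');

// Hypothetical cache location; node_modules/.cache/mocha is the unofficial
// convention mentioned above, "durations.json" is just an assumed filename.
const CACHE_FILE = path.join('node_modules', '.cache', 'mocha', 'durations.json');

function loadDurations() {
  try {
    return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
  } catch (err) {
    return {}; // first run, or the cache was cleared
  }
}

// Record one run's duration for a file, keeping only the n most recent
// samples so a mean can smooth out one-off slow runs.
function recordDuration(file, ms, maxSamples = 5) {
  const cache = loadDurations();
  cache[file] = (cache[file] || []).concat(ms).slice(-maxSamples);
  fs.mkdirSync(path.dirname(CACHE_FILE), {recursive: true});
  fs.writeFileSync(CACHE_FILE, JSON.stringify(cache, null, 2));
}

// Order files so the slowest (by mean cached duration) run first; files with
// no cached history are scheduled first too, since their cost is unknown.
function sortSlowestFirst(files) {
  const cache = loadDurations();
  const mean = file => {
    const samples = cache[file] || [];
    return samples.length
      ? samples.reduce((sum, s) => sum + s, 0) / samples.length
      : Number.MAX_SAFE_INTEGER;
  };
  return [...files].sort((a, b) => mean(b) - mean(a));
}
```

On a later run, the scheduler would call `sortSlowestFirst(files)` before handing files to the worker pool, so the meatiest files start while every worker is still free.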
(Note new `parallel` label)
@boneskull
Hello, I read this issue and have been working on it, and I came up with two questions.

First, where to make the change. I found that Mocha has two spots where parallel mode gets turned on:

- the `parallelMode` function (in `lib/mocha.js`)
- `parallelRun` (in `lib/run-helpers`)

Because the `parallelMode` function comes first, I think it would be better to do the check there and switch to serial when a file would run better in single mode. I'd like to know your opinion.

Second, about the threshold: would it be enough to fall back only for a single file, or maybe for two or three files as well? I'd also like to know your opinion on this!

Thank you for your time.
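
For what it's worth, here is a minimal sketch of the single-file check itself, assuming it lives wherever both the `--parallel` option and the resolved file list are known. The `shouldRunInParallel` name and the `options.force` flag are hypothetical (the latter standing in for the proposed general-purpose `--force`), not Mocha's actual API.

```js
// Hypothetical helper, not Mocha's actual code: decide whether spawning
// worker processes is worth it once the spec files have been resolved.
function shouldRunInParallel(files, options) {
  if (!options.parallel) {
    return false;
  }
  // A single spec file gains nothing from parallel mode; the worker startup
  // cost only makes it slower. Let the user override with a force flag.
  if (files.length <= 1 && !options.force) {
    return false;
  }
  return true;
}
```

Whether a check like this belongs in `parallelMode()` or closer to the run helpers probably depends on where the file list is fully resolved at that point.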
This would be really nice for performance - other test runners such as Jest use similar strategies. Since parallel runs aren't very deterministic right now, I don't think it would be perceived as a big breaking change by users, but I'm slapping the `semver-major` label on just to be safe.