Should timing badges take into account beforeEach/afterEach?
My beforeEach functions are pretty slow, and sometimes even dominate test time. (I set up test fixture data and initialize an Ember app.) But the little timing badges that show up in the browser (above 50ms or so) do not include this time, they only seem to measure the time of the test functions themselves.
I'm thinking that it would be sensible for the badges to include the beforeEach/afterEach time.
(Perhaps there are also other parts where these timings are used, like timeouts -- I'm fairly new to mocha.)
What do you think?
Hmm, tough call. Maybe as a flag or something? I wouldn't want to see them all the time personally, but yeah, I see what you're saying.
I was just about to file a bug about this.
[snip]
✔ 268 tests complete (1116ms)
make test 0.96s user 0.11s system 8% cpu 13.048 total
Personally I think the beforeEach/etc. timing details are more important than the assertions. Typically I put the most time-intensive stuff in my beforeEach (setting up the fixture, DOM, whatever) and then run my assertions against that, which takes virtually no time by comparison.
+1
You could just set up a helper function that does all your setup, and then call it inside your test. This way, you explicitly test your setup, and it would be included in the time measurements.
+1 from me
@emilecantin: well, that makes the beforeEach function obsolete ;)
@andihit: Not really, before / beforeEach is for setup that is outside of the scope of your test case. It aims to provide an environment in which you can then run your test. If your setup is relevant to the success / failure / performance of your test, then it belongs IN the test...
If your setup is relevant to the success / failure / performance of your test, then it belongs IN the test...
In what case is the setup not relevant for the success/failure of the test?
Typically I put the most time intensive stuff in my beforeEach (setting up the fixture, dom, whatever) and then run my assertions against that which takes virtually no time by comparison.
Same here ;)
In what case is the setup not relevant for the success/failure of the test?
For example, when I am testing an object's methods, the instantiation of said object is not relevant; it's not what I test. Instantiation is part of another, unrelated test.
Technically it is relevant that your object is initialized when you test methods of it. But I get what you mean, it's not what you test in a particular test.
Maybe we want a second badge of each test which shows the duration of the beforeEach (and afterEach)? Or show detailed information on hover (before, beforeEach, test, afterEach, after durations)?
I think detailed information on hover would be a nice addition. While I don't think beforeEach / afterEach timings should be included in the test's timings, there's no reason not to measure them. On the command line, we could add a '--detailed' flag, or an alternate reporter.
@emilecantin yeah I agree, I think a reporter would be the cleanest route; dot and some of the others wouldn't really work. Or maybe just a --verbose variant of spec and list.
+1 If not included in test timing, just a separate timing log for the before/after functions.
In #700 this was brought up again. I think for now aggregating them would be safest. This gets a little tricky since multiple beforeEach hooks can be used even at the same level, so describing them all in the output would be very verbose unless it's opt-in.
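To illustrate the aggregation idea: several same-level hook durations (the hook titles and numbers below are made up) folded into a single figure per test, which keeps terse reporters like dot usable:

```javascript
// Two hypothetical beforeEach hooks registered at the same level,
// plus the test body's own duration.
const hooks = [
  { title: 'beforeEach: load fixtures', duration: 120 },
  { title: 'beforeEach: boot app', duration: 60 },
];
const testBodyDuration = 101;

// Aggregated: one number per test instead of a line per hook.
const aggregated =
  hooks.reduce((sum, h) => sum + h.duration, 0) + testBodyDuration;
console.log(aggregated); // 281
```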
Looks like this was never completed:
$ cat example.js
beforeEach(function(done) {
  setTimeout(done, 180);
});

it('foo', function(done) {
  setTimeout(done, 100);
});

it('bar', function(done) {
  done();
});
$ ./bin/mocha --reporter spec example.js
✓ foo (101ms)
✓ bar
2 passing (468ms)
I could come up with a PR for this. The resulting output would be:
$ ./bin/mocha --reporter spec example.js
✓ foo (281ms)
✓ bar (181ms)
2 passing (468ms)
$ ./bin/mocha --reporter spec --verbose example.js
beforeEach (181ms)
✓ foo (281ms)
beforeEach (181ms)
✓ bar (181ms)
2 passing (468ms)
To clarify, I'd probably keep the current behavior of having timeouts apply to runnables, and not aggregated test execution time.
@danielstjules It would also be nice to show the name of the hook, for when you have multiple ones.
@danielstjules go for it! :)
@danielstjules would love this!
cc @Nokel81; you might want to consider this when looking at the stats gathering.
Great idea, the aggregator should have access to this information so reporters will be able to start using it.
@boneskull @Nokel81 I created a PR https://github.com/mochajs/mocha/pull/3776 (based on https://github.com/mochajs/mocha/pull/2244) which introduces an option --time-setup to include the duration of beforeEach hooks in the duration of each test. I have found this useful for optimizing tests in my own projects.
Bump: seeing that the PR is still open, is it still being worked on? What are the alternatives?
Would be awesome to get #3776 merged, it's exactly the solution I need. My use case is CircleCI test splitting, which uses the timing data from the junit reporter, and that data is completely off if it doesn't include beforeEach.
I ran into this and was saddened to see that PR #3776 is still outstanding, so I set out to see if there was some way I could hack this behaviour into current Mocha, and this is what I came up with:
let start = performance.now()

export const mochaHooks = {
  beforeEach: async function () {
    const currentTest = this.currentTest
    const originalRun = currentTest.run

    // Wrap the test's run() so we can overwrite its duration with the
    // time elapsed since the previous test finished, which includes
    // any hooks that ran in between.
    currentTest.run = function (fn) {
      originalRun.call(currentTest, function (...args) {
        const end = performance.now()
        currentTest.duration = Math.round(end - start)
        start = end
        fn(...args)
      })
    }
  }
}
It's put into a --require file which adds a global hook. I have made it so that start is set immediately and updated after each test; this ensures all time spent between tests is included, so any before / beforeEach / after / afterEach etc. is counted in the duration.
It's kind of hackish, but works.