feat: add hooks in bench mode
Description
- This PR allows hooks to be used when running `vitest bench`.
- This is essentially an updated fork of #5076.
- Closes #5075.
- Creating this as we'd like to use hooks in benchmarks in https://github.com/ariakit/ariakit/pull/4415
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
- [x] It's really useful if your PR references an issue where it is discussed ahead of time. If the feature is substantial or introduces breaking changes without a discussion, the PR might be closed.
- [x] Ideally, include a test that fails without this PR but passes with it.
- [x] Please don't make changes to `pnpm-lock.yaml` unless you introduce a new test example.
Tests
- [x] Run the tests with `pnpm test:ci`.
Documentation
- [ ] If you introduce new functionality, document it. You can run the documentation with the `pnpm run docs` command.
Changesets
- [x] Changes in the changelog are generated from the PR name. Please make sure that it explains your changes in an understandable manner. Please prefix changeset messages with `feat:`, `fix:`, `perf:`, `docs:`, or `chore:`.
Deploy Preview for vitest-dev ready!
Built without sensitive environment variables
| Name | Link |
|---|---|
| Latest commit | e621d3affac45afb67496b016dac06c26081ae0d |
| Latest deploy log | https://app.netlify.com/sites/vitest-dev/deploys/67bacc987853ef00086925b2 |
| Deploy Preview | https://deploy-preview-7541--vitest-dev.netlify.app |
I spoke with a team member on Discord and they mentioned that Windows tests are flaky, even on main. That's why this is marked as ready for review.
Is this still in limbo with flaky Windows tests? My team is chomping at the bit to start using this. 🥺
I'll update the branch and see if there are still issues.
Just to be clear, is this unrelated to https://tinylibs.github.io/tinybench/interfaces/FnOptions.html#beforeeach and https://tinylibs.github.io/tinybench/interfaces/FnOptions.html#aftereach, which allow running code in every iteration that is not included in the total time?
The FnOptions are crucial to allow measuring anything that has a large setup time, like setting up a database or a file system before each iteration.
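To illustrate the semantics being discussed: tinybench's documented per-iteration `beforeEach`/`afterEach` hooks run around every iteration, but their duration is excluded from the measured time. Below is a minimal self-contained sketch of that behavior; `runTask` is a hypothetical helper written for this comment, not tinybench's actual implementation.

```typescript
// Sketch of tinybench's FnOptions semantics: beforeEach/afterEach run on
// every iteration, but only the benchmarked fn contributes to measured time.
// `runTask` is a hypothetical illustration helper, not tinybench code.
interface FnOptions {
  beforeEach?: () => void | Promise<void>;
  afterEach?: () => void | Promise<void>;
}

async function runTask(
  fn: () => void | Promise<void>,
  iterations: number,
  opts: FnOptions = {},
): Promise<number> {
  let measuredMs = 0;
  for (let i = 0; i < iterations; i++) {
    await opts.beforeEach?.(); // expensive setup, excluded from timing
    const start = performance.now();
    await fn();
    measuredMs += performance.now() - start; // only fn counts
    await opts.afterEach?.(); // cleanup, also excluded from timing
  }
  return measuredMs;
}
```

The open question in this thread is whether Vitest's suite-level hooks would run inside or outside that measured window.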
So you're concerned that, in the current implementation, our Vitest hooks are part of the measured execution time? I think they would be in the current state, and I don't believe that's what's intended. I can fix that.
I was actually more concerned that we are not exposing the tinybench setup/teardown and how they can be exposed in a consistent manner to vitest bench tasks.
@arv They're already exposed, I believe, as the second parameter in a Vitest benchmark; Vitest just forwards the object to tinybench.
Do you have a recommendation of how you'd imagine they should be exposed?
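For reference, here is the rough shape of the options object being discussed. The field names follow tinybench's documented options (`time`, `iterations`, `beforeEach`, `afterEach`); whether Vitest forwards all of them unchanged to tinybench is exactly the question in this thread, so treat this as a sketch, not a statement of Vitest's API.

```typescript
// Sketch of an options object a `bench(name, fn, options)` call might
// forward to tinybench. Names follow tinybench's documented options;
// this is an illustration, not a verified description of Vitest's API.
interface BenchForwardedOptions {
  time?: number;       // minimum run time in ms
  iterations?: number; // minimum iteration count
  beforeEach?: () => void | Promise<void>; // per-iteration setup, untimed
  afterEach?: () => void | Promise<void>;  // per-iteration teardown, untimed
}

const opts: BenchForwardedOptions = {
  time: 500,
  beforeEach: async () => {
    // e.g. reset a database or temp file system before every iteration
  },
};
```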
Benchmarks are being reimagined, so I'll close this.
@waynevanson is there a PR/issue I can follow?
Yes @arv, github.com/vitest-dev/vitest/discussions/7850
I've also created this package, currently in development, to run tests as benchmarks: https://github.com/waynevanson/vitest-runner-benchmark