Reporter data is bloated compared to v3.1.x
Describe the bug
Hey.
We found that versions newer than 3.1.4 produce much larger reporter data, which causes some reporters to fail.
On 3.2.4 this caused the html reporter to fail (RangeError: Invalid string length), and on 4.0.15 the blob reporter failed for the same reason.
In the repro we only see a small increase, but in large projects (we have ~10,000 tests across ~15 projects) the difference can be as high as 20x (this is not a typo: 20 times larger).
When sharding, the blob reporter did finish successfully, but merging failed silently (we found that the html reporter swallows the exception).
Reproduction
Blob size with v3.1.4 (stackblitz):
Blob size with v4.0.15 (stackblitz):
System Info
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 0 Bytes / 0 Bytes
Shell: 1.0 - /bin/jsh
Binaries:
Node: 20.19.1 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.8.2 - /usr/local/bin/npm
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@vitest/ui: 4.0.15 => 4.0.15
vite: latest => 6.4.1
vitest: 4.0.15 => 4.0.15
Used Package Manager
npm
Validations
- [x] Follow our Code of Conduct
- [x] Read the Contributing Guidelines.
- [x] Read the docs.
- [x] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
- [x] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
- [x] The provided reproduction is a minimal reproducible example of the bug.
Comparing the reports on the minimal repro, I think the only difference is the metrics from import durations:
- https://github.com/vitest-dev/vitest/issues/8026
Those are collected always. There is no way to opt-out even if you don't use that data for anything.
Can you create a reproduction that demonstrates the reporter crash? I guess that can happen in v3 too if the results are just too large.
Thanks for getting back on this.
Those are collected always. There is no way to opt-out even if you don't use that data for anything.
But can they be omitted when creating the report? (I tried editing the reporter inside node_modules, which helped.)
Can you create a reproduction that demonstrates the reporter crash? I guess that can happen in v3 too if the results are just too large.
That would be tough, since it requires a giant test suite... I'm not sure how to approach it. The failure is due to large data coming into reporters that try to stringify it. We noticed this fails similarly in v3.2.4, but there only the html reporter fails, not the blob one.
I made artificial test suites to show the blob JSON size increase between v3.1.4 and v4.0.15: https://github.com/hi-ogawa/vitest-9216-repro The increase seems significant when there are many dependencies.
I'm not sure if there's room for optimization. Otherwise, we need a way to opt out of certain data at a certain stage: either don't track some data internally at all, or exclude some data at the reporter level before serializing the raw result into a huge string.
Marking this as a reporter bug due to the crash. But something should be done about the metrics collected by test runners too, if possible.
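The reporter-level exclusion idea above could be sketched as a post-processing step before serialization. This is a minimal sketch, not Vitest's actual API: the report shape and the `importDurations` field name are assumptions about what the bulky per-module metrics look like.

```javascript
// Sketch: recursively strip bulky metric fields from a report object
// before it gets stringified. `importDurations` is an assumed field
// name, not a documented part of Vitest's blob format.
function pruneReport(node, keysToDrop = ["importDurations"]) {
  if (Array.isArray(node)) {
    return node.map((item) => pruneReport(item, keysToDrop));
  }
  if (node && typeof node === "object") {
    const out = {};
    for (const [key, value] of Object.entries(node)) {
      if (keysToDrop.includes(key)) continue; // drop the bulky metrics
      out[key] = pruneReport(value, keysToDrop);
    }
    return out;
  }
  return node;
}

// Illustrative report shape (hypothetical):
const report = {
  files: [
    {
      name: "a.test.ts",
      result: "pass",
      importDurations: { "dep-1": 12, "dep-2": 30 },
    },
  ],
};
const pruned = pruneReport(report);
console.log(JSON.stringify(pruned));
// importDurations is gone; only the small summary fields remain
```

An opt-out flag in the reporter options could gate a step like this, so projects that don't consume import-duration metrics never pay their serialization cost.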
Amazing repro @hi-ogawa! I can confirm we have a dependencies problem. We're working on breaking a monolith into packages.
@idanen In your large project, do you generally deal with memory issues on Vitest (other than new reporter issues specifically)? For example, do you set --max-old-space-size on your CI?
For context, with some streaming technique I was able to avoid RangeError: Invalid string length, but I saw an OOM error instead, so I'm wondering whether I can assume the default memory limit is raised on a large project of this scale.
Also, I'd like to know the original blob.json size before you hit this issue.
@hi-ogawa yes, we do. We set this flag to 16g on our CI, and with newer versions of Vitest we still hit OOM. Regarding the size: we run on 2 shards, so we have 2 blob files, each ~500 MB (on v3.1.4 each is ~15 MB).