
Questions 😅

Open · mindplay-dk opened this issue · 8 comments

First off, this looks really promising - I'm hoping this might be exactly what I've been looking for.

Okay, so, question 1:

It's not actually clear to me how this API achieves isolation of the benchmarked dependencies. Do you have to create multiple scopes, and then run your tests individually, or what's the idea?

I mean, you only get to set up the arguments for your tests once, in the constructor argument to Scope - so if you have different units you want to test, and you need to avoid loading them together to prevent cross-pollution, well, how?

import { IsoBench } from "iso-bench"
import { function1 } from "module1"
import { function2 } from "module2"

const scope = new IsoBench.Scope({}, () => [function1, function2])

scope
  .add('function1', (function1) => {
    function1()
  })
  .add('function2', (_, function2) => {
    function2()
  })
  .result()
  .run()

I mean, this doesn't effectively isolate function1 from function2, does it? They've both been loaded - so even if you're not using them both in each test, there will be cross-pollution here, or not?
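
To make the question concrete, here's roughly what I expected the intended usage to be - one Scope per module, with the import deferred into the setup callback, so each worker process only ever loads the one module it benchmarks. This is purely my guess; I don't even know if Scope accepts an async setup like this:

const scope1 = new IsoBench.Scope({}, async () => {
  // Guess: dynamic import inside the setup, so only module1 ever gets loaded
  const { function1 } = await import("module1")
  return [function1]
})

scope1
  .add('function1', (function1) => {
    function1()
  })
  .result()
  .run()

// ...and a second, completely separate Scope that only imports module2

Is that the idea, or is the isolation supposed to happen some other way?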

Question 2:

How do you get the results? The run method returns Promise<void>.

Do they just get printed on screen, or what's the idea?
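
For what it's worth, this is what I naively tried first:

const results = await scope.run()
console.log(results) // run() is typed Promise<void>, so this can only be undefined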

Question 3:

Any idea if this should work with tsx (aka typescript-execute)?

All I've managed to get thus far is a ReferenceError saying the symbol is not defined.

I noticed you're compiling ahead-of-time with tsc, and I don't fully understand the V8 wizardry you're doing with this library, so I'm not sure if this is expected to work or not?

I tried copy-pasting some examples from your test and couldn't get those to work either.


Here's a repo with a minimal preliminary iso-bench setup for the thing I'm trying to benchmark:

https://github.com/mindplay-dk/sigma/blob/try-iso-bench/benchmarks/src/json/index.ts
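
For reference, I'm invoking it roughly like this - the path is from the repo above, but the exact command is from memory, so treat it as approximate:

npx tsx benchmarks/src/json/index.ts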

When I run it, it just prints:

parseSigmaDefer - ReferenceError: import_sigma is not defined
[TESTS COMPLETED]

I tried without the async/await wrapper as well - also not sure whether that's expected to work or not? But I figured that if I want to test these two functions in isolation, it can't happen in the same Scope instance, since that forces me to create both test subjects at the same time?

If you can help me figure this out, I'd like to help improve the README - it spends a lot of time framing the problem and explaining implementation details, and while it's great to have this information somewhere, it's probably not what most people need first when they sit down to write a benchmark.

I'm trying to solve the benchmarking problem for the sigma project that I'm currently contributing to, and this might make a good first showcase for this library.

If I can get it working, I also might hop in and try to help with the library itself - it doesn't look like it's doing much in terms of statistics on the actual measurements; it's just an average, I think? I have some code lying around that could probably improve the stability of the output numbers - something along the lines of the sketch below. 🙂
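
Roughly the kind of helper I have in mind - a standalone sketch, not based on iso-bench's internals or its actual sample format:

function summarize(samplesMs: number[]) {
  const n = samplesMs.length // assumes at least 2 samples
  const mean = samplesMs.reduce((sum, x) => sum + x, 0) / n
  // Sample standard deviation (with Bessel's correction)
  const stddev = Math.sqrt(
    samplesMs.reduce((sum, x) => sum + (x - mean) ** 2, 0) / (n - 1)
  )
  // ~95% margin of error, assuming roughly normal samples
  const marginOfError = 1.96 * (stddev / Math.sqrt(n))
  const median = [...samplesMs].sort((a, b) => a - b)[Math.floor(n / 2)]
  return { mean, median, stddev, marginOfError, relativeError: marginOfError / mean }
}

Reporting something like mean ± margin of error (and maybe the median) would make it much easier to tell whether two results actually differ.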

mindplay-dk · Aug 16 '23 11:08