Benchmark against similar libs
Hi, just in case you missed this:
https://github.com/modderme123/js-reactivity-benchmark
https://github.com/modderme123/reactively/blob/main/Reactive-algorithms.md#benchmarks
cc @modderme123 @ryansolid
ooh very useful thanks!
It will be interesting to compare creation/update costs, as well as GC pressure, between Signia's incremental computeds (diffing) and the "standard" hybrid push + lazy/pull approach. Good write-up, by the way :) https://signia.tldraw.dev/docs/scalability

Transactions (batching + possible rollback) are a nice feature too.

But note that Milo's benchmark only stress-tests core primitives in various graph topologies. This provides useful metrics but of course doesn't tell the full story (there's a separate UI benchmark for that): https://github.com/krausest/js-framework-benchmark
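For concreteness, here's roughly what the diffing side of that comparison looks like. This is only a sketch following the patterns described in the Signia docs; the generic parameters, the `historyLength` option, `set(value, diff)`, and `getDiffSince` as written here are my assumptions, so check the real API before copying:

```ts
import { atom, computed, isUninitialized, RESET_VALUE } from 'signia'

type SetDiff = { added: number[]; removed: number[] }

// Source atom that keeps a bounded history of diffs for its dependents.
// (Assumed signature: atom<Value, Diff>(name, init, options).)
const items = atom<Set<number>, SetDiff>('items', new Set(), { historyLength: 64 })

// "Standard" computed: rebuilds the whole derived value on any change, O(n).
const doubledFull = computed('doubledFull', () =>
  new Map([...items.value].map((n) => [n, n * 2] as const))
)

// Incremental computed: patches its previous value with the diffs recorded
// since it last ran, O(changed items) instead of O(n).
const doubledInc = computed<Map<number, number>>('doubledInc', (prev, lastComputedEpoch) => {
  const rebuild = () => new Map([...items.value].map((n) => [n, n * 2] as const))
  if (isUninitialized(prev)) return rebuild()
  const diffs = items.getDiffSince(lastComputedEpoch)
  if (diffs === RESET_VALUE) return rebuild() // diff history exhausted, start over
  const next = new Map(prev)
  for (const { added, removed } of diffs) {
    for (const n of removed) next.delete(n)
    for (const n of added) next.set(n, n * 2)
  }
  return next
})

// Writers pass the diff along with the new value, so recording history stays cheap.
function addItem(n: number) {
  items.set(new Set(items.value).add(n), { added: [n], removed: [] })
}
```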
> It will be interesting to compare creation/update costs, as well as GC pressure, between Signia's incremental computeds (diffing) and the "standard" hybrid push + lazy/pull approach.
Yeah, it would be nice to have some metrics on this. The incremental stuff mostly becomes valuable for larger collections and/or more expensive operations, and I'm sure folks would appreciate being offered some intuition about what those kinds of sizes/operations are.
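For rough intuition: a full recompute costs O(n × per-item cost), an incremental patch costs O(changed items × per-item cost) plus diff bookkeeping. A throwaway micro-timing like the one below (plain TypeScript, no signals library, all numbers made up) makes the crossover visible:

```ts
// Stand-in for a non-trivial per-item derivation.
function expensive(n: number): number {
  let x = n
  for (let i = 0; i < 1_000; i++) x = (x * 31 + 7) % 1_000_003
  return x
}

const n = 10_000
const source = Array.from({ length: n }, (_, i) => i)
let derived = source.map(expensive)

// Full recompute after a single-item change: pays for all n items.
let t0 = performance.now()
source[42] = 999
derived = source.map(expensive)
console.log('full recompute:', (performance.now() - t0).toFixed(2), 'ms')

// Incremental patch of the same change: pays for 1 item.
t0 = performance.now()
source[42] = 1000
derived[42] = expensive(source[42])
console.log('incremental patch:', (performance.now() - t0).toFixed(2), 'ms')
```

With a cheap per-item operation and a small collection, the diff bookkeeping can easily cost more than just recomputing; the incremental approach wins as n and per-item cost grow.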
> Note that Milo's benchmark only stress-tests core primitives in various graph topologies. This provides useful metrics but of course doesn't tell the full story.
Already found a significant win this morning thanks to these 😊 but in general, yeah, you're right. The microbenchmarks don't matter too much for real apps: the cost of the effects/derivations far outweighs the reactivity overhead, so the important points of comparison for pure signals libraries are things like features, DX, and integration with UI rendering.