When running the `text.test.js` benchmark, the results are very poor: the measured slowdown is several times larger than the threshold expected in the code.
```
└──( nr test:karma:bench
yarn run v1.22.19
$ cross-env PERFORMANCE=true COVERAGE=false BABEL_NO_MODULES=true karma start karma.conf.js --grep=test/benchmarks/text.test.js --single-run
START:
02 02 2024 22:44:57.598:INFO [esbuild]: Compiling...
Browserslist: caniuse-lite is outdated. Please run:
  npx update-browserslist-db@latest
  Why you should do it regularly: https://github.com/browserslist/update-db#readme
[BABEL] Note: The code generator has deoptimised the styling of /Users/wellbye/repo/lib/preact11/node_modules/lodash/lodash.js as it exceeds the max of 500KB.
02 02 2024 22:44:58.585:INFO [esbuild]: Compiling done (987ms)
02 02 2024 22:44:58.587:INFO [karma-server]: Karma v6.4.1 server started at http://localhost:9877/
02 02 2024 22:44:58.587:INFO [launcher]: Launching browsers ChromeNoSandboxHeadless with concurrency 2
02 02 2024 22:44:58.589:INFO [launcher]: Starting browser Chrome
02 02 2024 22:44:59.175:INFO [Chrome Headless 121.0.6167.139 (Mac OS 10.15.7)]: Connected on socket kpGVdZThBOXaHtdWAAAB with id 31377209

in-place text update is 26.33x slower: vanilla: 5840 kHz preactX: 221 kHz (-96%)

benchmarks
  ✖ in-place text update

Finished in 12.174 secs / 12.166 secs @ 22:45:11 GMT+0800 (China Standard Time)

SUMMARY:
✔ 0 tests completed
✖ 1 test failed

FAILED TESTS:
  benchmarks
    ✖ in-place text update
      Chrome Headless 121.0.6167.139 (Mac OS 10.15.7)
      Error: Uncaught AssertionError: expected 26.325447120758664 to be below 10 (node_modules/chai/chai.js:250)

error Command failed with exit code 1.
```
I am not sure what you are telling us in this issue, or what the goal here is 😅
Simply put, a test case failed.

To elaborate: the test expects that operating on text nodes through Preact is at most 10× slower than using the DOM API directly. However, on my M1 machine the measured slowdown is about 26×, far beyond the expected threshold. Does this indicate there are still performance issues?
Closing this, as running benchmarks locally is subject to many factors. We rely on our benchmarking tools, which run in a real browser in CI (on much weaker machines), to surface these issues.