events: refactor emit to improve performance
These changes improve the overall efficiency and maintainability of the `emit()` method:
- Refactored error handling logic for clarity and efficiency
- Streamlined error inspection and stringification process
- Enhanced stack trace handling with minimal performance overhead
- Simplified listener iteration and result handling in the emit method (see the sketch below)
Fixes: #53056
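For reference, a minimal sketch of the simplified listener-iteration shape described above. This is illustrative only, not the actual `lib/events.js` diff; the single-listener fast path and the use of `Reflect.apply` are assumptions here:

```js
// Illustrative only: a simplified emit() loop, not the code from this PR.
function emit(type, ...args) {
  const handler = this._events?.[type];
  if (handler === undefined)
    return false;

  if (typeof handler === 'function') {
    // Fast path: a single listener is stored directly, so no array copy.
    Reflect.apply(handler, this, args);
  } else {
    // Snapshot the array so listeners added or removed during this emit
    // do not affect the current iteration.
    const listeners = handler.slice();
    for (let i = 0; i < listeners.length; i++)
      Reflect.apply(listeners[i], this, args);
  }
  return true;
}
```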
Codecov Report
✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 89.20%. Comparing base (08eeddf) to head (79a8f78).
⚠️ Report is 1703 commits behind head on main.
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #56741      +/-   ##
==========================================
- Coverage   89.21%   89.20%   -0.01%
==========================================
  Files         662      662
  Lines      191968   191958      -10
  Branches    36955    36952       -3
==========================================
- Hits       171269   171242      -27
- Misses      13535    13552      +17
  Partials     7164     7164
| Files with missing lines | Coverage Δ |
|---|---|
| lib/events.js | 99.58% <100.00%> (-0.17% ⬇️) |
Hello, I ran a Node.js version benchmark locally. Is this healthy? @anonrig @targos
Be aware that when doing many comparisons the risk of a false-positive result increases.
In this case, there are 88 comparisons, you can thus expect the following amount of false-positive results:
4.40 false positives, when considering a 5% risk acceptance (*, **, ***),
0.88 false positives, when considering a 1% risk acceptance (**, ***),
0.09 false positives, when considering a 0.1% risk acceptance (***)
string_decoder/string-decoder-create.js n=25000000 encoding='utf-8' at *** (improvement)
string_decoder/string-decoder.js n=2500000 chunkLen=1024 inLen=32 encoding='ascii' * (improvement)
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=1024 encoding='ascii' * (improvement)
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=32 encoding='ascii' * (improvement)
Full results: https://ctxt.io/2/AAB4UTAXEQ
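As a side note, the expected false-positive counts quoted in that output are simply the number of comparisons multiplied by the accepted risk level:

```js
// Expected false positives = comparisons × risk level (alpha).
const comparisons = 88;
for (const alpha of [0.05, 0.01, 0.001])
  console.log(`${(comparisons * alpha).toFixed(2)} expected at ${alpha * 100}% risk`);
```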
Why run the string_decoder benchmark?
cc @mertcanaltin
Hi @KunalKumar-1, I compared the PR you submitted with node-main and observed a performance drop in the event emit benchmark results. Large optimization experiments often yield such results on the first attempt, so thank you very much for this effort. Perhaps we can explore improvements through smaller, targeted optimization experiments instead.
If you want to take a look, you can compare the latest version of Node.js with your local branch by following the benchmarking guide: https://github.com/nodejs/node/blob/3ac0e28a7f190cc9593a0e7eb7fa6825e417a2f3/doc/contributing/writing-and-running-benchmarks.md?plain=1#L308
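For example, a small Node script that drives the comparison flow from that guide (`benchmark/compare.js` plus `compare.R`); the binary paths `./node-main` and `./node-pr` are placeholders for your two builds:

```js
// Runs the documented benchmark comparison between two built node binaries.
// ./node-main and ./node-pr are hypothetical paths; adjust to your setup.
const { execSync } = require('node:child_process');

execSync(
  'node benchmark/compare.js --old ./node-main --new ./node-pr events > compare-pr.csv',
  { stdio: 'inherit' }
);
execSync('Rscript benchmark/compare.R < compare-pr.csv', { stdio: 'inherit' });
```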
My benchmark results:
confidence improvement accuracy (*) (**) (***)
events/ee-emit.js listeners=1 argc=0 n=2000000 *** -38.14 % ±1.16% ±1.55% ±2.04%
events/ee-emit.js listeners=1 argc=10 n=2000000 *** -37.24 % ±2.68% ±3.60% ±4.77%
events/ee-emit.js listeners=1 argc=2 n=2000000 *** -38.92 % ±1.45% ±1.93% ±2.53%
events/ee-emit.js listeners=1 argc=4 n=2000000 *** -38.53 % ±0.72% ±0.96% ±1.26%
events/ee-emit.js listeners=10 argc=0 n=2000000 *** -6.64 % ±1.30% ±1.75% ±2.29%
events/ee-emit.js listeners=10 argc=10 n=2000000 ** -1.17 % ±0.69% ±0.92% ±1.20%
events/ee-emit.js listeners=10 argc=2 n=2000000 *** -2.60 % ±1.21% ±1.61% ±2.10%
events/ee-emit.js listeners=10 argc=4 n=2000000 *** -4.50 % ±1.35% ±1.79% ±2.34%
events/ee-emit.js listeners=5 argc=0 n=2000000 *** -17.39 % ±3.49% ±4.65% ±6.09%
events/ee-emit.js listeners=5 argc=10 n=2000000 *** -5.96 % ±2.06% ±2.76% ±3.62%
events/ee-emit.js listeners=5 argc=2 n=2000000 ** -5.54 % ±3.73% ±5.00% ±6.58%
events/ee-emit.js listeners=5 argc=4 n=2000000 *** -12.13 % ±3.90% ±5.19% ±6.75%
Be aware that when doing many comparisons the risk of a false-positive result increases.
In this case, there are 12 comparisons, you can thus expect the following amount of false-positive results:
0.60 false positives, when considering a 5% risk acceptance (*, **, ***),
0.12 false positives, when considering a 1% risk acceptance (**, ***),
0.01 false positives, when considering a 0.1% risk acceptance (***)
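For context on what these numbers measure, here is an approximate, standalone version of the hot loop that benchmarks like `events/ee-emit.js` exercise. The real file uses the repo's `common.createBenchmark` harness; this stripped-down form only mirrors the pattern:

```js
// Approximation of the events/ee-emit.js pattern:
// emit() called n times against a fixed set of listeners.
const EventEmitter = require('node:events');

const listeners = 10;
const argc = 2;
const n = 2_000_000;

const ee = new EventEmitter();
for (let i = 0; i < listeners; i++)
  ee.on('tick', function () {});

const args = Array.from({ length: argc }, (_, i) => i);

const start = process.hrtime.bigint();
for (let i = 0; i < n; i++)
  ee.emit('tick', ...args);
const elapsedNs = process.hrtime.bigint() - start;

console.log(`${((n * 1e9) / Number(elapsedNs)).toFixed(0)} emits/sec`);
```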
It is great that you took on this problem and tried to optimize this code path. Thank you very much for the effort.
I believe the approach suggested in the original issue was not an effective path for improving performance, as the benchmark results also show. I am going to close this, since it does not seem like the right direction. Thank you for the PR nevertheless!