
events: refactored emit to improve performance

Open KunalKumar-1 opened this issue 11 months ago • 5 comments

These changes improve the overall efficiency and maintainability of the emit method.

  • Refactored error handling logic for clarity and efficiency
  • Streamlined error inspection and stringification process
  • Enhanced stack trace handling with minimal performance overhead
  • Simplified listener iteration and result handling in emit method

Fixes: #53056

KunalKumar-1 avatar Jan 24 '25 12:01 KunalKumar-1

Codecov Report

:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 89.20%. Comparing base (08eeddf) to head (79a8f78).
:warning: Report is 1703 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #56741      +/-   ##
==========================================
- Coverage   89.21%   89.20%   -0.01%     
==========================================
  Files         662      662              
  Lines      191968   191958      -10     
  Branches    36955    36952       -3     
==========================================
- Hits       171269   171242      -27     
- Misses      13535    13552      +17     
  Partials     7164     7164              
Files with missing lines | Coverage Δ
lib/events.js            | 99.58% <100.00%> (-0.17%) :arrow_down:

... and 42 files with indirect coverage changes


codecov[bot] avatar Jan 24 '25 13:01 codecov[bot]

Hello, I ran a Node.js benchmark locally. Is this healthy? @anonrig @targos

Be aware that when doing many comparisons the risk of a false-positive result increases.
In this case, there are 88 comparisons, you can thus expect the following amount of false-positive results:
  4.40 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.88 false positives, when considering a   1% risk acceptance (**, ***),
  0.09 false positives, when considering a 0.1% risk acceptance (***)

string_decoder/string-decoder-create.js n=25000000 encoding='utf-8' at *** (improvement)

string_decoder/string-decoder.js n=2500000 chunkLen=1024 inLen=32 encoding='ascii' * (improvement)

string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=1024 encoding='ascii' * (improvement)

string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=32 encoding='ascii' * (improvement)

all result: https://ctxt.io/2/AAB4UTAXEQ

mertcanaltin avatar Jan 27 '25 10:01 mertcanaltin

Why did you run the string_decoder benchmark?

KunalKumar-1 avatar Jan 27 '25 12:01 KunalKumar-1

cc @mertcanaltin

KunalKumar-1 avatar Jun 09 '25 07:06 KunalKumar-1

Hi @KunalKumar-1, I compared the PR you submitted with node-main and observed a performance drop in the event emit benchmark results. Large optimization experiments can often yield such results in initial attempts, so thank you very much for this effort. Perhaps we can explore improvements through smaller, targeted optimization experiments instead.

If you want to take a look, you can compare the latest version of Node.js with your local branch: https://github.com/nodejs/node/blob/3ac0e28a7f190cc9593a0e7eb7fa6825e417a2f3/doc/contributing/writing-and-running-benchmarks.md?plain=1#L308

my benchmark result:

                                                 confidence improvement accuracy (*)   (**)  (***)
events/ee-emit.js listeners=1 argc=0 n=2000000          ***    -38.14 %       ±1.16% ±1.55% ±2.04%
events/ee-emit.js listeners=1 argc=10 n=2000000         ***    -37.24 %       ±2.68% ±3.60% ±4.77%
events/ee-emit.js listeners=1 argc=2 n=2000000          ***    -38.92 %       ±1.45% ±1.93% ±2.53%
events/ee-emit.js listeners=1 argc=4 n=2000000          ***    -38.53 %       ±0.72% ±0.96% ±1.26%
events/ee-emit.js listeners=10 argc=0 n=2000000         ***     -6.64 %       ±1.30% ±1.75% ±2.29%
events/ee-emit.js listeners=10 argc=10 n=2000000         **     -1.17 %       ±0.69% ±0.92% ±1.20%
events/ee-emit.js listeners=10 argc=2 n=2000000         ***     -2.60 %       ±1.21% ±1.61% ±2.10%
events/ee-emit.js listeners=10 argc=4 n=2000000         ***     -4.50 %       ±1.35% ±1.79% ±2.34%
events/ee-emit.js listeners=5 argc=0 n=2000000          ***    -17.39 %       ±3.49% ±4.65% ±6.09%
events/ee-emit.js listeners=5 argc=10 n=2000000         ***     -5.96 %       ±2.06% ±2.76% ±3.62%
events/ee-emit.js listeners=5 argc=2 n=2000000           **     -5.54 %       ±3.73% ±5.00% ±6.58%
events/ee-emit.js listeners=5 argc=4 n=2000000          ***    -12.13 %       ±3.90% ±5.19% ±6.75%

Be aware that when doing many comparisons the risk of a false-positive result increases.
In this case, there are 12 comparisons, you can thus expect the following amount of false-positive results:
  0.60 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.12 false positives, when considering a   1% risk acceptance (**, ***),
  0.01 false positives, when considering a 0.1% risk acceptance (***)
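The false-positive counts quoted above are just the number of comparisons multiplied by each risk-acceptance level, which is easy to verify:

```javascript
// Expected false positives = (number of comparisons) x (risk acceptance).
const comparisons = 12;
const expected = [0.05, 0.01, 0.001].map(
  (alpha) => (comparisons * alpha).toFixed(2),
);
console.log(expected); // [ '0.60', '0.12', '0.01' ]
```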

It is a great effort to take care of this problem and optimize this place. Thank you very much for showing this effort.

mertcanaltin avatar Jun 13 '25 15:06 mertcanaltin

I believe the original issue did not point to an effective path for improving performance, as the benchmark results also show. I am going to close this, since this does not seem like the right approach. Thank you for the PR nevertheless!

BridgeAR avatar Sep 12 '25 13:09 BridgeAR