
chore: reduce production dependencies

Open seriousme opened this issue 3 months ago • 12 comments

@robertsLando As promised, this PR reduces the number of production dependencies to a bare minimum.

This PR replaces:

  1. "end-of-stream" by native "finished" from "node:stream"
  2. "fastfall" by a local "runfall"
  3. "fastparallel": by a local "runParallel"
  4. "fastseries": by a local "runSeries"
  5. "hyperid": by "randomUUID" from 'node:crypto' , hyperid might be faster (25M ops/second vs 12M ops/second) but also creates predictable ID's and since we only used "hyperid" to generate clientID's 12M ops/second should still be enough imo)
  6. "retimer": by a local "retimer" which is nearly as fast as the original "retimer" module with a difference of 1 second on 1M invocations.
  7. "reusify": by a local "ObjectPool" (btw: reusify was only used on "Enquers" and I am not sure how much performance this actually added as V8 has optimized object creation over time, but just to be sure I put in the ObjectPool)
  8. "uuid": by "randomUUID" from 'node:crypto' as it turned out that on nodeJS the "uuidv4" of "uuid" it is an alias for "randomUUID" from 'node:crypto' anyway.

The local variants listed above are all quite small, mostly because they only need to support one variant instead of the many variants that some external modules provide. Also, over the years, some patterns have become easier to implement with modern JavaScript.
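
As an example of how small such a local helper can be with modern JavaScript, a minimal series runner might look roughly like this (illustrative only; the PR's actual runSeries may differ):

```js
// Illustrative sketch of a minimal series runner in the spirit of the local
// "runSeries" helper; not the PR's actual implementation.
function runSeries (that, tasks, arg, done) {
  let i = 0
  function next (err) {
    // stop on the first error or when all tasks have completed
    if (err || i === tasks.length) {
      done.call(that, err || null)
      return
    }
    tasks[i++].call(that, arg, next)
  }
  next()
}
```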

This PR tries to keep the code changes to a minimum. From here on, further optimizations might be possible, but I didn't want to make this PR too complex.

Kind regards, Hans

seriousme avatar Sep 02 '25 18:09 seriousme

Codecov Report

:white_check_mark: All modified and coverable lines are covered by tests. :white_check_mark: Project coverage is 99.72%. Comparing base (2c6882e) to head (3a35a63).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1045      +/-   ##
==========================================
+ Coverage   99.44%   99.72%   +0.28%     
==========================================
  Files          15       15              
  Lines        1797     1837      +40     
==========================================
+ Hits         1787     1832      +45     
+ Misses         10        5       -5     
| Flag | Coverage Δ | |
| --- | --- | --- |
| unittests | 99.72% <100.00%> (+0.28%) | :arrow_up: |

Flags with carried forward coverage won't be shown.

:umbrella: View full report in Codecov by Sentry.

codecov[bot] avatar Sep 02 '25 18:09 codecov[bot]

Automated benchmark testing suggests a more than 10% performance decrease. I will try to figure out what is causing this.

Kind regards, Hans

seriousme avatar Sep 02 '25 19:09 seriousme

It turns out the difference was actually positive instead of negative, tested on Codespaces using 4 cores.


Overall Benchmark Results

+x% is better, -x% is worse; the current threshold to fail is -10%.

| Label | Benchmark | Config | Average | Units | Percentage |
| --- | --- | --- | --- | --- | --- |
| main | sender.js | QoS=0, Cores=4 | 107096 | msg/s | 100% |
| reduce-production-dependencies | sender.js | QoS=0, Cores=4 | 124161 | msg/s | +15.93% |
| main | receiver.js | QoS=0, Cores=4 | 108229 | msg/s | 100% |
| reduce-production-dependencies | receiver.js | QoS=0, Cores=4 | 124055 | msg/s | +14.62% |
| main | sender.js | QoS=1, Cores=4 | 53724 | msg/s | 100% |
| reduce-production-dependencies | sender.js | QoS=1, Cores=4 | 56895 | msg/s | +5.90% |
| main | receiver.js | QoS=1, Cores=4 | 53727 | msg/s | 100% |
| reduce-production-dependencies | receiver.js | QoS=1, Cores=4 | 56893 | msg/s | +5.89% |
| main | pingpong.js | QoS=1, Cores=4, Score='perc95' | 41 | ms | 100% |
| reduce-production-dependencies | pingpong.js | QoS=1, Cores=4, Score='perc95' | 41 | ms | 100% |

Benchmark Results for main

| Benchmark | Config | Units | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | Round 7 | Round 8 | Round 9 | Round 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sender.js | QoS=0, Cores=4 | msg/s | 104709 | 102963 | 100781 | 116193 | 107521 | 111121 | 108203 | 98121 | 126638 | 94705 |
| receiver.js | QoS=0, Cores=4 | msg/s | 113003 | 105290 | 108320 | 102153 | 100046 | 112616 | 112741 | 107542 | 108220 | 112359 |
| sender.js | QoS=1, Cores=4 | msg/s | 50461 | 53588 | 54380 | 55006 | 52976 | 55709 | 54517 | 55805 | 51788 | 53013 |
| receiver.js | QoS=1, Cores=4 | msg/s | 50489 | 53579 | 54377 | 55014 | 52977 | 55703 | 54515 | 55809 | 51784 | 53020 |
| pingpong.js | QoS=1, Cores=4, Score='perc95' | ms | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 |

Benchmark Results for reduce-production-dependencies

| Benchmark | Config | Units | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | Round 7 | Round 8 | Round 9 | Round 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sender.js | QoS=0, Cores=4 | msg/s | 116256 | 119349 | 122598 | 124305 | 135292 | 132318 | 116559 | 117091 | 120768 | 137072 |
| receiver.js | QoS=0, Cores=4 | msg/s | 124809 | 116053 | 118338 | 131602 | 118724 | 137263 | 133331 | 119014 | 119005 | 122410 |
| sender.js | QoS=1, Cores=4 | msg/s | 56878 | 57472 | 56176 | 57224 | 56130 | 56366 | 56299 | 56725 | 58094 | 57588 |
| receiver.js | QoS=1, Cores=4 | msg/s | 56853 | 57509 | 56168 | 57195 | 56178 | 56317 | 56341 | 56680 | 58099 | 57590 |
| pingpong.js | QoS=1, Cores=4, Score='perc95' | ms | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 | 41 |

seriousme avatar Sep 02 '25 20:09 seriousme

A fix for the benchmarking is in https://github.com/moscajs/aedes/pull/1046

Kind regards, Hans

seriousme avatar Sep 03 '25 05:09 seriousme

@robertsLando

  • The ObjectPool has been removed; this made no material performance difference, and HEAD is still approx. 18% faster than MAIN.
  • timer.refresh() has been applied (I didn't know about it either); in future iterations we might be able to remove the whole retimer from lib/utils and use timer.refresh() directly (see the sketch after this list).
  • clearWills has been completely refactored. Imo it is now clearer what the code actually does. The only difference in behaviour between the version on MAIN and this version is that pipeline() can create variable batch sizes, depending on how fast wills are fetched from persistence and how fast they can be processed. In practice my guess is that the fetch from persistence will always be faster than the processing, which would result in batch sizes equal to the highWaterMark of the Writable. Hence my choice to use a fixed batch size to keep the logic simple.
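
For reference, a sketch of what using timer.refresh() directly could look like in a future iteration (illustrative only, not the code in this PR): the timeout is created once and restarted with refresh() instead of being cleared and recreated.

```js
// Illustrative only: restart an existing timeout with timer.refresh() instead of
// clearTimeout() + setTimeout(), as a possible replacement for the local retimer.
function createIdleTimer (onTimeout, ms) {
  const timer = setTimeout(onTimeout, ms)
  return {
    reschedule () {
      timer.refresh() // restarts the countdown, reusing the same timer object
    },
    clear () {
      clearTimeout(timer)
    }
  }
}
```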

Kind regards, Hans

seriousme avatar Sep 05 '25 14:09 seriousme

@robertsLando I also added a test to test/wills.js to:

  • show that the new clearWills mechanism works with a larger number of wills
  • increase test coverage

seriousme avatar Sep 06 '25 11:09 seriousme

@robertsLando I also updated handlers/subscribe.js: since completeSubscribe() was no longer called with an 'err' parameter, the if (err) branch could never be taken, which decreased coverage.

The only file without 100% coverage is now client.js, because of:

https://github.com/moscajs/aedes/blob/9b76359db76a70d5e5160cf340673644387ca57b/lib/client.js#L377-L379

which we can't test and therefore should, imo, be excluded from coverage testing.

And legacy support in:

https://github.com/moscajs/aedes/blob/9b76359db76a70d5e5160cf340673644387ca57b/lib/client.js#L169-L172

state.buffer has already been EOL since Node.js 14, see https://nodejs.org/api/deprecations.html#DEP0003

The associated drain request: https://github.com/moscajs/aedes/blob/9b76359db76a70d5e5160cf340673644387ca57b/lib/client.js#L362-L364 is never called according to coverage testing.

Calling Node.js private APIs like this.conn._writableState.getBuffer() should not be required, so I'm really wondering whether we should not just leave this to conn.destroy().
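
For illustration only (not the aedes code), the direction this suggests might look like the sketch below: let conn.destroy() handle the teardown and, if the amount of still-queued outgoing data is of interest, read the public writableLength property instead of the private _writableState.getBuffer().

```js
// Illustrative sketch: close a connection without touching private stream state.
function closeConnection (conn, done) {
  // writableLength is the public counterpart of _writableState.getBuffer().length:
  // the number of bytes (or objects) still queued for writing.
  const pendingBytes = conn.writableLength
  conn.once('close', () => done(null, pendingBytes))
  conn.destroy() // any queued outgoing data is simply dropped
}
```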

Kind regards, Hans

seriousme avatar Sep 06 '25 13:09 seriousme

FYI: with all the changes, the GitHub review comments are a bit hard to track. I think everything is in; if I missed anything, please let me know. I will try to benchmark batch() against https://github.com/mcollina/hwp/blob/main/index.js in the coming days.

seriousme avatar Sep 07 '25 18:09 seriousme

@robertsLando @mcollina The benchmark has been done, see https://github.com/moscajs/aedes/pull/1045#discussion_r2334328190. Can you please review again?

Kind regards, Hans

seriousme avatar Sep 09 '25 17:09 seriousme

@mcollina ping

robertsLando avatar Sep 24 '25 11:09 robertsLando

Just to be sure: you are all not waiting for me, right?

seriousme avatar Nov 08 '25 18:11 seriousme

@seriousme nope, I'm waiting for @mcollina's final feedback on the changes you made after his review :)

robertsLando avatar Nov 10 '25 11:11 robertsLando