light-my-request
migrate to class syntax (Chain, Request, Response)
Bun.js support: update usage of a deprecated attribute
Checklist
- [X] run npm run test and npm run benchmark
- [X] tests and/or benchmarks are included
- [X] documentation is changed or added (.d.ts)
- [X] commit message and code follow the Developer's Certificate of Origin and the Code of Conduct
Filenames should be lowercase and kebab-cased.
Please provide a benchmark so that we can compare potential performance gains or losses. AFAIK we use light-my-request for fastify.inject, and some people use inject in their production systems.
Is there a reason we cannot use prototypical notation?
Okay, I will change the filename, but note that you also have configValidator.js and parseURL.js.
❯ npm run benchmark
> [email protected] benchmark
> node test/benchmark.js
Request x 269,265 ops/sec ±3.62% (78 runs sampled)
Custom Request:
Request With Cookies x 205,368 ops/sec ±2.22% (84 runs sampled)
Request With Cookies n payload x 169,358 ops/sec ±4.85% (77 runs sampled)
ParseUrl x 702,748 ops/sec ±9.31% (60 runs sampled)
ParseUrl and query x 99,315 ops/sec ±4.00% (75 runs sampled)
Fastest is: ParseUrl
❯ npm run benchmark
> [email protected] benchmark
> node test/benchmark.js
Request x 238,129 ops/sec ±5.85% (71 runs sampled)
Custom Request:
Request With Cookies x 169,174 ops/sec ±4.09% (78 runs sampled)
Request With Cookies n payload x 189,661 ops/sec ±2.57% (82 runs sampled)
ParseUrl x 1,263,631 ops/sec ±1.82% (89 runs sampled)
ParseUrl and query x 99,157 ops/sec ±3.26% (81 runs sampled)
Fastest is: ParseUrl
Which of these benchmarks is default branch and which one is this PR?
Is there a reason we cannot use prototypical notation?
Bun.js implements ServerResponse as a class, which does not allow calling http.ServerResponse.call(this, ...).
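A minimal sketch of the problem (illustrative names, not light-my-request's actual code): when the parent is a class, the old prototype-style delegation throws, while class syntax delegates cleanly through super().

```javascript
// Hypothetical parent implemented as a class, like Bun's ServerResponse.
class Parent {
  constructor(name) {
    this.name = name;
  }
}

// Old prototype-style delegation, mirroring the deprecated
// http.ServerResponse.call(this, ...) pattern.
function OldChild(name) {
  // Throws: class constructors cannot be invoked without 'new'.
  Parent.call(this, name);
}

// Class syntax delegates through super() and works either way.
class NewChild extends Parent {
  constructor(name) {
    super(name);
  }
}

let failed = false;
try {
  new OldChild('req');
} catch (err) {
  failed = err instanceof TypeError;
}
console.log(failed); // true
console.log(new NewChild('req').name); // 'req'
```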
Good argument. Rename the files and make them lowercase and kebab-case, please... ;)
Which of these benchmarks is default branch and which one is this PR?
Both are PR
Below is the default branch
❯ npm run benchmark
> [email protected] benchmark
> node test/benchmark.js
Request x 286,519 ops/sec ±2.13% (88 runs sampled)
Custom Request x 19,788 ops/sec ±2.32% (84 runs sampled)
Request With Cookies x 204,017 ops/sec ±2.81% (79 runs sampled)
Request With Cookies n payload x 200,765 ops/sec ±2.78% (79 runs sampled)
ParseUrl x 1,001,307 ops/sec ±4.34% (78 runs sampled)
ParseUrl and query x 115,094 ops/sec ±3.48% (83 runs sampled)
Fastest is: ParseUrl
It seems that the Custom Request benchmark is broken. Can you have a look at it?
PR
❯ npm run benchmark
> [email protected] benchmark
> node test/benchmark.js
Request x 237,109 ops/sec ±4.16% (75 runs sampled)
Custom Request x 14,722 ops/sec ±6.88% (76 runs sampled)
Request With Cookies x 169,585 ops/sec ±4.24% (79 runs sampled)
Request With Cookies n payload x 159,198 ops/sec ±4.37% (79 runs sampled)
ParseUrl x 1,109,862 ops/sec ±4.22% (80 runs sampled)
ParseUrl and query x 113,048 ops/sec ±3.30% (84 runs sampled)
Fastest is: ParseUrl
Node 16, master:
Request x 99,483 ops/sec ±0.68% (92 runs sampled)
Custom Request x 16,117 ops/sec ±1.18% (89 runs sampled)
Request With Cookies x 85,190 ops/sec ±0.50% (93 runs sampled)
Request With Cookies n payload x 78,458 ops/sec ±0.38% (91 runs sampled)
ParseUrl x 144,678 ops/sec ±0.28% (98 runs sampled)
ParseUrl and query x 113,094 ops/sec ±0.16% (97 runs sampled)
Fastest is: ParseUrl
master:
Request x 96,878 ops/sec ±0.61% (91 runs sampled)
Custom Request x 15,772 ops/sec ±1.52% (88 runs sampled)
Request With Cookies x 85,016 ops/sec ±0.45% (95 runs sampled)
Request With Cookies n payload x 79,598 ops/sec ±0.47% (91 runs sampled)
ParseUrl x 143,810 ops/sec ±0.25% (95 runs sampled)
ParseUrl and query x 109,757 ops/sec ±0.86% (95 runs sampled)
Fastest is: ParseUrl
Node 18, master:
Request x 217,305 ops/sec ±1.82% (86 runs sampled)
Custom Request x 13,217 ops/sec ±2.60% (83 runs sampled)
Request With Cookies x 171,212 ops/sec ±1.29% (86 runs sampled)
Request With Cookies n payload x 155,925 ops/sec ±2.72% (79 runs sampled)
ParseUrl x 876,693 ops/sec ±1.39% (85 runs sampled)
ParseUrl and query x 88,747 ops/sec ±1.85% (81 runs sampled)
Fastest is: ParseUrl
master:
Request x 224,909 ops/sec ±2.56% (78 runs sampled)
Custom Request x 12,123 ops/sec ±2.13% (82 runs sampled)
Request With Cookies x 161,811 ops/sec ±2.18% (83 runs sampled)
Request With Cookies n payload x 150,949 ops/sec ±2.08% (83 runs sampled)
ParseUrl x 834,933 ops/sec ±1.32% (85 runs sampled)
ParseUrl and query x 91,471 ops/sec ±1.97% (84 runs sampled)
Fastest is: ParseUrl
Node 20, master:
Request x 256,783 ops/sec ±1.25% (88 runs sampled)
Custom Request x 16,408 ops/sec ±1.69% (87 runs sampled)
Request With Cookies x 189,893 ops/sec ±0.75% (89 runs sampled)
Request With Cookies n payload x 178,208 ops/sec ±0.83% (90 runs sampled)
ParseUrl x 998,247 ops/sec ±0.89% (91 runs sampled)
ParseUrl and query x 95,531 ops/sec ±0.66% (92 runs sampled)
Fastest is: ParseUrl
master:
Request x 246,866 ops/sec ±1.39% (91 runs sampled)
Custom Request x 16,323 ops/sec ±1.49% (84 runs sampled)
Request With Cookies x 189,552 ops/sec ±0.81% (89 runs sampled)
Request With Cookies n payload x 171,669 ops/sec ±0.78% (91 runs sampled)
ParseUrl x 968,401 ops/sec ±0.56% (91 runs sampled)
ParseUrl and query x 94,373 ops/sec ±0.43% (88 runs sampled)
Fastest is: ParseUrl
I really appreciate the effort to make more libraries work in Bun, but IMO we should strongly default to fixing things in Bun instead of in libraries whenever possible. It's our fault when it doesn't work, not the library's fault. We can make some tweaks to continue to use classes in Bun internally without breaking libraries. I also think it's important for changes like this to have no negative performance impact, and it's hard to tell from the benchmarks posted.
I'm somewhat skeptical of this change. Why does Bun need classes? All of Node.js is implemented using prototypical inheritance anyway.
Why does Bun need classes?
- we use class private properties to make it harder to leak internals to userland code. Symbols can work for this too, but not as well
- when classes are implemented in native code, the JS usage has to be constructed, or brand checks won't work
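A sketch of the first point above (illustrative class names): #private fields are invisible to reflection from userland, while Symbol-keyed properties can still be discovered.

```javascript
// Private field: only reachable through the class's own accessor.
class WithPrivateField {
  #secret = 42;
  get value() {
    return this.#secret;
  }
}

// Symbol-keyed property: hidden from casual access, but discoverable.
const hiddenKey = Symbol('secret');
class WithSymbolKey {
  constructor() {
    this[hiddenKey] = 42;
  }
}

const p = new WithPrivateField();
const s = new WithSymbolKey();

console.log(Object.getOwnPropertySymbols(p).length); // 0, nothing to find
console.log(Object.getOwnPropertySymbols(s).length); // 1, the "hidden" key leaks
console.log(p.value); // 42
```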
Why are you skeptical? Do you have any reason to use a prototype instead of extending a class? It's better for reading and looks healthier. Also, as mentioned above, it allows the parent class to use private properties.
@Jarred-Sumner I wish we could do that for Node.js without breaking a good chunk of npm.
My 2 cents is that this is something you want to fix on the Bun side, because this definitely won't be the only module with this problem.
Oh, of course this is just one of the reasons for this PR, but not the only one. I would just like to improve this library by not using deprecated syntax. Here is an issue I created a day ago: https://github.com/oven-sh/bun/issues/4415
Prototypical inheritance has not been deprecated anywhere in the Node.js codebase.
Nor has it been "deprecated" from the language. It is the language. The class keyword is sugar.
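An illustration of the claim above (hypothetical Point classes): both styles put the method on the prototype, and `class` is syntax over the same mechanism.

```javascript
// Prototype-style definition.
function ProtoPoint(x) {
  this.x = x;
}
ProtoPoint.prototype.double = function () {
  return this.x * 2;
};

// Equivalent class-syntax definition.
class ClassPoint {
  constructor(x) {
    this.x = x;
  }
  double() {
    return this.x * 2;
  }
}

console.log(new ProtoPoint(3).double()); // 6
console.log(new ClassPoint(3).double()); // 6
// In both cases the method lives on the prototype, not the instance:
console.log(Object.hasOwn(new ClassPoint(3), 'double')); // false
```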
I would argue that classes are syntax sugar for a reason, though: they are easier to read and modify than prototype-inheritance-based code. If there is no perf degradation, or a negligible one, I would be +1 on this change, as maintainability is important.
Since light-my-request is not even a runtime dependency, nanooptimizations don't matter as much.
I tend to stay away from massive changes that are non-functional: they make maintaining code harder long term (for backports and other things) and they can introduce subtle bugs.
So, is there any timeline or future plan for this PR? If there are any problems, I can break this PR into smaller parts.
How about you provide benchmarks which show that there is no big regression...
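For a quick sanity check outside the project's test/benchmark.js, a minimal hand-rolled harness (hypothetical ProtoReq/ClassReq constructors, not the library's classes) can compare rough ops/sec for the two construction styles:

```javascript
// Prototype-style constructor.
function ProtoReq(url) {
  this.url = url;
}

// Class-syntax constructor.
class ClassReq {
  constructor(url) {
    this.url = url;
  }
}

// Crude timing loop; real comparisons should use a proper harness
// with warmup and statistical sampling.
function measure(label, fn, iterations = 1_000_000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(i);
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;
  console.log(`${label}: ${Math.round(iterations / seconds).toLocaleString()} ops/sec`);
}

measure('prototype constructor', (i) => new ProtoReq('/' + i));
measure('class constructor', (i) => new ClassReq('/' + i));
```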
main
┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns) │ Margin │ Samples │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼─────────┤
│ 0 │ 'Request' │ '329,450' │ 3035.358920737361 │ '±1.85%' │ 164726 │
│ 1 │ 'Custom Request' │ '23,323' │ 42874.42253575769 │ '±3.09%' │ 11662 │
│ 2 │ 'Request With Cookies' │ '203,543' │ 4912.965682345484 │ '±1.81%' │ 101772 │
│ 3 │ 'Request With Cookies n payload' │ '209,126' │ 4781.787882428501 │ '±2.12%' │ 104564 │
│ 4 │ 'ParseUrl' │ '1,000,530' │ 999.4697091393407 │ '±0.50%' │ 500266 │
│ 5 │ 'ParseUrl and query' │ '115,917' │ 8626.819576562057 │ '±0.45%' │ 57959 │
│ 6 │ 'read request body JSON' │ '80,225' │ 12464.847208492907 │ '±1.41%' │ 40113 │
│ 7 │ 'read request body buffer' │ '118,589' │ 8432.443780856425 │ '±1.58%' │ 59295 │
│ 8 │ 'read request body readable' │ '59,789' │ 16725.446681331126 │ '±3.17%' │ 29895 │
│ 9 │ 'Response write end' │ '32,769' │ 30515.789490012376 │ '±11.38%' │ 16385 │
│ 10 │ 'Response writeHead end' │ '31,237' │ 32013.13887242967 │ '±24.60%' │ 15619 │
│ 11 │ 'base inject' │ '18,700' │ 53474.86442841651 │ '±14.01%' │ 9367 │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴─────────┘
pr
┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns) │ Margin │ Samples │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼─────────┤
│ 0 │ 'Request' │ '288,357' │ 3467.918693395486 │ '±1.80%' │ 144179 │
│ 1 │ 'Custom Request' │ '18,529' │ 53968.52044884218 │ '±3.18%' │ 9265 │
│ 2 │ 'Request With Cookies' │ '209,731' │ 4768.001047761391 │ '±2.16%' │ 104866 │
│ 3 │ 'Request With Cookies n payload' │ '209,283' │ 4778.204037601417 │ '±1.84%' │ 104642 │
│ 4 │ 'ParseUrl' │ '1,163,778' │ 859.2698052078658 │ '±0.46%' │ 581890 │
│ 5 │ 'ParseUrl and query' │ '125,576' │ 7963.26355244089 │ '±0.34%' │ 62789 │
│ 6 │ 'read request body JSON' │ '82,015' │ 12192.827821471206 │ '±1.48%' │ 41008 │
│ 7 │ 'read request body buffer' │ '107,069' │ 9339.767187469155 │ '±1.14%' │ 53535 │
│ 8 │ 'read request body readable' │ '56,603' │ 17666.900750931713 │ '±2.63%' │ 28302 │
│ 9 │ 'Response write end' │ '42,223' │ 23683.483707787367 │ '±9.79%' │ 21112 │
│ 10 │ 'Response writeHead end' │ '45,347' │ 22051.888650213063 │ '±10.48%' │ 22674 │
│ 11 │ 'base inject' │ '18,343' │ 54516.203876144486 │ '±16.42%' │ 9172 │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴─────────┘
Some of the asynchronous tests have become faster.
@sirenkovladd Request and Custom Request seem slower, though. Is that consistent? Is it clear why it's happening?
Custom Request: I'm not sure, but I believe that in master the constructor with parameters was not being called.
Request: the difference is from 3035 ns to 3467 ns, which does not seem so critical.
@sirenkovladd Can you regenerate the benchmark after the fix?
This benchmark is already from after the commit; I just hadn't pushed it before writing that comment.
But if Custom Request is important, I can go back to the old code or leave only one constructor call.
If this were production runtime code, I would say that a 2-3% perf difference is significant. For test code, admittedly, less so. I'm neutral on this PR; I would say that class syntax is generally easier to maintain, but there is a slight perf decrease.