rippled
Fast base58 codec
This algorithm is about an order of magnitude faster than the existing one: roughly 10x faster for encoding and roughly 15x faster for decoding, including the double hash for the checksum. The implementation uses gcc's __int128 (a fast MSVC version will have to wait; in the meantime MSVC falls back to the slow code).
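The PR description does not include the code itself, but the core trick behind the speedup can be sketched (an illustrative sketch under my own assumptions, not the PR's actual implementation): 58^10 is the largest power of 58 that fits in 64 bits, so a number stored as big-endian 64-bit limbs can shed ten base-58 digits in a single pass of 128-by-64-bit divisions, which gcc's unsigned __int128 compiles to cheap machine operations.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only (names are made up, not from the PR):
// divide a big number, stored as base-2^64 limbs with the most
// significant limb first, by 58^10 in one pass. unsigned __int128
// (gcc/clang only) makes each 128/64 division a single cheap step.
constexpr std::uint64_t pow58_10 = 430804206899405824ULL;  // 58^10 < 2^64

// The quotient is left in `limbs`; the remainder (ten base-58
// digits' worth of value) is returned.
std::uint64_t
divRem58_10(std::vector<std::uint64_t>& limbs)
{
    unsigned __int128 rem = 0;
    for (auto& limb : limbs)
    {
        unsigned __int128 cur = (rem << 64) | limb;
        limb = static_cast<std::uint64_t>(cur / pow58_10);
        rem = cur % pow58_10;
    }
    return static_cast<std::uint64_t>(rem);
}
```

Each returned remainder then expands into ten base-58 digits with ordinary 64-bit arithmetic, so the expensive multi-precision loop runs once per ten output characters instead of once per character.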
Can a test be made to run on !_MSC_VER that ensures b58_fast::encodeBase58Token gives the same results as b58_ref::encodeBase58Token (et al.)?
@HowardHinnant Added tests in 0d7ee1122
Huh. Just FYI, I ran across a couple of MSVC intrinsics today: _udiv128 and _umul128. They are both available in MSVC 2017, which is probably the oldest MS compiler we might support. And, yeah, they are intrinsics, not data types, so the mapping would be a bit awkward.
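To illustrate the awkwardness of that mapping (a hypothetical shim I am sketching here, not code from the PR): a full 64x64-to-128-bit multiply needs a wrapper that uses unsigned __int128 on gcc/clang but calls the _umul128 intrinsic on MSVC, since MSVC exposes the operation as a function rather than as a wide integer type.

```cpp
#include <cstdint>
#if defined(_MSC_VER)
#include <intrin.h>
#endif

// Hypothetical portability shim (not from the PR): full 64x64 -> 128-bit
// multiply. Returns the low 64 bits; the high 64 bits land in *hi.
inline std::uint64_t
mul128(std::uint64_t a, std::uint64_t b, std::uint64_t* hi)
{
#if defined(_MSC_VER)
    return _umul128(a, b, hi);  // intrinsic, available since MSVC 2017
#else
    unsigned __int128 p = static_cast<unsigned __int128>(a) * b;
    *hi = static_cast<std::uint64_t>(p >> 64);
    return static_cast<std::uint64_t>(p);
#endif
}
```

A matching shim around _udiv128 would cover the division side; every 128-bit expression in the fast path would have to be rewritten in terms of such wrappers, which is why the mapping is awkward.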
If you can find a way to make this code work on all three platforms then it becomes much more likely we can remove the caching. Just a thought.
I rebased onto the latest develop and pushed a patch addressing comments.
@scottschurr FYI, you say: If you can find a way to make this code work on all three platforms then it becomes much more likely we can remove the caching. Just a thought.
If Windows were the only thing holding us back from removing caching, I'd be very comfortable removing it. We explicitly say Windows is not meant for production.
@seelabs would you be able to satisfy Scott S's request (specifically, the need for "big paragraph(s) comment describing how and why the algorithm works")?
@intelliot Yes, I'll write something up.
@scottschurr This has been on my back-burner for a long time while I finished up sidechains, but I finally wrote up a description of the algorithm. Hopefully that helps.
(As of 2023-12-01, Howard has not reviewed the current version of this PR)
- Perf testing team will talk with PR author to learn which parts of functionality this is expected to impact
- Will devise targeted perf test cases based on that information
- If the impact is generic then existing baseline test cases may be sufficient.
@sophiax851
Internal tracker: RPFC-78
Squashed and rebased onto the latest develop branch
Perf team report:
A test case was set up with 25K accounts, 50 TPS, and run for 500 seconds.
Test 1: Run comparison tests for 2000 seconds at 50 TPS of API load against one client handler (CH) server, for both encoders, using 100K accounts, and compare the results.
Test 2: Run test immediately after the 2000 seconds on the same accounts to see the cache effectiveness.
The criterion for the first test is to ensure there is no response-time regression for RPC calls if we change the encoder to fast base58.
Without cache, comparing the PR against rippled 2.0.1, the average response time for account_info improves from 0.81 ms to 0.73 ms - roughly a 10% improvement.
Comparing non-cache against cache: for rippled 2.0.1, the response time is 0.81 ms vs 0.72 ms; for the fast base58 PR, it is 0.73 ms vs 0.66 ms.
Codecov Report
Attention: Patch coverage is 78.26087%, with 60 lines in your changes missing coverage. Please review.
Project coverage is 61.65%. Comparing base (97863e0) to head (ffc2be6). Report is 1 commit behind head on develop.
Additional details and impacted files
@@ Coverage Diff @@
## develop #4327 +/- ##
===========================================
+ Coverage 61.64% 61.65% +0.01%
===========================================
Files 804 806 +2
Lines 70642 70963 +321
Branches 36534 36686 +152
===========================================
+ Hits 43545 43753 +208
- Misses 19800 19854 +54
- Partials 7297 7356 +59
Squashed and rebased. Will merge after CI runs