8355177: Speed up StringBuilder::append(char[]) via Unsafe::copyMemory
In BufferedReader.readLine and other similar scenarios, StringBuilder.append(char[]) is used to build the string.
For these scenarios, we can use Unsafe.copyMemory instead of a char-by-char copy loop to improve speed.
@RogerRiggs completed the optimization for the LATIN1 coder in PR #24967. This PR completes the optimization for the UTF16 coder.
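For illustration, a minimal sketch of the bulk-copy idea, assuming code living inside java.base (jdk.internal.misc.Unsafe is not accessible to ordinary applications); the helper name is made up, and the trick relies on StringUTF16 storing chars in native byte order:

```java
import jdk.internal.misc.Unsafe;

// Sketch only: bulk-copy a char[] into a StringBuilder's UTF-16 byte[]
// value array (2 bytes per char) instead of storing chars one by one.
class Utf16BulkCopy {
    private static final Unsafe U = Unsafe.getUnsafe();

    // Copy len chars from src[srcOff..] into the UTF-16 byte array 'value'
    // starting at char index 'index'. This works because StringUTF16 lays
    // out chars in native byte order, matching the in-memory layout of char[].
    static void putChars(byte[] value, int index, char[] src, int srcOff, int len) {
        U.copyMemory(src,
                     Unsafe.ARRAY_CHAR_BASE_OFFSET + ((long) srcOff << 1),
                     value,
                     Unsafe.ARRAY_BYTE_BASE_OFFSET + ((long) index << 1),
                     (long) len << 1);
    }
}
```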
Progress
- [x] Change must not contain extraneous whitespace
- [x] Commit message must refer to an issue
- [ ] Change must be properly reviewed (2 reviews required, with at least 1 Reviewer, 1 Author)
Issue
- JDK-8355177: Speed up StringBuilder::append(char[]) via Unsafe::copyMemory (Enhancement - P4)
Reviewers
- Roger Riggs (@RogerRiggs - Reviewer) 🔄 Re-review required (review applies to 18463718)
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/24773/head:pull/24773
$ git checkout pull/24773
Update a local copy of the PR:
$ git checkout pull/24773
$ git pull https://git.openjdk.org/jdk.git pull/24773/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 24773
View PR using the GUI difftool:
$ git pr show -t 24773
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/24773.diff
👋 Welcome back swen! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.
@wenshao This change now passes all automated pre-integration checks.
ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.
After integration, the commit message for the final commit will be:
8355177: Speed up StringBuilder::append(char[]) via Unsafe::copyMemory
Reviewed-by: rriggs, rgiulietti
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.
At the time when this comment was updated there had been 407 new commits pushed to the master branch:
- c239c0ab00196da8c7c5f6099c8189a778874588: 8362564: hotspot/jtreg/compiler/c2/TestLWLockingCodeGen.java fails on static JDK on x86_64 with AVX instruction extensions
- 0226c0298f5398c185db3df30ad35ee6022aab1b: 8364004: Expose VMError::controlledCrash via Whitebox
- 965b68107ffe1c1c988d4faf6d6742629407451b: 8358586: ZGC: Combine ZAllocator and ZObjectAllocator
- ... and 404 more: https://git.openjdk.org/jdk/compare/509105761492ced0ecdc91aae464dcd016e2a4d7...master
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.
➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.
@wenshao The following label will be automatically applied to this pull request:
core-libs
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Below are the performance numbers on a MacBook Pro M1 Max, showing a 112.70% speed increase when coder = LATIN1 and a 44.19% speed increase when coder = UTF16.
```shell
git remote add wenshao git@github.com:wenshao/jdk.git
git fetch wenshao

# baseline
git checkout ee356d3af177877e2702db08a3b55d170a7e454c
make test TEST="micro:java.lang.StringBuilders.appendWithCharArray"

# current
git checkout cd5137097b4a7be370cf60df9aa5000203ea99c0
make test TEST="micro:java.lang.StringBuilders.appendWithCharArray"
```
Performance Numbers

```diff
-Benchmark                                 Mode  Cnt   Score   Error  Units  (baseline ee356d3af17)
-StringBuilders.appendWithCharArrayLatin1  avgt   15  33.039 ± 0.059  ns/op
-StringBuilders.appendWithCharArrayUTF16   avgt   15  19.977 ± 0.054  ns/op
+Benchmark                                 Mode  Cnt   Score   Error  Units  (current cd5137097b4)
+StringBuilders.appendWithCharArrayLatin1  avgt   15  15.533 ± 0.039  ns/op  +112.70%
+StringBuilders.appendWithCharArrayUTF16   avgt   15  13.868 ± 0.053  ns/op  +44.19%
```
/reviewers 2
@AlanBateman The total number of required reviews for this PR (including the jcheck configuration and the last /reviewers command) is now set to 2 (with at least 1 Reviewer, 1 Author).
This might be helpful combined with #21730.
That implies creating a copy of the chars:
```java
private final void appendChars(CharSequence s, int off, int end) {
    if (isLatin1()) {
        byte[] val = this.value;
        // ----- Begin of Experimental Section -----
        char[] ca = new char[end - off];
        s.getChars(off, end, ca, 0);
        int compressed = StringUTF16.compress(ca, 0, val, count, end - off);
        count += compressed;
        off += compressed;
        // ----- End of Experimental Section -----
        for (int i = off, j = count; i < end; i++) {
            char c = s.charAt(i);
            if (StringLatin1.canEncode(c)) {
                val[j++] = (byte) c;
            } else {
                count = j;
                inflate();
                // Store c to make sure sb has a UTF16 char
                StringUTF16.putCharSB(this.value, j++, c);
                count = j;
                i++;
                StringUTF16.putCharsSB(this.value, j, s, i, end);
                count += end - i;
                return;
            }
        }
    } else {
        StringUTF16.putCharsSB(this.value, count, s, off, end);
    }
    count += end - off;
}
```
While I do assume it should be faster to let machine code perform the copy and compression than to let Java code do it char by char, there should be another benchmark to actually prove this claim.
> char[] ca = new char[end - off];
Your code here has a memory allocation, which may cause a slowdown.
This is exactly what I wanted to express with my posting.
I agree with you that this PR can improve the performance of Reader's method int read(char[] cbuf, int off, int len), but it may not help the performance of Reader::getChars.
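For context, a simplified sketch (not the actual JDK implementation) of the Reader-driven pattern that benefits: chunks are read into a char[] buffer and bulk-appended, which is exactly the append(char[]) path this PR speeds up:

```java
import java.io.IOException;
import java.io.Reader;

final class ReadAll {
    // Read chunks into a char[] buffer, then bulk-append them; each
    // sb.append(cbuf, 0, n) call hits the optimized bulk-copy path.
    static String readAll(Reader in) throws IOException {
        char[] cbuf = new char[8192];
        StringBuilder sb = new StringBuilder();
        int n;
        while ((n = in.read(cbuf, 0, cbuf.length)) > 0) {
            sb.append(cbuf, 0, n);
        }
        return sb.toString();
    }
}
```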
I have performed a JMH benchmark to compare the code with and without the optimization, and the result is surprising:

| Benchmark     | (SIZE) | Mode  | Cnt | Score     | Error     | Units |
|---------------|--------|-------|-----|-----------|-----------|-------|
| 25 (modified) | 2      | thrpt | 25  | 71277668  | ± 5549555 | ops/s |
| 25 (modified) | 128    | thrpt | 25  | 33916527  | ± 2631800 | ops/s |
| 25 (modified) | 1024   | thrpt | 25  | 4291498   | ± 401636  | ops/s |
| 25 (modified) | 8192   | thrpt | 25  | 419871    | ± 63557   | ops/s |

| Benchmark     | (SIZE) | Mode  | Cnt | Score     | Error     | Units |
|---------------|--------|-------|-----|-----------|-----------|-------|
| 25 (original) | 2      | thrpt | 25  | 159882761 | ± 2900397 | ops/s |
| 25 (original) | 128    | thrpt | 25  | 24093787  | ± 1706259 | ops/s |
| 25 (original) | 1024   | thrpt | 25  | 3794393   | ± 28097   | ops/s |
| 25 (original) | 8192   | thrpt | 25  | 491340    | ± 5569    | ops/s |
Actually, for appended lengths of 128...1024 characters, the modified case is faster. This means the benefit of StringUTF16::compress in fact outweighs the penalty of the new char[] allocation! While for size 1024 the gain is rather small, for size 128 it is huge: more than 40% more throughput! 🚀
I will repeat the benchmark with more steps in the range 2...1024 to see where the break-even point is, so we can enable the optimization in a performance-wise "safe" range. Stay tuned! 😃
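A minimal JMH harness in the spirit of the numbers above (a hypothetical reconstruction; the class name, parameter values, and setup are assumptions, not the poster's actual benchmark code):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Thread)
public class AppendCharSequenceBench {

    @Param({"2", "128", "1024", "8192"})
    int size;

    String src;

    @Setup
    public void setup() {
        // Latin-1-only content, so the builder stays in the LATIN1 coder
        src = "a".repeat(size);
    }

    @Benchmark
    public String append() {
        // Exercises the appendChars(CharSequence, int, int) path under test;
        // run with -prof gc to also see the cost of the temporary char[].
        return new StringBuilder().append((CharSequence) src, 0, size).toString();
    }
}
```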
@wenshao Why did you remove UTF16::compress from your proposal? Wasn't it providing the expected benefit?
StringUTF16::compress has been added in PR #24967
@wenshao This pull request has been inactive for more than 4 weeks and will be automatically closed if another 4 weeks passes without any activity. To avoid this, simply issue a /touch or /keepalive command to the pull request. Feel free to ask for assistance if you need help with progressing this pull request towards integration!
I just noticed there is LibraryCallKit::inline_string_getCharsU, which is for byte -> char conversion. I wonder if we can slightly adapt it for char -> byte conversion.
If the performance is the same, the current proposal to use Unsafe should be more maintainable: the maintainer only needs to know Java. If intrinsics are used, the maintainer needs to know C++, Java, and the intrinsic mechanism.
I consulted @rose00 - intrinsics are costly, so a next step might be removing the intrinsics for toBytes (which allocates a new byte array, so it is unsuitable here) and getChars in UTF16. This looks like the right direction to move in.
/integrate
Going to push as commit e2feff85995cf2d0b8ecc2262cf4e74b74de3e31.
Since your change was applied there have been 422 commits pushed to the master branch:
- 16da81eb439e48459e4ca19d6f97c0de5e2d2398: 8360817: [ubsan] zDirector select_worker_threads - outside the range of representable values issue
- c8517356314c9dd1123401a21968009066053e5b: 8364115: Sort share/services includes
- 317dacc308993d534aeba397d0550ad056fe595b: 8364159: Shenandoah assertions after JDK-8361712
- ... and 419 more: https://git.openjdk.org/jdk/compare/509105761492ced0ecdc91aae464dcd016e2a4d7...master
Your commit was automatically rebased without conflicts.
@wenshao Pushed as commit e2feff85995cf2d0b8ecc2262cf4e74b74de3e31.
💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.