8293170: Improve encoding of the debuginfo nmethod section
The nmethod "scopes data" section accounts for roughly 10% of an nmethod's size. The data is currently compressed with the Pack200 algorithm, which is good at encoding small integers (LineNumberTable, etc.). Exploiting the fact that about half of the data in this section is zeros, I reduce its size by another 30%.
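The core idea can be illustrated with a small sketch. This is not the actual HotSpot CompressedSparseData format, and the class and method names below are made up for illustration: a zero value costs a single flag bit, while a non-zero value pays a one-bit flag plus a variable-length (LEB128-style) byte encoding, so a stream that is roughly half zeros shrinks substantially.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- not the actual HotSpot implementation.
// Zero: a single 0 flag bit. Non-zero: a 1 flag bit followed by
// LEB128-style 7-bit groups with a continuation bit.
class SparseBitStream {
    private final List<Integer> bits = new ArrayList<>();
    private int pos = 0;  // read cursor

    void writeInt(int v) {
        if (v == 0) { bits.add(0); return; }   // zero: one flag bit
        bits.add(1);                            // non-zero: flag bit ...
        int u = v;
        do {                                    // ... then LEB128 bytes
            int chunk = u & 0x7F;
            u >>>= 7;
            chunk |= (u != 0) ? 0x80 : 0;       // continuation bit
            for (int i = 7; i >= 0; i--) bits.add((chunk >> i) & 1);
        } while (u != 0);
    }

    int readInt() {
        if (bits.get(pos++) == 0) return 0;     // zero decoded from one bit
        int v = 0, shift = 0, chunk;
        do {
            chunk = 0;
            for (int i = 0; i < 8; i++) chunk = (chunk << 1) | bits.get(pos++);
            v |= (chunk & 0x7F) << shift;
            shift += 7;
        } while ((chunk & 0x80) != 0);          // stop when continuation bit clear
        return v;
    }

    int sizeInBits() { return bits.size(); }
}
```

With this scheme, the sequence {0, 5, 0, 0, 300} costs 29 bits instead of 40 for five plain bytes; the win grows with the fraction of zeros.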
Testing: jtreg hotspot&jdk, Renaissance benchmarks
Progress
- [ ] Change must be properly reviewed (1 review required, with at least 1 Reviewer)
- [x] Change must not contain extraneous whitespace
- [x] Commit message must refer to an issue
Issue
- JDK-8293170: Improve encoding of the debuginfo nmethod section
Reviewers
- @Mkkebe (no known github.com user name / role)
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk pull/10025/head:pull/10025
$ git checkout pull/10025
Update a local copy of the PR:
$ git checkout pull/10025
$ git pull https://git.openjdk.org/jdk pull/10025/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 10025
View PR using the GUI difftool:
$ git pr show -t 10025
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/10025.diff
:wave: Welcome back bulasevich! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.
@bulasevich The following label will be automatically applied to this pull request:
hotspot-compiler
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
@bulasevich this pull request cannot be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request you can run the following commands in the local repository for your personal fork:
git checkout CompressedStream
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push
I believe this change is JVMCI agnostic since it does not change the API of DebugInformationRecorder.
Right. Thank you!
Webrevs
- 17: Full - Incremental (b0323003)
- 16: Full - Incremental (3b9f84e0)
- 15: Full (e9269942)
- 14: Full (75ae5808)
- 13: Full (1d6cdb73)
- 12: Full - Incremental (477609da)
- 11: Full - Incremental (a24683d9)
- 10: Full - Incremental (1135bac4)
- 09: Full - Incremental (3ceefe68)
- 08: Full - Incremental (e5f03dda)
- 07: Full - Incremental (99522fd0)
- 06: Full - Incremental (637c94be)
- 05: Full - Incremental (f365d780)
- 04: Full - Incremental (a461a10e)
- 03: Full - Incremental (c2054359)
- 02: Full - Incremental (fa82262c)
- 01: Full (c2e05c89)
- 00: Full (0de1526c)
My builds failed:
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/code/CompressedSparceDataReadStream.java:42: warning: [lossy-conversions] implicit cast from int to byte in compound assignment is possibly lossy
b |= (0xFF & curr_byte_) >> (8 - byte_pos_);
^
@bulasevich Please do not rebase or force-push to an active PR as it invalidates existing review comments. All changes will be squashed into a single commit automatically when integrating. See OpenJDK Developers’ Guide for more information.
What is the performance impact of making several of the methods virtual?
Good question! My experiments show that in the worst case, the throughput of the debug-info write path drops from 424 to 113 MB/s with virtual functions. Compared to compile time, this is negligible: compilation takes 1000 ms per method, while generating 300 bytes of scopes data with virtual functions (worst case) takes 3 ms. And I do not see any regression in benchmarks.
My builds failed:
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/code/CompressedSparceDataReadStream.java:42: warning: [lossy-conversions] implicit cast from int to byte in compound assignment is possibly lossy
b |= (0xFF & curr_byte_) >> (8 - byte_pos_);
^
Yes, it was a collision with the "8244681: Add a warning for possibly lossy conversion in compound assignments" change. I rebased my branch and fixed the issue. Now it should be OK.
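For context, the warning introduced by JDK-8244681 flags compound assignments that hide a narrowing cast: `b |= intExpression` on a `byte` silently truncates the int result. The usual fix, sketched below with a made-up helper name, is to make the cast explicit, which keeps the behavior identical and silences the lint:

```java
public class LossyFix {
    // `b |= intExpr` implicitly narrows int to byte, which javac now
    // reports as [lossy-conversions]. Casting the right-hand side to
    // byte explicitly keeps the same result and removes the warning.
    static byte merge(byte b, int currByte, int bytePos) {
        b |= (byte) ((0xFF & currByte) >> (8 - bytePos));  // explicit cast
        return b;
    }
}
```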
@bulasevich, it would be useful to have some examples of cases either in the JBS issues or the PR.
I was wondering more about read performance. I would expect that the debuginfo could be read many more times than it is written. Also, from 424 to 113 seems like a very large slowdown.
Right. With counters added to the virtual methods, I see that debug information is actually read less frequently than it is written. Anyway, let me rewrite the code without virtual functions.
@bulasevich, could you please add gtest unit tests checking CompressedSparseDataWriteStream/CompressedSparseDataReadStream?
@bulasevich, BTW, why do we have a lot of 0s? Is that a normal situation?
@bulasevich, Could you please add gtest unit tests checking CompressedSparseDataWriteStream/CompressedSparseDataReadStream?
Yes. Thanks
@dean-long @vnkozlov I eliminated the virtual methods, changed the implementation internals, and added tests. Let me ask you once again to review. Thanks!
@bulasevich This pull request has been inactive for more than 4 weeks and will be automatically closed if another 4 weeks passes without any activity. To avoid this, simply add a new comment to the pull request. Feel free to ask for assistance if you need help with progressing this pull request towards integration!
@bulasevich This pull request has been inactive for more than 8 weeks and will now be automatically closed. If you would like to continue working on this pull request in the future, feel free to reopen it! This can be done using the /open pull request command.