VisualVM Retaining sizes is stuck at 33%
Describe the bug I have a 6GB heap dump that I want to analyze, mainly to find the sources of several memory leaks. When I start determining the Dominators, it goes through three progress bars. The third one (Computing Retained Sizes) instantly gets stuck at 33%, when GC sizes are computed. And while it seems to keep working (after allocating 16GB at startup), it barely uses any resources: about 7% of my entire CPU.
Is this normal? The progress bar, while improved, is basically still useless. (I remember a time when there was no % at all.)
To Reproduce Steps to reproduce the behavior: Start VisualVM, take any heap dump that is around 6-10GB large, and try to determine the Dominators.
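For reference, a dump of the current JVM can be captured programmatically via the standard HotSpotDiagnostic MXBean; a minimal sketch (the `heap.hprof` path is a placeholder, and `jmap -dump:live,format=b,file=heap.hprof <pid>` does the same thing from the command line):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes a binary heap dump of this JVM to the given path.
    // The file must not already exist, and recent JDKs require
    // the path to end with ".hprof".
    static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true); // true = live objects only
    }

    public static void main(String[] args) throws Exception {
        new File("heap.hprof").delete(); // dumpHeap refuses existing files
        dump("heap.hprof");
        System.out.println(new File("heap.hprof").length());
    }
}
```

The resulting `.hprof` file can then be opened in VisualVM via File -> Load.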
Expected behavior I guess it should be faster? And progress should actually be communicated, e.g. "1000/432312 files processed". Maybe also "performance options" that could be set to utilize the system hardware a lot more?
VisualVM log Even when run through the command line, there are zero logs besides: "Program found another console was used so it will be using the console provided" logfile.txt
Is there a way to get your heap dump, so we can investigate whether this is normal behaviour or not? Thanks.
BTW: There is no need to run VisualVM with 16G Xmx.
@thurka that might take a while, due to the size of it. (here the lines are <100kbit in download speeds and <10kbit upload) Also, I found the memory leak through other means, so I kind of have to break my code again to get a heap dump that big again.
But yeah it should be possible. I just ask for some time.
> (here the lines are <100kbit in download speeds and <10kbit upload)
Ouch!
> But yeah it should be possible. I just ask for some time.
No problem. Make sure that you compress the heap dump before uploading (zip -9, gzip -9, bzip2 -9). The compression ratio is very good for heap dumps.
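A quick sketch of the compression step (the dump filename is a placeholder; a zero-filled stand-in file is created here just so the commands run end to end):

```shell
# Stand-in for the real dump - replace with your actual .hprof file
dd if=/dev/zero of=heap.hprof bs=1024 count=10240

# Compress before uploading; heap dumps contain a lot of redundancy,
# so the ratio is very good (in this thread: 5GB -> 777MB).
gzip -9 -k heap.hprof        # writes heap.hprof.gz, -k keeps the original

# Equivalent alternatives mentioned above:
#   zip -9 heap.hprof.zip heap.hprof
#   bzip2 -9 -k heap.hprof
ls -lh heap.hprof heap.hprof.gz
```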
Once you upload it, send me the link via email. Thanks once again.
@thurka sorry for the delay. I found an old file that was only thrown into the recycle bin instead of being deleted. (Said file is one that was created with this exact issue.) 5GB big. First I tried uploading it, then saw that compression would help. Now uploading a 777MB file over a 13kbit upload line. That is fine ^^"
Next comment will be the link :)
Thanks, I have the heap dump.
Thank you. I hope this helps improve VisualVM :)
@thurka anything interesting found with the heap dump?
Yes, there is an instance of java.util.concurrent.ConcurrentLinkedQueue which has >300000 elements (lambdas from net.minecraft.world.chunk.listener.ChainedChunkStatusListener). These lambdas and the corresponding java.util.concurrent.ConcurrentLinkedQueue$Node instances have very long paths to GC root - up to 300200. This, together with the fact that the lambdas reference other objects, causes the unusually long computation of retained sizes. I will try to speed it up.
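The shape described here can be approximated with a small sketch: a ConcurrentLinkedQueue filled with capturing lambdas, so each Node in the internal chain also references further objects. (The Minecraft class names from the report are only mentioned in comments, not used; the element count is the one reported.)

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class LongQueueDemo {
    // Builds a queue whose internal ConcurrentLinkedQueue$Node chain is
    // n links deep - the "very long path to GC root" described above.
    static ConcurrentLinkedQueue<Runnable> buildQueue(int n) {
        ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            // Each lambda captures a value, so every node references
            // further objects - like the ChainedChunkStatusListener
            // lambdas in the dump.
            queue.add(() -> System.out.print(id));
        }
        return queue;
    }

    public static void main(String[] args) {
        // Scaled to the reported element count (>300,000 lambdas).
        ConcurrentLinkedQueue<Runnable> queue = buildQueue(300_000);
        // size() is itself an O(n) walk of the node chain - a hint of the
        // per-object work a retained-size computation has to repeat.
        System.out.println(queue.size()); // prints 300000
    }
}
```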
I love Minecraft sometimes xD The devs shredded a tool for finding performance issues/memory leaks without even knowing it xD
I have a similar (in terms of memory structure, not content) 1.8GB log file from an Akka-based application. This application makes extensive use of ConcurrentLinkedQueue$Nodes. The problem for analysis here is that you need the queue info to work out where the real usage is, and the only way to get this is to Compute Retained Size.
Unfortunately, I expect this involves traversing the entire queue (100K nodes in memory across a number of queues), which takes VVM a long time. I assume it's reasonably single-threaded.
The percentage is going up over a long period (>24hrs), so I'm hopeful it will eventually finish, but there are some improvements that could be made:
Improvement Ideas
- Provide output to the logfile (About -> Logfile) every X operations (where X is reasonably large but arbitrary)
- If possible, provide a numeric indicator of how many items are left (I assume the 97% is based on something)
- If possible, periodically save partial progress to a file, allowing the process to be quit and resumed
- Look at optimising this - maybe it can be parallelised more; it would then potentially be something I could chuck cloud hardware at to speed it up a lot
Edit: Automatic updates restarted my PC and I have to start from scratch again 😭
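As a toy illustration of the parallelisation idea above (not VisualVM's actual algorithm), independent subgraphs such as separate queues could at least be traversed concurrently; a sketch using a parallel stream, where the O(n) size() traversal stands in for the per-node work:

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelCount {
    // Counts the elements of several independent queues in parallel.
    // size() on ConcurrentLinkedQueue walks the whole node chain, so each
    // queue is a chunk of traversal work that can run on its own core.
    static long totalElements(List<ConcurrentLinkedQueue<Integer>> queues) {
        return queues.parallelStream()
                     .mapToLong(ConcurrentLinkedQueue::size)
                     .sum();
    }

    public static void main(String[] args) {
        // 8 independent queues of 10,000 elements each.
        List<ConcurrentLinkedQueue<Integer>> queues = IntStream.range(0, 8)
            .mapToObj(q -> {
                ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
                IntStream.range(0, 10_000).forEach(queue::add);
                return queue;
            })
            .collect(Collectors.toList());
        System.out.println(totalElements(queues)); // prints 80000
    }
}
```

A single 300,000-node chain would not split this way, of course; the point is only that work across unrelated object subgraphs need not be serialised.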
> I have a similar (in terms of memory structure, not content) 1.8GB log file from an Akka-based application.
A 1.8GB log file?? Do you mean a 1.8GB heap dump? If so, it would be great if you could share it, so we can make sure that we improved your case. Thanks.
@Speiger, did you manage to complete the process after some time? Or was it always stuck at 33%? I am facing a similar issue, and VisualVM has been running for the past hour, stuck at 33%.
@sanket1729 Nope. I gave up and simply started scanning my code and found my bugs.
Anyway, using streams and lambda chains will basically kill VisualVM's usability, and sadly they become more common by the day. So @thurka would have to fix this overloading process somehow; otherwise they can effectively mark it as "useless", because it gets overloaded instantly by large-scale projects.