raddebugger
Crazy memory usage during .pdb parsing
I have noticed some problems with the initial parse of .pdb files causing allocation errors and system instability, probably because I'm working on a huge project: I have a 48-thread machine, and its 64 GB of RAM runs out pretty fast.
Once parsed, it's working as expected :) (and very fast!)
Yeah, this is definitely the primary blocker preventing it from being usable on gigantic projects like this, but glad to hear it worked after the conversion was... stumbled through!
While testing the converter on gigantic PDBs in the past (for correctness, more than profiling), I had expanded some of the hash tables to absurdly large sizes, to avoid extraordinarily long conversion times caused by poorly-sized tables and the resulting very large number of collisions. As a result, the converter has been allocating those absurdly large tables every time, for every PDB of every size, so it's not surprising you hit this!
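To make the sizing issue concrete, here's a minimal sketch of the direction the fix takes; this is not the converter's actual code, and the names (`SymbolHashTable`, `symbol_hash_table_alloc`), the load-factor target, and the floor capacity are all illustrative assumptions:

```c
// Hypothetical sketch: size the symbol hash table from this PDB's record
// count instead of allocating one fixed, worst-case table for every input.
#include <stdint.h>
#include <stdlib.h>

typedef struct SymbolHashTable
{
  uint64_t  slot_count;
  void    **slots;
} SymbolHashTable;

static uint64_t
next_pow2_u64(uint64_t x)
{
  uint64_t p = 1;
  while(p < x) { p <<= 1; }
  return p;
}

static SymbolHashTable
symbol_hash_table_alloc(uint64_t record_count)
{
  SymbolHashTable table = {0};
  uint64_t target = record_count*2;          // keep the load factor near 0.5 to limit collisions
  if(target < 4096) { target = 4096; }       // small floor so tiny PDBs still get a usable table
  table.slot_count = next_pow2_u64(target);  // power-of-two size for cheap masking
  table.slots = calloc(table.slot_count, sizeof(void *));
  return table;
}
```

The point is only that capacity scales with the input, so a small PDB no longer pays the memory cost that a gigantic one needs.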
In any case, I'm starting to work on this part of the problem. You should see some improvement with my recent commits, the latest being 44d9b57eb5e9e92aead85caae3c907a7c7923fff, but this should continue to improve over the next few weeks.
I'll be keeping this issue open & updating with progress as I make it.
Hah, it would be awesome if the debugger would work for us! I'm happy to test anything you need :)
I can try that commit next week. Where are the caches for .pdbs stored, if I want a "clean" launch?
Great! I've made some more progress on this at the top of dev. You'll just want to delete all of the .raddbg files, stored right next to the PDBs.
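For anyone who wants to script that cleanup, here's a minimal sketch, assuming a Windows environment (where the PDBs live); the directory path and helper name are illustrative, not part of the debugger:

```c
// Hypothetical helper: delete every *.raddbg cache file in a directory so the
// next launch reconverts the matching PDBs from scratch.
#include <windows.h>
#include <stdio.h>

static void
delete_raddbg_caches(const char *dir)
{
  char pattern[MAX_PATH];
  snprintf(pattern, sizeof(pattern), "%s\\*.raddbg", dir);

  WIN32_FIND_DATAA find_data;
  HANDLE find = FindFirstFileA(pattern, &find_data);
  if(find == INVALID_HANDLE_VALUE) { return; }
  do
  {
    char path[MAX_PATH];
    snprintf(path, sizeof(path), "%s\\%s", dir, find_data.cFileName);
    DeleteFileA(path); // forces a clean reconversion of this PDB
  }
  while(FindNextFileA(find, &find_data));
  FindClose(find);
}

int main(void)
{
  // Example: point this at the folder that holds your PDBs.
  delete_raddbg_caches("C:\\build\\bin");
  return 0;
}
```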
Did a quick test today on latest and there seemed to be zero issues! 👏 I'm gonna play around a bit more some other time; bit of a busy week.
Amazing! No problem. I'll close this issue for now, but just let me know if you run into more issues.
I did a few things to win some extremely low-hanging fruit, but the converter's time performance is still lacking for very large PDBs (it is still a single-threaded reference implementation), so I wouldn't be surprised if you're still running into that. I'm working on that problem as we speak, though, so it should improve with time.