gdb-pt-dump
Allow filtering addresses while the page table is being parsed instead of after
My understanding of the code is that we first parse the entire page table and then apply the filters. This makes sense when the purpose of the filters is just to limit the amount of information shown, but I need to filter out page ranges for performance reasons.
I'm running a KASAN AArch64 image, which results in the following additional page table entries:
```
---[ Kasan shadow start ]---
0xffffffc000000000-0xffffffc004000000 64M PTE RW NX SHD AF UXN MEM/NORMAL
0xffffffc004000000-0xffffffc040000000 960M PMD
0xffffffc040000000-0xffffffc400000000 15G PGD
0xffffffc400000000-0xffffffc480800000 2056M PTE ro NX SHD AF UXN MEM/NORMAL
0xffffffc480800000-0xffffffc481000000 8M PMD
0xffffffc481000000-0xffffffc481399000 3684K PTE RW NX SHD AF UXN MEM/NORMAL
0xffffffc481399000-0xffffffc481400000 412K PTE
0xffffffc481400000-0xffffffc482000000 12M PMD
0xffffffc482000000-0xffffffc483001000 16388K PTE RW NX SHD AF UXN MEM/NORMAL
0xffffffc483001000-0xffffffc483200000 2044K PTE
0xffffffc483200000-0xffffffc4c0000000 974M PMD
0xffffffc4c0000000-0xffffffc7c0000000 12G PGD
0xffffffc7c0000000-0xffffffc7ebe00000 702M PMD
0xffffffc7ebe00000-0xffffffc7ebfee000 1976K PTE
0xffffffc7ebfee000-0xffffffc7ebfff000 68K PTE RW NX SHD AF UXN MEM/NORMAL
0xffffffc7ebfff000-0xffffffc800000000 327684K PTE ro NX SHD AF UXN MEM/NORMAL
---[ Kasan shadow end ]---
```
I don't really care about these entries, but gdb-pt-dump still tries to parse them, which ends up taking forever. I'm wondering if, instead of filtering addresses after we've parsed the page table, we could skip addresses while the page table is being parsed, so that I could specify a range that completely skips this KASAN memory.
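The idea can be sketched as pruning during the table walk: if an intermediate entry's entire virtual-address span falls inside a user-specified skip range, the walker never descends into that subtree, so the excluded region is never read at all. This is a minimal sketch over a toy two-level table; the function names, the nested-list table representation, and the `skip_ranges` parameter are all hypothetical, not gdb-pt-dump's real API.

```python
def is_skipped(va_start, va_end, skip_ranges):
    """True if [va_start, va_end) lies entirely inside some skipped range."""
    return any(s <= va_start and va_end <= e for s, e in skip_ranges)

def walk(table, va_base, span, skip_ranges, out):
    """Walk a toy nested-list page table, collecting (start, end, perms)
    leaf mappings and pruning subtrees whose whole VA span is skipped."""
    entry_span = span // len(table)
    for i, entry in enumerate(table):
        va = va_base + i * entry_span
        if entry is None or is_skipped(va, va + entry_span, skip_ranges):
            continue  # prune before descending: this is where time is saved
        if isinstance(entry, list):
            # Intermediate table: recurse only into non-skipped subtrees.
            walk(entry, va, entry_span, skip_ranges, out)
        else:
            # Leaf mapping: record it.
            out.append((va, va + entry_span, entry))

# Toy table: 4 top-level entries, each covering 0x1000 of VA space,
# where the second subtree stands in for the KASAN shadow region.
leaf_table = ["RW"] * 4
top = [leaf_table, leaf_table, None, leaf_table]
mappings = []
walk(top, 0x0, 0x4000, [(0x1000, 0x2000)], mappings)
# mappings now contains only leaves outside 0x1000-0x1fff.
```

The key point is that the range check happens at every level of the walk, so a skip range covering many gigabytes (like the KASAN shadow) costs a couple of comparisons at the PGD level instead of millions of PTE reads.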
This is a good optimization idea. Would you like to write a patch so that the range gets considered during parsing?
Note that I'm slowly working on an improvement of the project to write the core code in Rust: https://github.com/martinradev/pt-dump With it, parsing large page tables is significantly faster.
> Would you like to write a patch so that the range gets considered during parsing?
Unfortunately I don't have time at the moment, and I'm not sure when I'd be able to look into it.
> Note that I'm slowly working on an improvement of the project to write the core code in Rust: martinradev/pt-dump. With it, parsing large page tables is significantly faster.
Awesome! If you could check that the KASAN use case is reasonably fast when you measure that version's performance, or eventually implement this feature request there, that would be great.