`find_aslr` in `linux.py` may return stale DTB/KASLR/ASLR values after soft reboot on CentOS 7
Describe the bug
On CentOS 7, when taking a memory dump after a soft reboot (via the reboot command, not a hard/power-off reboot), volatility3/framework/automagic/linux.py::find_aslr may return stale values for DTB, KASLR, and ASLR.
This happens intermittently (typically after a few soft reboots) and causes all Linux plugins to fail due to incorrect memory translation.
The issue seems to occur because the function grabs the first matching swapper signature, which can point to an outdated and unused task_struct left behind in memory due to the soft reboot.
Context
- Volatility Version: 2.26.0
- Operating System: Ubuntu 22.04
- Python Version: 3.10
- Suspected Guest Operating System: CentOS 7
- Command: `python3 vol.py -f centos7_mem.raw linux.pslist`
To Reproduce
Steps to reproduce the behavior:
- Boot a CentOS 7 system
- Soft reboot multiple times using `reboot`
- After a few reboots, take a memory dump
- Run a Linux plugin like `linux.pslist` after each reboot and save the DTB, ASLR, and KASLR values
- Observe that the DTB, ASLR, and KASLR values match a previous reboot, and plugins fail to produce output
Expected behavior
The scanner should identify a valid and current DTB/ASLR/KASLR triple that reflects the active kernel and enables proper plugin functionality.
Example output
```python
vollog.debug(f"Linux ASLR shift values determined: physical {kaslr_shift:0x} virtual {aslr_shift:0x}")
vollog.debug(f"DTB was found at: 0x{dtb:0x}")
```
These values match previous dumps and are incorrect, leading to plugin failures.
Additional information
- The issue likely stems from this line in `find_aslr`: `swapper_signature = rb"swapper(\/0|\x00\x00)\x00\x00\x00\x00\x00\x00"`
- Since a soft reboot doesn't fully clear memory, the first match may be a stale copy of `swapper`'s `task_struct`.
- Manually skipping the first match and using the next valid `task_struct` resolves the issue; the plugins then work correctly.
- A hard reboot fixes the problem entirely, suggesting that memory cleanup wipes out the stale data.
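The multiple-match behavior described above is easy to demonstrate outside Volatility. The sketch below (my own illustration, not Volatility code; the synthetic buffer layout is an assumption) scans a buffer containing a leftover copy of `swapper`'s `comm` field from a previous boot followed by the live kernel's copy, and shows that the signature matches both, with the stale copy first:

```python
import re

# Signature copied from find_aslr in the report: matches the start of
# swapper's task_struct comm field ("swapper/0" or "swapper" + nulls).
swapper_signature = rb"swapper(\/0|\x00\x00)\x00\x00\x00\x00\x00\x00"

# Synthetic memory buffer: a stale copy left behind by a soft reboot at a
# lower offset, and the active kernel's copy further up.
memory = (
    b"\x00" * 0x100
    + b"swapper/0" + b"\x00" * 6   # stale comm (previous boot)
    + b"\x00" * 0x100
    + b"swapper/0" + b"\x00" * 6   # current comm (active kernel)
)

# re.finditer yields every candidate; taking only the first one is what
# latches find_aslr onto the stale copy.
offsets = [m.start() for m in re.finditer(swapper_signature, memory)]
print(offsets)  # → [256, 527]: the first candidate is the stale one
```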
Suggested Fix
Introduce validation logic that constructs the virtual layer from the detected DTB before accepting the detected values. If access through that layer fails, or plugin tests return invalid structures, fall back to the next match of the swapper signature in memory, looping through candidates until a working memory layer is constructed.
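The candidate loop described above could be sketched roughly as follows. This is a minimal illustration, not Volatility's actual stacker API: `find_valid_swapper` and its `validate` callback are hypothetical names, where `validate` stands in for "derive the DTB from this candidate and try to construct a working virtual layer":

```python
import re
from typing import Callable, Optional

# Signature copied from find_aslr in the report.
SWAPPER_SIGNATURE = rb"swapper(\/0|\x00\x00)\x00\x00\x00\x00\x00\x00"


def find_valid_swapper(memory: bytes,
                       validate: Callable[[int], bool]) -> Optional[int]:
    """Return the offset of the first swapper signature match accepted by
    the caller-supplied validate() callback, instead of blindly taking the
    first match as find_aslr does today."""
    for match in re.finditer(SWAPPER_SIGNATURE, memory):
        if validate(match.start()):
            return match.start()
    return None  # no candidate produced a working layer


if __name__ == "__main__":
    # Hypothetical usage: two candidates; the first (stale) one fails
    # validation, so the second (live) one is selected.
    memory = (b"\x00" * 64
              + b"swapper/0" + b"\x00" * 6      # stale copy
              + b"\x00" * 64
              + b"swapper/0" + b"\x00" * 6)     # live copy
    live_offset = 64 + 15 + 64                  # offset of the live copy
    chosen = find_valid_swapper(memory, lambda off: off == live_offset)
    print(hex(chosen))  # → 0x8f (the live candidate, not the stale one)
```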
Thanks for reporting this. Do you have a sample that's affected you could share?
I've not yet been able to recreate this myself - if you can share a sample that would be very helpful.
It's also worth testing the very latest commit from the develop branch to see if that helps. I spotted in your info that you're using 2.26.0, which is the latest release, but there will be improvements in the develop branch; you can download it here: https://github.com/volatilityfoundation/volatility3/archive/refs/heads/develop.zip
Uploaded the dump to: here
Steps to reproduce:
- Turn on the guest machine
- Log in as root and reboot the machine a couple of times using the CLI (soft reboot)
- Dump the memory using virsh: `virsh dump centos7.0 /tmp/centos7-2 --live --memory-only`
- Then run volatility: `python3 vol.py -vvvvvv --remote-isf-url 'https://raw.githubusercontent.com/leludo84/vol3-linux-profiles/main/banners-isf.json' -f /tmp/centos7-2 linux.psaux`
The new LinuxIntelVMCOREINFOStacker hides the problem and doesn't print the DTB, ASLR, and KASLR values. I had to remove the new stacker and use the old one (LinuxIntelStacker) to investigate this issue. It turned out that the values the stacker found were correct, but for a previous reboot.
Make sure you clean the Volatility cache before testing:
`rm -rf /root/.cache`
Debug output:
```
DEBUG    volatility3.framework.automagic.linux: Linux ASLR shift values determined: physical 8400000 virtual 2a000000
DEBUG    volatility3.framework.automagic.linux: DTB was found at: 0xa010000
DETAIL 2 volatility3.framework.automagic.stacker: Stacked IntelLayer using LinuxIntelStacker
```
If we hard reset the machine and take another snapshot, the memory is cleared and everything goes back to normal; Volatility works again. I can reproduce this bug easily on QEMU using a clean CentOS 7 install.
Hello,
We are aware of this issue, but a proper fix is a bit complicated due to some Volatility 3 internals and how address stackers work. We do plan to have it fixed in the next 2 months or so though (by the next release). I will update here when a branch is ready for testing. Thank you for the report!
Great, thanks!
Any updates?