metasploit-framework
Difference in store_loot db behavior when connected remotely
Steps to reproduce
Noticed when testing https://github.com/rapid7/metasploit-framework/pull/17374
When loot is stored, the file is written locally:
https://github.com/rapid7/metasploit-framework/blob/37fe3b909a298e46db21380ebcda7965ad6fe492/lib/msf/core/auxiliary/report.rb#L417-L430
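For reference, here is a minimal, self-contained Ruby sketch of that local naming scheme as I understand it from the resulting path; the helper name, truncation rules, and loot directory below are assumptions for illustration, not the actual report.rb code:

```ruby
require 'fileutils'

# Hypothetical helper approximating the local naming pattern seen below:
#   <timestamp>_<workspace>_<host>_<truncated ltype>_<random>.bin
def local_loot_path(loot_dir, workspace, host, ltype, ext = 'bin')
  ts   = Time.now.strftime('%Y%m%d%H%M%S')
  name = "#{ts}_#{workspace}_#{host}_#{ltype.gsub(/[^a-z0-9._]+/i, '')[0, 16]}_#{rand(1_000_000)}.#{ext}"
  File.join(loot_dir, name)
end

ccache_data = "\x05\x04..."  # placeholder for the Kerberos ccache bytes
path = local_loot_path('/tmp/loot', 'default', '192.0.2.2', 'mit.kerberos.ccache')

FileUtils.mkdir_p(File.dirname(path))
File.binwrite(path, ccache_data)
path  # store_loot ultimately returns this locally chosen path
```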
But when the ccache content and file name are sent to the remote DB, it creates a new path to avoid naming conflicts:
https://github.com/rapid7/metasploit-framework/blob/04e8752b9b74cbaad7cb0ea6129c90e3172580a2/lib/msf/core/web_services/servlet/loot_servlet.rb#L39-L45
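The 20-character prefix in the queried path suggests the servlet prepends a random hex string to the submitted file name; a small sketch of that idea, with the helper name and hex length being assumptions:

```ruby
require 'securerandom'

# Hypothetical helper: prepend a random hex prefix so two uploads with the
# same submitted file name cannot collide on the server side.
def remote_unique_path(loot_dir, submitted_name)
  File.join(loot_dir, "#{SecureRandom.hex(10)}-#{File.basename(submitted_name)}")
end

remote_unique_path('/tmp/loot', '20221218123340_default_192.0.2.2_mit.kerberos.cca_549283.bin')
# => "/tmp/loot/<20 hex chars>-20221218123340_default_192.0.2.2_mit.kerberos.cca_549283.bin"
```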
So the return value of store_loot is the locally chosen name:
/Users/user/metasploit-framework/spec/dummy/framework/config/loot/20221218123340_default_192.0.2.2_mit.kerberos.cca_549283.bin
Whilst querying for loot shows the path as having a different prefix:
/Users/user/metasploit-framework/spec/dummy/framework/config/loot/c90fd2477aff93d8769f-20221218123340_default_192.0.2.2_mit.kerberos.cca_549283.bin
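From a module's point of view the mismatch looks roughly like this; a sketch assuming the console is connected to a remote data service, and the exact framework.db.loots query options are an assumption:

```ruby
# Inside a module that mixes in Msf::Auxiliary::Report, with msfconsole
# connected to a remote data service:
ccache_data = File.binread('ticket.ccache')   # example loot bytes

returned_path = store_loot('mit.kerberos.ccache', 'application/octet-stream',
                           '192.0.2.2', ccache_data)

stored_path = framework.db.loots(workspace: myworkspace.name).last.path

returned_path == stored_path
# => false when the DB is remote: stored_path carries the servlet's random
#    prefix, while returned_path does not
```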
Expected behavior
I believe store_loot should generate a local file name and send it to the remote DB web service. We should then persist the loot contents locally using the file name chosen by the web service.
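As a rough illustration of that ordering, here is a self-contained toy sketch; the FakeLootService class and its report_loot method are hypothetical stand-ins for the real loot web service, not its actual API:

```ruby
require 'securerandom'
require 'fileutils'

# Toy stand-in: the service, not the client, decides the final on-disk name.
class FakeLootService
  Record = Struct.new(:path)

  def initialize(loot_dir)
    @loot_dir = loot_dir
  end

  def report_loot(name:, data:)
    # (a real service would also persist data on its side)
    Record.new(File.join(@loot_dir, "#{SecureRandom.hex(10)}-#{name}"))
  end
end

loot_dir    = '/tmp/loot'
ccache_data = "\x05\x04..."  # placeholder ccache bytes
suggested   = '20221218123340_default_192.0.2.2_mit.kerberos.cca_549283.bin'

# Proposed ordering: ask the service for the final name first, then persist the
# file locally under that same name, and return that path from store_loot.
record = FakeLootService.new(loot_dir).report_loot(name: suggested, data: ccache_data)
FileUtils.mkdir_p(loot_dir)
File.binwrite(record.path, ccache_data)
record.path  # now identical to what later loot queries report
```

With that ordering, the value returned by store_loot and the path recorded in the database can no longer drift apart.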
Current behavior
The path returned by store_loot differs from the path reported by subsequent loot queries.
Metasploit version
msf6 auxiliary(scanner/winrm/winrm_login) > version
Framework: 6.2.31-dev-6f9ebe4068
Console : 6.2.31-dev-6f9ebe4068
Hi!
This issue has been left open with no activity for a while now.
We get a lot of issues, so we currently close issues after 60 days of inactivity. It’s been at least 30 days since the last update here. If we missed this issue or if you want to keep it open, please reply here. You can also add the label "not stale" to keep this issue open!
As a friendly reminder: the best way to see this issue, or any other, fixed is to open a Pull Request.
Thanks for your contribution to Metasploit Framework! We've looked at this issue, and unfortunately we do not currently have the bandwidth to prioritize this issue.
We've labeled this as attic and closed it for now. If you believe this issue has been closed in error, or that it should be prioritized, please comment with additional information.