Add taskhound module for Windows scheduled task enumeration
Description
This PR introduces a new module, `taskhound`, to enumerate and categorize scheduled tasks on remote systems.
Key Features:
- Identifies tasks running with privileged accounts
- Detects tasks with stored credentials vs. token-based logon
- Includes comprehensive filtering options (excludes default tasks under `\Windows\` and default local SIDs like S-1-5-18, etc., unless enabled via option)
- Dual BloodHound format support + auto-detect (Legacy + BHCE)
- Rudimentary Tier 0 detection with AdminSDHolder and isTierZero flags for BHCE, SID mapping for Legacy
- Password age analysis for DPAPI dump viability
- Output options (plain, CSV, JSON)
- Backup functionality to save raw task XMLs
- Language-independent group membership analysis (because I was really dumb earlier; languages change, SIDs are eternal)

See https://github.com/1r0BIT/TaskHound for the original repo (and some more features).
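The stored-credentials-vs-token distinction can be read straight from the exported task XML. A minimal sketch of how such a check could look (hypothetical helper, not the module's actual code; the element names follow the Task Scheduler XML schema):

```python
import xml.etree.ElementTree as ET

# Task Scheduler tasks are serialized with this XML namespace.
NS = "{http://schemas.microsoft.com/windows/2004/02/mit/task}"

def uses_stored_credentials(task_xml: str) -> bool:
    """Return True if the task logs on with a stored password.

    A LogonType of "Password" means credentials are saved on the host;
    "InteractiveToken" or "S4U" run without a stored secret.
    """
    root = ET.fromstring(task_xml)
    logon = root.find(f".//{NS}Principals/{NS}Principal/{NS}LogonType")
    return logon is not None and logon.text == "Password"
```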
Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Deprecation of feature or functionality
- [x] This change requires a documentation update
- [ ] This requires a third party update (such as Impacket, Dploot, lsassy, etc)
Setup guide for the review
Setup only requires a (domain-joined) Windows machine with a few scheduled tasks to test features:
- 1x task from a High Value user with stored creds
- 1x task from a non-High Value user with stored creds
- 1x any scheduled task without stored creds
You can generate a suitable export for the JSON/CSV parsing using the following Cypher queries:
BHCE (JSON only):

```cypher
MATCH (n)
WHERE coalesce(n.system_tags, "") CONTAINS "admin_tier_0"
   OR n.highvalue = true
MATCH p = (n)-[:MemberOf*1..]->(g:Group)
RETURN p;
```
Legacy (JSON only because of `all_props`):

```cypher
MATCH (u:User {highvalue:true})
OPTIONAL MATCH (u)-[:MemberOf*1..]->(g:Group)
WITH u, properties(u) AS all_props, collect(g.name) AS groups, collect(g.objectid) AS group_sids
RETURN u.samaccountname AS SamAccountName, all_props, groups, group_sids
ORDER BY SamAccountName
```
Then just export as csv or json and feed it to taskhound.
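To sanity-check the Legacy export before feeding it in, the records can be parsed with a few lines of Python. This is a sketch against the export shape the query above produces (one record per user with `SamAccountName`, `all_props`, `groups`, `group_sids`), not taskhound's actual parser:

```python
import json

def load_highvalue_sids(export_json: str) -> dict:
    """Map each high-value user's SID to the set of its group SIDs.

    Assumes the shape produced by the Legacy Cypher query above.
    """
    mapping = {}
    for row in json.loads(export_json):
        sid = row["all_props"].get("objectid")
        if sid:
            mapping[sid] = set(row.get("group_sids", []))
    return mapping
```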
Screenshots (if appropriate):
Checklist:
- [x] I have run Ruff against my changes (via Poetry: `poetry run python -m ruff check . --preview`; use `--fix` to automatically fix what it can)
- [x] I have added or updated the `tests/e2e_commands.txt` file if necessary (new modules or features are required to be added to the e2e tests)
- [x] New and existing e2e tests pass locally with my changes
- [ ] If reliant on changes of third party dependencies, such as Impacket, dploot, lsassy, etc, I have linked the relevant PRs in those projects
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation (PR here: https://github.com/Pennyw0rth/NetExec-Wiki)
Man that's really hot ahah! Love it!!
Thanks for the PR, looks cool!
Hey there @NeffIsBack :). I added some features to the original project. Would you mind if I commit them here aswell before further checks are done?
Absolutely! Feel free to add anything you think is useful.
Soooooo, finally done :D. Lots of changes. I hope nothing breaks (it works on my machine :P).
Some things still need improvement, like the auto-detection for BHCE/Legacy BloodHound, which currently depends on the existence of specific attributes like isTierZero for BHCE. But it works for now.
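The attribute-based heuristic described above could look roughly like this (field names taken from the queries in this PR; this is an assumption about the export shape, not a stable BloodHound contract):

```python
def detect_bh_format(record: dict) -> str:
    """Guess whether a BloodHound export record is BHCE or Legacy.

    BHCE exports carry isTierZero / system_tags; Legacy exports carry
    highvalue (often nested in all_props). Anything else is unknown.
    """
    if "isTierZero" in record or "system_tags" in record:
        return "bhce"
    if "highvalue" in record or "all_props" in record:
        return "legacy"
    return "unknown"
```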
Features Added:
- Dual BloodHound format support + auto-detect (Legacy + BHCE)
- Rudimentary Tier 0 detection with AdminSDHolder and isTierZero flags for BHCE, SID mapping for Legacy
- Password age analysis for DPAPI dump viability
- Output options (plain, CSV, JSON)
- Backup functionality to save raw task XMLs
- Language-independent group membership analysis (because I was really dumb earlier; languages change, SIDs are eternal)
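The password-age analysis boils down to converting `pwdLastSet` (a Windows FILETIME: 100-nanosecond intervals since 1601-01-01 UTC) into an age. A sketch under that assumption, not the module's exact logic:

```python
from datetime import datetime, timedelta, timezone

# FILETIME values count 100 ns intervals from the Windows epoch.
WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def password_age_days(pwd_last_set: int, now: datetime) -> float:
    """Return the password age in days for an AD pwdLastSet value."""
    changed = WINDOWS_EPOCH + timedelta(seconds=pwd_last_set / 10_000_000)
    return (now - changed).total_seconds() / 86_400
```

A caller could then flag tasks whose account password is older than some threshold as candidates for a DPAPI dump attempt.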
Hey, thanks for the update. A few things that should be changed:
- Please use meaningful commit messages, so that it is clear why each commit was made and what it is supposed to change
- I don't think we should parse raw BloodHound data. We do have a BloodHound connector which we could use to interact with the database itself. There is no way of retrieving whether a user is part of T0 at the moment, but feel free to add that. The code is located in `/nxc/helpers/bloodhound.py`, and so far only setting a user or host to "owned" is implemented. Related: https://github.com/Pennyw0rth/NetExec/pull/616
Hey, sorry for the convoluted commit messages. I was on my phone earlier today and just checked the Copilot output. It essentially just removed commented-out docstrings that showed some weird behaviour while testing.
As for the connector: Great idea! I'll get to work on that :). But would it be possible to keep both? I actually run into scenarios quite often where the box I'm executing netexec from has no easy way of communicating directly with a bloodhound db. I think we could get the best of both worlds there. What do you think?
> Hey, sorry for the convoluted commit messages. I was on the phone today earlier and just checked the Copilot output. It essentially just removed commented out docstrings that showed some weird behaviour while testing.
No worries, but it would be nice for the future. I might squash merge now to avoid having 20+ identical commits in the history.
> As for the connector: Great idea! I'll get to work on that :). But would it be possible to keep both? I actually run into scenarios quite often where the box I'm executing netexec from has no easy way of communicating directly with a bloodhound db. I think we could get the best of both worlds there. What do you think?
Perhaps, but I think this is the wrong place for it. Currently we have the live connector to the database, and if we decide to integrate an offline version of this, the code should not sit in one specific module. I think we should use the existing infrastructure for now; if we decide to add an offline parser, that should happen in a separate PR and somewhere accessible to the entire application.
> Perhaps, but i think this is the wrong place for it. Currently we have the live connector to the database and if we would decide to integrate an offline version of this, the code should not sit in one specific module. I think we should use existing infrastructure for now and if we decide to add an offline parser this should happen in a separate PR and somewhere accessible for the entire application.
Hey there! Just a quick update: the connector for TaskHound itself is now finished and in testing. If you want to give it a spin, just check out the specific branch. Once that one has been battle-tested and merged to main on the primary repo, I'll get to work on the NetExec-specific integration.
Would that work for you? :)
I think so, yes. Currently there just shouldn't be separate BH parsing logic in the PR. Querying/identifying/parsing the tasks should be the first step for the module. Any BH stuff can be added later on.
Understood. Let me strip the logic for now so the PR can go into testing and I'll open an enhancement once the OpenGraph Integration has been battle-tested.
I am currently reworking the entire module with a "barebones" approach. For convenience and maximum value (without needing a BloodHound connector), I would like to leave a rudimentary LDAP lookup in place that does the following:
- Convert SID from SchedTask (if encountered) to samaccountname
- Group Membership Lookup via samaccountname
- Check for general "Tier 0" memberships and mark the Task accordingly
Would that be ok?
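The Tier 0 check in the last step stays language-independent if it matches on well-known SIDs instead of localized group names. A hypothetical sketch with an illustrative (not exhaustive) RID list:

```python
# Well-known RIDs of a few Tier 0 domain groups (illustrative subset):
# 512 Domain Admins, 516 Domain Controllers, 518 Schema Admins, 519 Enterprise Admins.
TIER0_RIDS = {"512", "516", "518", "519"}
# Built-in local groups with fixed SIDs, e.g. BUILTIN\Administrators.
TIER0_LOCAL_SIDS = {"S-1-5-32-544"}

def is_tier0(group_sids) -> bool:
    """Return True if any group SID indicates a Tier 0 membership."""
    for sid in group_sids:
        if sid in TIER0_LOCAL_SIDS or sid.rsplit("-", 1)[-1] in TIER0_RIDS:
            return True
    return False
```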
Sounds good. We should probably use the existing check_if_admin function, which probably needs to be altered a bit to accept an optional user instead of using the logged-in one. Just an idea; if the implementation isn't that easy, we could still reuse most of the function's code, though.
@NeffIsBack Done :). Everything should work as intended now.