Added MDE Support
Comprehensive MDE policy validation with 46 automated checks organized around custom benchmarks, covering antivirus configurations, global settings (manual review), and policy design quality (manual review).
See MDE-FEATURE-DOCUMENTATION.md for complete feature documentation.
@bdrogja thanks a lot for the PR. This is AMAZING work 👏
I took a look at the implementation, and while I appreciate the effort to use a config-driven flow for the tests, it makes it a little hard to maintain over the long term. Plus, this format deviates a lot from the rest of the Maester tests.
With the Maester tests, we want users who run a test and notice a failure to be able to open the test file, open the associated cmdlet, and easily understand the logic of the check.
The EIDSCA implementation is the closest to what you have in this PR. One key difference is that we use a JSON file to drive the generation of the EIDSCA tests, since they mostly check config.
See https://maester.dev/docs/contributing#updating-eidsca-tests-and-documentation
I feel like we can do the same here and generate the MDE tests at build time. This way the final test that runs and the markdown will follow the format of the rest of the tests.
We can re-use a lot of the existing cmdlets as-is, since the generated tests will be the ones invoking them.
In terms of custom config we have plans to build on https://github.com/maester365/maester/blob/main/tests/maester-config.json so users could customize the parameters they pass into each cmdlet.
We also want to build a UX in the report to be able to customize the parameters. Hence keeping the config files consistent will make it easy to add these core features.
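As a rough sketch of what a per-test override in `maester-config.json` could look like (the test ID, severity field, and parameter names here are hypothetical, not the current schema):

```json
{
  "TestSettings": [
    {
      "Id": "MDE.1001",
      "Severity": "High",
      "Parameters": {
        "RequiredPlatforms": ["Windows"],
        "ManagedBy": ["msSense", "mdm"]
      }
    }
  ]
}
```

Keeping overrides in this shape would let both the test runner and a future report UX read from the same place.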
Thoughts?
Thanks for the very fast review and feedback on this. I know you have a lot of projects on the table; I appreciate it.
I think it shouldn't be a big deal to build it the same way as the EIDSCA tests. Even though it increases the code size and redundancy a lot, I agree that it's still more understandable and readable for admins.
What's your idea for the config file? Should I remove the global config file and merge it directly into each test? The filters (only Windows for now, only msSense- and MDM-managed, and so on) can be configured directly in the cmdlets, as well as the compliance logic and the policy filtering feature.
Or do you have a different idea?
Thanks a lot @bdrogja that would be awesome.
I think if we can make these parameters for the cmdlets and have sensible defaults, then when we build a custom UX we can generate the UI based on the cmdlet parameters for each test.
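For example, a generated test cmdlet could expose its filters as parameters with sensible defaults, something like this (the function and parameter names are illustrative, not the actual Maester API):

```powershell
function Test-MtMdeAntivirusPolicy {
    # Hypothetical sketch: names and defaults are illustrative only.
    [CmdletBinding()]
    param (
        # Platforms the check applies to; defaults to Windows only.
        [string[]] $Platform = @('Windows'),

        # Management channels to include when filtering policies.
        [string[]] $ManagedBy = @('msSense', 'mdm')
    )

    # ...policy filtering and compliance logic for the selected
    # platforms and management channels would go here...
}
```

A report UX could then discover these parameters (e.g. via `Get-Command`) and render inputs with the defaults pre-filled.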
@bdrogja thoughts...