ScoutSuite
Resource Cache Tuning Mechanisms
**Describe the Problem**
In our ScoutSuite implementation, we execute from within AWS Lambda execution environments. Lambda reuses a given environment across multiple invocations, which means ScoutSuite's in-memory state persists between runs. One challenge we've identified is the AWS resource caching mechanism.
https://github.com/nccgroup/ScoutSuite/blob/master/ScoutSuite/providers/aws/facade/rds.py#L31
The resource caching for AWS is keyed on region. However, a given execution environment may run 10 or 15 accounts with a random assortment of regions. If a region overlap occurs, where two accounts are scanned in the same region, the resource cache does not pull in the second account's set of resources.
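To make the failure mode concrete, here is a minimal sketch of a region-keyed cache (this is illustrative only, not ScoutSuite's actual code; `fetch_resources` is a hypothetical stand-in for the real AWS API call):

```python
# Cache keyed on region only: a second account scanned in the same warm
# process for the same region silently gets the first account's results.
resource_cache = {}

def fetch_resources(account, region):
    # Hypothetical stand-in for the real AWS API call.
    return [f"{account}-{region}-db"]

def get_resources(account, region):
    if region not in resource_cache:          # keyed on region only
        resource_cache[region] = fetch_resources(account, region)
    return resource_cache[region]

# First invocation scans account A; a later invocation in the same warm
# execution environment scans account B in the same region:
print(get_resources("account-a", "us-east-1"))  # ['account-a-us-east-1-db']
print(get_resources("account-b", "us-east-1"))  # stale: ['account-a-us-east-1-db']
```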
**Describe the solution you'd like / Describe alternatives you've considered**
I'm open to different solutions. The caching could be keyed on, say, account ID or alias in addition to region. Alternatively, an argument could be passed in to enable/disable caching. I'd love to hear your thoughts on the approach you'd want to take - I'm willing to execute.
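As a rough sketch of what the first option could look like (again hypothetical names, not a proposed patch): key the cache on an `(account, region)` tuple and accept a flag to bypass caching entirely.

```python
# Account-aware cache key plus an opt-out flag.
resource_cache = {}

def fetch_resources(account, region):
    # Hypothetical stand-in for the real AWS API call.
    return [f"{account}-{region}-db"]

def get_resources(account, region, use_cache=True):
    key = (account, region)                   # account-aware cache key
    if not use_cache or key not in resource_cache:
        resource_cache[key] = fetch_resources(account, region)
    return resource_cache[key]

print(get_resources("account-a", "us-east-1"))  # ['account-a-us-east-1-db']
print(get_resources("account-b", "us-east-1"))  # ['account-b-us-east-1-db']
```

With the tuple key, two accounts sharing a region no longer collide, and `use_cache=False` gives the enable/disable escape hatch.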
I brute-force fixed this in our fork by simply removing the `if` cache checks, but that's not a long-term solution if we want to stay current.
- Even though the environment (i.e. the container) is reused, doesn't each execution/process run in a separate memory space? I fail to see why they would share memory. Or is it a Python-specific issue?
- I must admit I'd not seen this "caching" mechanism before, and don't see why it's necessary. It's only used by a handful of services (EC2, ELB, ELBv2, ElastiCache, RDS, RedShift, SNS) and my guess is this is an implementation specific to one contributor.
I'd think the easiest solution is to reimplement the facades for these services and remove the use of caches? Unless I'm failing to see why they were necessary in the first place.
@j4v That's one of the unfortunate parts of Lambda - the memory space is reused. All class-based objects persist unless specifically managed to expire at the end of an invocation. It's part of AWS's attempt to reuse the container and keep it "hot".
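A minimal illustration of this behavior (a generic Lambda handler sketch, not ScoutSuite code): anything defined at module level is initialized once per execution environment and survives across handler calls while the container stays warm.

```python
# Module-level state is initialized once per Lambda execution environment
# and is shared across warm invocations of the handler.
invocation_count = 0

def handler(event, context):
    global invocation_count
    invocation_count += 1  # keeps incrementing while the container is warm
    return {"invocation": invocation_count}

# Simulating two invocations landing on the same warm environment:
print(handler({}, None))  # {'invocation': 1}
print(handler({}, None))  # {'invocation': 2}
```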
After I ripped out the existing caching functionality, I ran into further issues with asynchronous event handling. I'm going to check with the team, but we may just migrate our use case to Fargate Scheduled Tasks rather than try to retool these facades. Will circle back around on this. It kind of feels like we may have tried too hard to shoehorn ScoutSuite into Lambdas.
Unfortunately, we only started running into this issue consistently after adding a couple dozen AWS accounts to our stack over time.
> After I ripped out the existing caching functionality, I ran into further issues with asynchronous event handling.
This is surprising considering dozens of other services (all but 7) don't make use of this "cache". If you make a PR I can have a look?