
Append /foobar to the S3 URL so that detection works properly

Open nrathaus opened this issue 1 year ago • 11 comments

Currently, S3 detection is not working due to a missing path in the URL.

This patch adds a fake path so that S3 detection works.
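For context, the idea can be sketched roughly like this (function names and the exact response handling here are hypothetical illustrations, not the actual cloud_enum patch):

```python
# Sketch of the "fake path" probe idea, not the real cloud_enum code.
# Appending a nonexistent object key makes AWS return a response that
# distinguishes a missing bucket from an existing-but-protected one.

def build_probe_url(bucket_name, fake_path="foobar"):
    """Build a probe URL with a fake object path appended."""
    return f"http://{bucket_name}.s3.amazonaws.com/{fake_path}"

def classify_response(status_code, body):
    """Interpret an HTTP response for a probed bucket (assumed behavior):

    - "NoSuchBucket" in the body -> the bucket does not exist
    - 403                        -> bucket exists but access is denied
    - 200 or 404 (no NoSuchBucket) -> bucket exists (key may be missing)
    """
    if "NoSuchBucket" in body:
        return "nonexistent"
    if status_code == 403:
        return "protected"
    if status_code in (200, 404):
        return "exists"
    return "unknown"
```
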

nrathaus avatar Jun 06 '24 09:06 nrathaus

Thanks for this @nrathaus!

Must be new behavior, I wonder when that was implemented.

Does your fix still support the bucket listings when an open bucket is found?

initstring avatar Jun 06 '24 10:06 initstring

@initstring - it seems to be a rolling change - it doesn't work now when you try /foobar, but it worked an hour ago

I don't know what is going on...

I can't find at the moment a way to detect S3 :(

nrathaus avatar Jun 06 '24 10:06 nrathaus

I think there is some sort of rate limit / blocking - I have switched the VPN on and off, and now it seems that S3 detection works with the /foobar in place (without it, it doesn't)

When you hit the rate limit, everything returns non-existing - even completely valid URLs
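Since a rate-limited response looks identical to a genuinely missing bucket, one workaround is to retry with a growing delay before declaring a bucket non-existent. A rough sketch, assuming a hypothetical `probe` callable:

```python
import time

def probe_with_backoff(probe, bucket, retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry a bucket probe with exponential backoff.

    `probe` is a hypothetical callable returning True when the bucket
    appears to exist. Because rate limiting makes everything look
    nonexistent, we only report "missing" after several spaced attempts.
    Delay before retry attempt n is base_delay * 2**n (a guess, since
    AWS does not document the limit).
    """
    for attempt in range(retries):
        if probe(bucket):
            return True
        sleep(base_delay * (2 ** attempt))
    return False
```

The `sleep` parameter is injectable so the logic can be tested without real waiting.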

nrathaus avatar Jun 06 '24 10:06 nrathaus

Thanks for your work to troubleshoot this, @nrathaus!

If you (or anyone else reading) find a solution, please check back! Unfortunately, I probably won't have time to troubleshoot this myself soon. Sorry about that, things are just pretty busy at work/home right now.

initstring avatar Jun 06 '24 11:06 initstring

I think there is some sort of rate limit / blocking - I have switched the VPN on and off, and now it seems that S3 detection works with the /foobar in place (without it, it doesn't)

When you hit the rate limit, everything returns non-existing - even completely valid URLs

Can we put an increased delay on the checks for AWS buckets to bypass the rate limiting? Do we know at which request rate AWS starts blocking the checks?
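As a sketch, a minimum-interval throttle between bucket checks could look like this (the actual threshold AWS uses is unknown, so the default interval is a guess):

```python
import time

class Throttle:
    """Enforce a minimum delay between consecutive bucket checks.

    The 1.0s default is an assumption - AWS does not document the
    rate limit, so the interval would need tuning by experiment.
    """
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        """Sleep just long enough to honor the minimum interval."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

Calling `throttle.wait()` before each HTTP probe spaces the requests out; whether that is enough to stay under the limit would have to be verified empirically.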

Zoudo avatar Jun 06 '24 19:06 Zoudo

@initstring, have the changes from @nrathaus been merged into main yet?

Zoudo avatar Jun 06 '24 19:06 Zoudo

@Zoudo The fix isn't 100% accurate - it works sometimes, as there is some sort of rate limit. Once you hit it, everything will return NoSuchBucket - even valid buckets

The best fix at the moment, I believe, is to do false-positive and false-negative testing every few requests, but that would require some sort of valid S3 bucket to be used - not sure if this is legal to do, i.e. hardcoding a well-known S3 bucket into the code
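That canary idea could be sketched like so (`probe` and the bucket name are placeholders - this is not a recommendation to hardcode someone else's bucket):

```python
class CanaryChecker:
    """Periodically re-probe a bucket known to exist.

    Every `interval` checks, the canary bucket is probed; if it
    suddenly reports as nonexistent, we assume the rate limit has
    been hit and the caller should pause. `probe` is a hypothetical
    callable returning True when the bucket appears to exist.
    """
    def __init__(self, probe, canary_bucket, interval=10):
        self.probe = probe
        self.canary = canary_bucket
        self.interval = interval
        self.count = 0

    def check(self):
        """Return False when the canary indicates rate limiting."""
        self.count += 1
        if self.count % self.interval == 0:
            return self.probe(self.canary)
        return True
```

The scanner would call `check()` once per enumeration request and back off whenever it returns False.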

nrathaus avatar Jun 07 '24 07:06 nrathaus

I did some investigation with my AWS setup. From what I see, when public access has been completely blocked, NoSuchBucket is always returned

nrathaus avatar Jun 07 '24 11:06 nrathaus

I think the current design/implementation of S3 prevents detection of unknown buckets via keywords - at least that's what I think

nrathaus avatar Jun 07 '24 11:06 nrathaus

I think the current design/implementation of S3 prevents detection of unknown buckets via keywords - at least that's what I think

Thanks @nrathaus - does this mean that if the result is empty, there are no buckets with public access, i.e. they are protected?

I wonder if this changes if we authenticate before the keyword scans.

Zoudo avatar Jun 11 '24 16:06 Zoudo

At the moment, even valid S3 buckets return as non-existing when you hit the rate limit - which appears to happen within 2-3 requests to non-existing buckets with no paths

And a bit later for existing buckets with an invalid path

The only way to know this has happened is to hold a valid S3 bucket and path at hand and see when it stops working

As it stands, I think this feature is no longer feasible unless something changes or someone finds a new way

nrathaus avatar Jun 12 '24 08:06 nrathaus