
S3 buckets are not deleting

kthan-EA opened this issue 3 years ago • 3 comments

Hi, I have been trying to delete all the S3 buckets in an account by running the command below:

./aws-nuke-v2.15.0.rc.3-windows-amd64.exe -c config.yaml --no-dry-run

It lists all the buckets but doesn't actually delete any of them. After listing every bucket with a "would remove" entry it just stops and does nothing further. There are close to 250 buckets in that account.

My config.yaml:

# cat config.yaml
regions:
  - ap-southeast-2

accounts:
  "<Account to be deleted>": {} # aws-nuke-example

resource-types:
  targets:
    - S3Object
    - S3Bucket

Just adding to the above: it failed with this error:

runtime: VirtualAlloc of 8192 bytes failed with errno=1455
fatal error: out of memory

kthan-EA avatar Mar 22 '21 08:03 kthan-EA

Hello.

250 buckets is a lot, and I assume each one also contains a lot of objects. aws-nuke does not do anything special for S3 and handles it like any other resource. This means it lists all objects in all buckets and deletes them one by one, which can easily cause memory issues.

Since deleting an S3 bucket also deletes all of its objects, you can simply skip deleting the objects with aws-nuke:

resource-types:
  targets:
  - S3Bucket
  excludes:
  - S3Object

svenwltr avatar Mar 23 '21 12:03 svenwltr

Buckets and aws-nuke can be deceiving.

I should probably raise an issue for this, but the "problem" isn't really a problem. It is basically the following:

  • aws-nuke reaches the deletion phase
  • aws-nuke reaches a bucket with a substantial number of S3Objects (thousands upon thousands of objects)
  • The aws-nuke log appears to "hang" or "stop"

The problem is that aws-nuke doesn't report "in progress" deletions while it is deleting thousands, if not hundreds of thousands, of objects in a bucket. This can be deceiving, because it looks like it has stopped or broken when it hasn't.

👍🏻 You really should exclude S3Object as suggested above; this should reduce memory usage significantly.

Try targeting a single bucket first that isn't too large, to confirm your config and everything else are working as expected. Even then, 250 buckets are going to take a substantial amount of time.
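
If it helps to pick a small bucket for that first test run, here is a minimal boto3 sketch (not part of aws-nuke; it assumes Python with boto3 installed and the same credentials and ap-southeast-2 region as your config) that roughly counts the objects in each bucket. Counting hundreds of thousands of objects page by page takes a while itself, so treat the output as an estimate.

import boto3
from botocore.exceptions import ClientError

# Assumes the same credentials/region as the aws-nuke config above.
s3 = boto3.client("s3", region_name="ap-southeast-2")
paginator = s3.get_paginator("list_objects_v2")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    count = 0
    try:
        # Page through the bucket and sum the per-page key counts.
        for page in paginator.paginate(Bucket=name):
            count += page.get("KeyCount", 0)
    except ClientError as err:
        # Buckets in other regions (or with denied access) will fail here.
        print(f"{name}: could not list ({err})")
        continue
    print(f"{name}: ~{count} objects")

Anything with only a handful of objects is a good candidate for the first targeted run.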

jbarnes avatar Mar 26 '21 10:03 jbarnes

The best thing to do here is to change the object lifecycle for all the S3 buckets: make the objects expire after a few days, then run aws-nuke after that.
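
As a rough illustration of that approach, here is a boto3 sketch (not aws-nuke functionality; the rule ID and the one-day expiry are just placeholders you should adjust) that applies an expire-everything lifecycle rule to every bucket so the objects drain on their own before the nuke run:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="ap-southeast-2")

# One rule that expires current objects, noncurrent versions and stale
# multipart uploads after a day; adjust the day counts to taste.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-everything",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix matches all objects
            "Expiration": {"Days": 1},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
        }
    ]
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.put_bucket_lifecycle_configuration(
            Bucket=name, LifecycleConfiguration=lifecycle
        )
        print(f"lifecycle rule applied to {name}")
    except ClientError as err:
        print(f"{name}: failed ({err})")

Once the buckets have mostly drained, aws-nuke only has to remove the buckets themselves.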

mrkenkeller avatar Apr 16 '21 20:04 mrkenkeller