
Bucket with 16M items, stuck - running in k8s, container dies

Open · gfrid opened this issue 2 years ago · 5 comments

Running aws-nuke in a K8s container (jenkins/slave-jnlp). The container dies after idling for several minutes; the nuke gets stuck on buckets with huge object counts.

Is there any workaround for this other than skipping? How does aws-nuke work with S3: does it build a table of what to delete first? Maybe it's a problem with container storage size or compute power?

gfrid avatar Apr 08 '22 13:04 gfrid

Unless you need to prune specific S3 objects, simply exclude the S3 object resource type and have aws-nuke clean up the buckets in their entirety:

resource-types:
  excludes:
    - S3Object

ekristen avatar Apr 26 '22 15:04 ekristen

Yes, that's what we did, and we also wrote a small boto3 script to handle it: https://github.com/gfrid/expire-large-aws-s3-buckets/blob/main/boto3/expire_s3_bucket.py
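
For reference, a minimal sketch of the same lifecycle approach (the bucket name is a placeholder; the linked script is the full version):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-huge-bucket"  # placeholder name

# Instead of deleting 16M objects one API call at a time, ask S3 to
# expire everything itself. Lifecycle rules are applied asynchronously
# by S3, typically within a day or so, at no request cost.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # match every object
                "Expiration": {"Days": 1},
                # Also age out old versions and unfinished uploads,
                # which per-object deletes would otherwise leave behind.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)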

gfrid avatar Apr 27 '22 11:04 gfrid

I'm not sure I understand. What is your goal? To delete the bucket entirely, or to empty the bucket but keep the bucket?

ekristen avatar Apr 27 '22 13:04 ekristen

> I'm not sure I understand. What is your goal? To delete the bucket entirely, or to empty the bucket but keep the bucket?

My goal is to empty the bucket and then remove it. Since programmatically deleting 16M items would take weeks (by our calculation), everyone advises setting an expiry policy instead. My question is whether the owner of this app can program such logic; otherwise, the case is closed.
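
Assuming the expiry policy from the script above has done its work, the removal itself is then a single call (a sketch; the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# delete_bucket only succeeds once the bucket is completely empty,
# i.e. after the lifecycle rules have expired every object (and, on a
# versioned bucket, every noncurrent version and delete marker).
s3.delete_bucket(Bucket="my-huge-bucket")  # placeholder name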

gfrid avatar Apr 28 '22 03:04 gfrid

@gfrid if you add the exclude I recommended to your config, aws-nuke won't iterate the objects; it'll just nuke the entire bucket and you'll be done.
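
To illustrate why iterating is the expensive part, this is roughly what per-object deletion entails (a sketch with a placeholder bucket name, not aws-nuke's actual code):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-huge-bucket"  # placeholder name

# Per-object deletion means listing every key and deleting in batches of
# at most 1,000 keys (the DeleteObjects limit): for 16M objects that is
# roughly 16,000 list pages and 16,000 delete calls. Excluding S3Object
# stops aws-nuke from tracking each object as an individual resource.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    batch = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if batch:
        s3.delete_objects(Bucket=BUCKET, Delete={"Objects": batch})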

ekristen avatar Apr 28 '22 03:04 ekristen

@ekristen Can this be added to the README?

The statement "The --exclude flag prevents nuking of the specified resource types." is confusing.

I would never have thought that by adding something there, it would still get deleted. I know, corner case, but...

alexandrosgkesos avatar Jan 31 '23 15:01 alexandrosgkesos