aws-nuke
Bucket with 16M items, stuck - running in k8s, container dies
Running aws-nuke in a K8s container (jenkins/slave-jnlp), the container dies after idling for several minutes while the nuke is stuck on very large buckets.
Is there any workaround for this other than skipping the bucket? How does aws-nuke handle S3 — does it build a table of objects to delete first? Maybe it's a problem with the container's storage size or compute power?
Unless you need to prune specific S3 objects, simply exclude the S3 object resource type and have it clean up the S3 buckets in their entirety.
```yaml
resource-types:
  excludes:
    - S3Object
```
Yes, that's what we did, and we also wrote a small boto3 script to handle it: https://github.com/gfrid/expire-large-aws-s3-buckets/blob/main/boto3/expire_s3_bucket.py
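For reference, a minimal sketch of that general approach (not the linked script itself): apply an S3 lifecycle rule that expires every object, so AWS drains the bucket in the background instead of the client issuing millions of delete calls. The bucket name and rule ID below are placeholders.

```python
import boto3

BUCKET = "my-huge-bucket"  # placeholder; substitute the bucket to empty

s3 = boto3.client("s3")

# Lifecycle rule that expires all current objects (and aborts stale
# multipart uploads) after one day. S3 applies this asynchronously,
# so no per-object DeleteObject calls are needed from our side.
# Versioned buckets would also need a NoncurrentVersionExpiration rule.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",      # arbitrary rule name
                "Status": "Enabled",
                "Filter": {"Prefix": ""},       # empty prefix = all objects
                "Expiration": {"Days": 1},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
```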
I'm not sure I understand. What is your goal? To delete the bucket entirely, or to empty the bucket but keep it?
My goal is to empty the bucket and remove it. Since programmatically deleting 16M items would take weeks (by my calculation), everyone advises setting an expiration policy. My question is whether the owner of this app can program such logic; otherwise the case is closed.
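With the expiration-policy route, the bucket itself can be removed afterwards with a plain boto3 call once the lifecycle rule has drained it; `delete_bucket` only succeeds on an empty bucket, so this is just a sketch of that final step (bucket name is a placeholder):

```python
import boto3

BUCKET = "my-huge-bucket"  # placeholder name
s3 = boto3.client("s3")

# delete_bucket fails with BucketNotEmpty until the lifecycle rule has
# expired every object (and every version, if versioning is enabled),
# so check before attempting the delete.
resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
if resp.get("KeyCount", 0) == 0:
    s3.delete_bucket(Bucket=BUCKET)
    print(f"deleted {BUCKET}")
else:
    print(f"{BUCKET} still has objects; expiration not finished yet")
```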
@gfrid if you add the exclude I recommended to your config, then aws-nuke won't iterate the objects; it will just nuke the entire bucket and you'll be done.
@ekristen Can this be added to the README?
> The --exclude flag prevents nuking of the specified resource types.

is confusing.
I would never have thought that by adding something there, it would still be deleted. I know it's a corner case, but...