Can't delete items from S3 bucket
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
After creating a bucket in LocalStack, I can save objects to that bucket from my client (a Python backend using boto3), get the head of an object, and list the objects in the bucket, but I cannot delete that exact same object; the delete fails with a 404 Not Found error.
From delete_object I get the following error: botocore.exceptions.ClientError: An error occurred (404) when calling the DeleteObject operation: Not Found
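A minimal sketch of that sequence, assuming an illustrative endpoint, bucket name, and the UUID key mentioned later in this thread (the real setup uses its own values):
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", endpoint_url="http://localstack:4566/", region_name="us-east-1", config=Config(signature_version=UNSIGNED))
key = "38800f86-9c2d-4a8b-9eb6-98869655b9b9"
s3.put_object(Bucket="sample-bucket", Key=key, Body=b"payload")  # succeeds
s3.head_object(Bucket="sample-bucket", Key=key)                  # succeeds
s3.list_objects_v2(Bucket="sample-bucket")                       # succeeds
s3.delete_object(Bucket="sample-bucket", Key=key)                # raises ClientError: 404 Not Found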
Expected Behavior
I would expect that I can also delete and list the objects in the bucket from my client
How are you starting LocalStack?
With a docker-compose file
Steps To Reproduce
How are you starting localstack (e.g., bin/localstack command, arguments, or docker-compose.yml)
Deploying the latest helm chart on EKS
Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://sample-bucket
Then, from the Python client using boto3, I perform the following steps (assuming KEY is an existing key):
s3_client = boto3.client("s3", endpoint_url="http://localstack:4566/", region_name="us-east-1", config=Config(signature_version=UNSIGNED))
s3_client.delete_object(Bucket='sample-data', Key=KEY)
Also failing is:
s3_client.list_buckets()
Environment
- OS: deploying it on EKS
- LocalStack: 0.14.2 (from the latest helm chart)
Anything else?
When performed against an actual S3 bucket, all of the above operations work with the same client code.
Welcome to LocalStack! Thanks for reporting your first issue; our team will work towards fixing the issue for you or may reach out for more background information. We recommend joining our Slack Community for real-time help, and dropping a message to LocalStack Pro Support if you are a Pro user! If you are willing to contribute towards fixing this issue, please have a look at our contributing guidelines and our developer guide.
Thanks for reporting, @RichSchulz. Can you please share the value of KEY, as well as the steps performed to create an object under that key? Potentially, the issue could be related to the key value, e.g. if it contains any special characters or multiple slashes in sequence (...//...). Thanks
Thanks for the reply @whummer. The key was created with the same client using s3_client.put_object(Body=file, Bucket=self.bucket_name, Key=key). The key used was a PostgreSQL-generated UUID (one example key which didn't work was 38800f86-9c2d-4a8b-9eb6-98869655b9b9). Hope that helps; let me know if you need more information.
This is bizarre. I have this very same problem and it only exists in the latest release (1.14.3). My project's integration tests all pass fine with 1.14.2, but a bump to 1.14.3 results in:
botocore.exceptions.ClientError: An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty
...even after I've explicitly called .delete_object() on everything in said bucket:
for obj in client.list_objects(Bucket=name).get("Contents", []):
    client.delete_object(Bucket=name, Key=obj["Key"])
This is the crazy part. If I throw out all of my code and just do a simple demo script, it works fine and I've not been able to tinker with it to make it fail as above. Here's the demo I've been playing with:
from pathlib import Path
import boto3
s3 = boto3.client("s3", endpoint_url="http://localstack:4566/")
bucket_name = "test"
s3.create_bucket(Bucket=bucket_name)
path = Path("/etc/alpine-release")
print(f"Uploading {path}")
s3.upload_file(path.as_posix(), bucket_name, path.name)
for obj in s3.list_objects(Bucket=bucket_name).get("Contents", []):
    print(f"Deleting {obj['Key']}")
    s3.delete_object(Bucket=bucket_name, Key=obj["Key"])
print(s3.list_objects(Bucket=bucket_name).get("Contents", []))
s3.delete_bucket(Bucket=bucket_name)
Now obviously, this isn't super-helpful. I was hoping to be able to provide an example of a failure but after a few hours of poking at this thing, I've yet to be able to (reliably) reproduce the problem with a simpler script. I'd share my whole codebase, but unfortunately it's not mine to share :-(
What I can tell you is that in my project my tests:
- Create a test bucket
- Write a file to said bucket with .upload_file()
- Verify that it was written
- Delete all files in the bucket
- Delete the bucket
On 1.14.2, this works as you'd expect. On 1.14.3, however, the last step fails, complaining that the bucket isn't empty, despite my having called .delete_object() on every file in the bucket at the time. The tests are serial, so there's no parallelisation at play here either.
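One thing worth ruling out (a sketch only, not a confirmed cause): list_objects returns at most 1000 keys per call, so if a test ever writes more than one page of objects, a paginator with batched delete_objects calls makes sure every key is removed before DeleteBucket. Names here are illustrative:
import boto3

s3 = boto3.client("s3", endpoint_url="http://localstack:4566/")
bucket = "test"

# Page through every key and delete them in batches (up to 1000 per request).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    contents = page.get("Contents", [])
    if contents:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": [{"Key": obj["Key"]} for obj in contents]})
s3.delete_bucket(Bucket=bucket)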
I'm experiencing the exact same issue as @limedaniel with version 1.1.0
same issue in 1.2.0
same issue in 1.2.1.dev
Hi @limedaniel, @peterson-dc, @edanisko and @anugrahsinghal, do you still encounter the issue with our latest version? Also, just wanted to confirm, the buckets you created are not versioned?
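If versioning does turn out to be enabled on those buckets, a plain delete_object only adds a delete marker, so DeleteBucket can still report BucketNotEmpty. A minimal sketch of emptying a versioned bucket, assuming an illustrative bucket name:
import boto3

s3 = boto3.client("s3", endpoint_url="http://localstack:4566/")
bucket = "test"

# Delete every object version and every delete marker, then drop the bucket.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=bucket):
    for entry in page.get("Versions", []) + page.get("DeleteMarkers", []):
        s3.delete_object(Bucket=bucket, Key=entry["Key"], VersionId=entry["VersionId"])
s3.delete_bucket(Bucket=bucket)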
Hello 👋! It looks like this issue hasn’t been active in longer than two months. We encourage you to check if this is still an issue in the latest release. In the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to provide a comment or upvote with a reaction on the initial post to prevent automatic closure. If the issue is already closed, please feel free to open a new one.
A possible solution for this problem is in this thread: LocalStack requires a / as a prefix for the bucket name.
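If that comment refers to path-style addressing (the bucket name appearing as a /bucket path on the endpoint rather than as a subdomain), boto3 can be forced into that style; a sketch under that assumption:
import boto3
from botocore.config import Config

# Force path-style URLs (http://localstack:4566/bucket-name/key) instead of
# virtual-hosted-style URLs (http://bucket-name.localstack:4566/key).
s3 = boto3.client(
    "s3",
    endpoint_url="http://localstack:4566/",
    config=Config(s3={"addressing_style": "path"}),
)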