"Cannot create buckets using a POST" when trying to delete folder (Google Cloud Storage)
I have s3cmd configured to talk to GCS through the interop feature. The s3cmd sync command works fine, but this command
s3cmd -rv del s3://backup/dailys/
fails with
ERROR: S3 error: 400 (InvalidArgument): Invalid argument.
If I debug it, I see the actual XML response that comes back is:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
<Code>InvalidArgument</Code>
<Message>Invalid argument.</Message>
<Details>Cannot create buckets using a POST.</Details>
</Error>
This doesn't make sense, since I'm not trying to create a bucket. Is this a bug or operator error?
This is the request in the debug:
DEBUG: Sending request method_string='POST', uri=u'/backup/?delete', headers={'x-amz-content-sha256': u'd2b[snip]', 'content-type': 'application/xml', 'Authorization': u'AWS G[snip]io=', 'x-amz-date': 'Wed, 08 Dec 2021 05:12:10 +0000', 'content-md5': u'XyyD1qymyPWl+FcQ7W3YwA=='}, body=(291 bytes)
The access account is a Storage Object Admin, so it should have permission to delete.
Running version 2.2.0.
One thing I noticed is that the delete request seems to be missing the path information. Earlier in the debug output for the same command, it GETs the bucket listing (which does return correctly):
DEBUG: Sending request method_string='GET', uri=u'/backup/?prefix=dailys%2F', headers....
But in the next request in the debug, where it tries to delete, only "delete" appears in the query string, with no mention of the actual path /dailys:
uri=u'/backup/?delete'
I'm guessing that is making S3 think I'm trying to delete the whole bucket instead of a specific path in the bucket(?)
And one last thing: a difference I noticed in the del command between AWS and GCS. I don't understand how it knows what to delete, as neither request seems to pass the path dailys/Wednesday.
AWS:
s3cmd del -v -d --recursive s3://bucket-backup/dailys/Wednesday/
DEBUG: Canonical Request:
POST
/
delete=
...
host:bucket-backup.s3.amazonaws.com
...
DEBUG: get_hostname(bucket-backup): bucket-backup.s3.amazonaws.com
DEBUG: ConnMan.get(): re-using connection: https://bucket-backup.s3.amazonaws.com#1
DEBUG: format_uri(): /?delete
DEBUG: Sending request method_string='POST', uri=u'/?delete',.....
...
INFO: Deleted 4 objects (38537719) from s3://bucket-backup/dailys/Wednesday/
GCS:
s3cmd del -d --recursive s3://bucket-backup/dailys/Tuesday/
DEBUG: Canonical Request:
POST
/bucket-backup/
delete=
...
host:storage.googleapis.com
...
DEBUG: Processing request, please wait...
DEBUG: get_hostname(bucket-backup): storage.googleapis.com
DEBUG: ConnMan.get(): re-using connection: https://storage.googleapis.com#2
DEBUG: format_uri(): /bucket-backup/?delete
DEBUG: Sending request method_string='POST', uri=u'/bucket-backup/?delete',
...
DEBUG: ErrorXML: Code: 'InvalidArgument'
DEBUG: ErrorXML: Message: 'Invalid argument.'
DEBUG: ErrorXML: Details: 'Cannot create buckets using a POST.'
ERROR: S3 error: 400 (InvalidArgument): Invalid argument.
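(My guess, based on the body=(291 bytes) in the debug line above rather than anything GCS returned: the keys to delete are carried in the XML body of the multi-object delete POST, not in the URI. The keys below are made up for illustration.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Object><Key>dailys/Wednesday/file1</Key></Object>
  <Object><Key>dailys/Wednesday/file2</Key></Object>
</Delete>
```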
Any resolution on this issue? I am also facing the same problem.
Nope. My solution was to move to rclone. Works great for both GCS and AWS.
Any update on this bug? We are facing the same error with the S3 Go library against a GCP bucket.
I'm investigating a similar issue in different software, so I thought I would drop a comment here with my findings:
s3cmd probably uses DeleteObjects to delete objects in batch when deleting a "folder" (a group of keys with a shared prefix). This call is not supported by GCS (see the last line of the table in the GCS compatibility doc). The workaround would be to issue many individual DeleteObject calls when interacting with GCS, possibly configured via an environment variable or some bucket-related configuration.
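A minimal sketch of that workaround: list the keys under the prefix, then issue one DeleteObject call per key instead of a single batch POST /?delete. The client here is only assumed to follow the boto3-style "s3" client shape (list_objects_v2 / delete_object); the function name and structure are illustrative, not s3cmd's actual code.

```python
def delete_prefix(s3, bucket, prefix):
    """Delete every object under `prefix`, one DeleteObject call per key,
    instead of a batch DeleteObjects POST (which GCS's S3 interop rejects)."""
    deleted = 0
    token = None
    while True:
        kwargs = {"Bucket": bucket, "Prefix": prefix}
        if token:
            kwargs["ContinuationToken"] = token
        page = s3.list_objects_v2(**kwargs)
        for obj in page.get("Contents", []):
            # One DeleteObject per key -- supported by GCS interop.
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
            deleted += 1
        if not page.get("IsTruncated"):
            return deleted
        token = page["NextContinuationToken"]
```

With boto3 you would build the client with something like `boto3.client("s3", endpoint_url="https://storage.googleapis.com")` plus HMAC interop credentials; the exact endpoint and credential setup are assumptions here, so check the GCS interoperability docs.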