s3cmd
InvalidAccessKeyId error when using a different S3 host
Hello, I am getting the following error when using DigitalOcean Spaces:

S3 error: 403 (InvalidAccessKeyId): The AWS Access Key Id you provided does not exist in our records.

I configured s3cmd correctly, but it seems s3cmd is ignoring the host configuration for some of the requests, as seen when running s3cmd --debug la --recursive:
[...]
DEBUG: Using ca_certs_file None
DEBUG: Using ssl_client_cert_file None
DEBUG: Using ssl_client_key_file None
DEBUG: httplib.HTTPSConnection() has both context and check_hostname
DEBUG: non-proxied HTTPSConnection(fra1.digitaloceanspaces.com, None)
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri='/', headers={'x-amz-date': 'Mon, 07 Aug 2023 17:30:10 +0000', 'Authorization': 'AWS DO00KVET679D9ZMBVJJ2:NEsl/IwJZdteOOLainYstCcHWJA='}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (https://fra1.digitaloceanspaces.com#1)
DEBUG: Response:
{'data': b'<?xml version="1.0" encoding="UTF-8"?>\n<ListAllMyBucketsResult x'
b'mlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><DisplayName>8' b'703708</DisplayName><ID>8703708</ID></Owner><Buckets><Bucket><Creati' b'onDate>2023-08-07T15:53:42.536Z</CreationDate><Name>ai-space</Name><' b'/Bucket></Buckets></ListAllMyBucketsResult>',
'headers': {'content-length': '311',
'content-type': 'text/xml; charset=utf-8',
'date': 'Mon, 07 Aug 2023 17:30:10 GMT',
'strict-transport-security': 'max-age=15552000; '
'includeSubDomains; preload',
'x-envoy-upstream-healthchecked-cluster': ''},
'reason': 'OK',
 'status': 200}
DEBUG: Bucket 's3://ai-space':
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 07 Aug 2023 17:30:10 +0000\n/ai-space/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(ai-space): ai-space.s3.amazonaws.com
DEBUG: ConnMan.get(): creating new connection: https://ai-space.s3.amazonaws.com
DEBUG: httplib.HTTPSConnection() has both context and check_hostname
DEBUG: non-proxied HTTPSConnection(ai-space.s3.amazonaws.com, None)
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri='/', headers={'x-amz-date': 'Mon, 07 Aug 2023 17:30:10 +0000', 'Authorization': 'AWS DO00KVET679D9ZMBVJJ2:u0G0yY2Bm66nUJVDdl/wuFFnN0Y='}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (https://ai-space.s3.amazonaws.com#1)
DEBUG: Response:
{'data': b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InvalidAcces'
b'sKeyId</Code><Message>The AWS Access Key Id you provided does not ex'
b'ist in our records.</Message><AWSAccessKeyId>[...]</A'
b'WSAccessKeyId><RequestId>FVF544Z0XJYW8Y3H</RequestId><HostId>lHoXkER'
b'JQvUJbbkMRUsjpkwsozrFd8PBE3pbzokIky1BsFCT2kjnTE8prApLESbBfBPGFvZHu5A'
b'=</HostId></Error>',
'headers': {'content-type': 'application/xml',
'date': 'Mon, 07 Aug 2023 17:30:10 GMT',
'server': 'AmazonS3',
'transfer-encoding': 'chunked',
'x-amz-bucket-region': 'us-west-2',
'x-amz-id-2': 'lHoXkERJQvUJbbkMRUsjpkwsozrFd8PBE3pbzokIky1BsFCT2kjnTE8prApLESbBfBPGFvZHu5A=',
'x-amz-request-id': 'FVF544Z0XJYW8Y3H'},
'reason': 'Forbidden',
'status': 403}
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: x-amz-bucket-region: us-west-2
DEBUG: HttpHeader: x-amz-request-id: FVF544Z0XJYW8Y3H
DEBUG: HttpHeader: x-amz-id-2: lHoXkERJQvUJbbkMRUsjpkwsozrFd8PBE3pbzokIky1BsFCT2kjnTE8prApLESbBfBPGFvZHu5A=
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: date: Mon, 07 Aug 2023 17:30:10 GMT
DEBUG: HttpHeader: server: AmazonS3
DEBUG: ErrorXML: Code: 'InvalidAccessKeyId'
DEBUG: ErrorXML: Message: 'The AWS Access Key Id you provided does not exist in our records.'
DEBUG: ErrorXML: AWSAccessKeyId: '[...]'
DEBUG: ErrorXML: RequestId: 'FVF544Z0XJYW8Y3H'
DEBUG: ErrorXML: HostId: 'lHoXkERJQvUJbbkMRUsjpkwsozrFd8PBE3pbzokIky1BsFCT2kjnTE8prApLESbBfBPGFvZHu5A='
ERROR: S3 error: 403 (InvalidAccessKeyId): The AWS Access Key Id you provided does not exist in our records.
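For reference, the settings in ~/.s3cfg that should point s3cmd at Spaces are host_base and host_bucket. A minimal sketch of what I would expect them to look like (the fra1 region is taken from the log above; the key values are placeholders):

host_base = fra1.digitaloceanspaces.com
host_bucket = %(bucket)s.fra1.digitaloceanspaces.com
access_key = <spaces access key>
secret_key = <spaces secret key>
use_https = True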
Same error here, but I'm using Linode Object Storage. Previously, configuring 'website_endpoint' with '%(bucket)s' and a hard-coded region was working for me.
DEBUG: get_hostname(ai-space): ai-space.s3.amazonaws.com
That hostname still points at AWS S3. Try this:
$ s3cmd --host="${S3_HOSTNAME}" --host-bucket='%(bucket)s.'"${S3_HOSTNAME}" <args>
where the S3_HOSTNAME variable is "<region>.digitaloceanspaces.com". See Setting Up s3cmd 2.x with DigitalOcean Spaces :: DigitalOcean Documentation for details.
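For example, with the fra1 region that appears in your debug log (substitute your own region), the exact command from your post would be:

$ export S3_HOSTNAME=fra1.digitaloceanspaces.com
$ s3cmd --host="${S3_HOSTNAME}" --host-bucket='%(bucket)s.'"${S3_HOSTNAME}" la --recursive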
Linode's docs about configuring s3cmd: Using S3cmd with Object Storage | Linode Docs
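Per those docs, the Linode equivalent is the same pair of settings with a Linode endpoint in ~/.s3cfg (a sketch; us-east-1 is just an example region):

host_base = us-east-1.linodeobjects.com
host_bucket = %(bucket)s.us-east-1.linodeobjects.com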
I found a solution: when you configure s3cmd with s3cmd --configure, you need to specify the DNS-style template for accessing the bucket, like this:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars c
an be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket []: %(bucket)s.nyc3.digitaloceanspaces.com
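The answer to that prompt is stored as host_bucket in ~/.s3cfg (and the endpoint prompt as host_base), so the equivalent persistent settings would look roughly like this (using the nyc3 region from the prompt above):

host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com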