
POST not allowed when uploading key

grillazz opened this issue 4 years ago • 5 comments

Getting the error below when trying to upload a file.

root@b4cf3fba168e:/app# python main.py
[S3File(key='test.txt', last_modified=datetime.datetime(2021, 10, 25, 12, 5, 32, 928000, tzinfo=datetime.timezone.utc), size=282, e_tag='9e36cc3537ee9ee1e3b10fa4e761045b', storage_class='STANDARD'),]
Traceback (most recent call last):
  File "/app/main.py", line 78, in <module>
    asyncio.run(main())
  File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/app/main.py", line 76, in main
    await s3_demo(client)
  File "/app/main.py", line 18, in s3_demo
    await s3.upload('test/upload-to.txt',b'test')
  File "/usr/local/lib/python3.9/site-packages/aioaws/s3.py", line 111, in upload
    await self._aws_client.raw_post(d['url'], expected_status=204, data=d['fields'], files={'file': content})
  File "/usr/local/lib/python3.9/site-packages/aioaws/core.py", line 69, in raw_post
    raise RequestError(r)
aioaws.core.RequestError: unexpected response from POST "https://host.net/mybucket/": 403, response:
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Policy check failed, variable not met condition: bucket</Message><BucketName>mybucket</BucketName><RequestId>tx000000000000003fdf954-0061769dea-342945885-default</RequestId><HostId>342945885-default-default</HostId></Error>

I was able to list files, as you can see above, but not add a file to the bucket.

Additionally, I can add files to this URI / bucket with the AWS CLI.

grillazz · Oct 25 '21 12:10

What code are you running? What version of aioaws are you using?

samuelcolvin · Oct 25 '21 13:10

Please find the code snippet below. I'm using aioaws 0.11.

import asyncio

from aioaws.s3 import S3Client, S3Config
from httpx import AsyncClient


async def s3_demo(client: AsyncClient):
    s3 = S3Client(client, S3Config(
        '',  # aws_access_key (redacted)
        '',  # aws_secret_key (redacted)
        '',  # aws_region (redacted)
        'host.net/mybucket',  # aws_s3_bucket
    ))
    print([f async for f in s3.list()])
    # upload a file:
    await s3.upload('test/upload-to.txt', b'test')

async def main():
    async with AsyncClient(timeout=30) as client:
        await s3_demo(client)

asyncio.run(main())

grillazz · Oct 25 '21 13:10

Hmm, I think this might be a problem with the bucket name, if it's really host.net/mybucket.

aioaws has logic to use the bucket name as the domain if it contains a dot:

https://github.com/samuelcolvin/aioaws/blob/51bb9e374ab98c8a88dad3ddd73b125197361601/aioaws/core.py#L42-L47

I think (maybe I'm wrong?) that bucket names like host.net/path can't be used as custom URLs for buckets, only bucket names like host.net or my-thing.host.net.
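
Roughly, the linked lines pick the host like this (a paraphrased sketch, not the exact code at that commit; choose_host is just an illustrative name for the logic):

# paraphrased sketch of aioaws's host selection (not verbatim from core.py)
def choose_host(bucket: str, region: str) -> str:
    if '.' in bucket:
        # a dot in the name: the bucket is treated as a custom domain (e.g. a CNAME pointing at S3)
        return bucket
    # no dot: fall back to the standard AWS endpoint
    return f'{bucket}.s3.{region}.amazonaws.com'

choose_host('host.net/mybucket', 'us-east-1')  # -> 'host.net/mybucket', hence the failing POST URL above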

Solutions:

  • create a new bucket without a dot in the name
  • create a new bucket with a name which is a domain and setup the DNS records to access the bucket from that domain
  • create a PR here to make use_custom_bucket_domains an option
  • use a hack like the following (a fuller sketch appears after this list):

    s3 = S3Client(client, S3Config(...))
    s3._aws_client.host = f'{bucket}.s3.{s3._aws_client.region}.amazonaws.com'
    ...
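
For completeness, a fuller sketch of that workaround might look like this (the credentials, region, and bucket name are placeholders, not values from this issue):

# sketch of the host-override workaround; every value below is a placeholder
from aioaws.s3 import S3Client, S3Config
from httpx import AsyncClient

async def upload_with_dotted_bucket(client: AsyncClient) -> None:
    bucket = 'my.dotted.bucket'  # a bucket name containing dots
    s3 = S3Client(client, S3Config('<access key>', '<secret key>', 'us-east-1', bucket))
    # aioaws saw the dot and treated the bucket name as a custom domain;
    # point the client back at the standard AWS endpoint instead
    s3._aws_client.host = f'{bucket}.s3.{s3._aws_client.region}.amazonaws.com'
    await s3.upload('some/key.txt', b'test')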

samuelcolvin · Oct 25 '21 13:10

I can do s3.list() with both region.host.net/bucket and bucket.region.host.net and get:

[S3File(key='test.txt', last_modified=datetime.datetime(2021, 10, 25, 12, 5, 32, 928000, tzinfo=datetime.timezone.utc), size=282, e_tag='9e36cc3537ee9ee1e3b10fa4e761045b', storage_class='STANDARD'),]

So my assumption is that GET is working OK for aioaws in both cases, i.e. it is resolving the host and bucket correctly.

One more thing to add: this is a private S3-compatible deployment, not amazonaws.com.

grillazz · Oct 26 '21 11:10

Hi @samuelcolvin,

First of all, thanks for this SDK. It's really lightweight, easy to use, and solves all the needs of my current project.

But the issue described by @grillazz affects my use case too, because of the dot in my bucket name.

Currently I'm using the workaround you provided in this comment, and it works fine. But it would be nice to have support for bucket names containing dots, since they are valid and accepted by AWS S3.

I'm willing to make a PR that fixes this issue. I just wanted to clarify what you meant by "create a PR here to make use_custom_bucket_domains an option".

Should I add one more field to S3Config called use_custom_bucket_domains and implement the following logic?

bucket = get_config_attr(config, 'aws_s3_bucket')
use_custom_bucket_domains = get_config_attr(config, 'use_custom_bucket_domains')
if use_custom_bucket_domains and '.' in bucket:
    # assumes the bucket is a domain and is already set up as a CNAME record for S3
    self.host = bucket
else:
    # see https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html
    self.host = f'{bucket}.s3.{self.region}.amazonaws.com'
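
A hypothetical usage of the proposed option might then look like this (use_custom_bucket_domains does not exist yet, and the values are placeholders):

# hypothetical usage of the proposed S3Config field; all values are placeholders
s3 = S3Client(client, S3Config(
    '<access key>',
    '<secret key>',
    'us-east-1',
    'my.dotted.bucket',               # bucket name containing dots
    use_custom_bucket_domains=False,  # keep the standard amazonaws.com endpoint
))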

scebotari · Mar 16 '22 16:03