S3 host is incorrect for other regions
Hello. Thanks for your app.
I have faced an issue with S3 storage. I have registered an S3 bucket in the eu-west-1 region and have these lines in my configuration:
config :arc,
  bucket: "bucketname"

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  s3: [
    scheme: "https://",
    host: "s3-eu-west-1.amazonaws.com",
    region: "eu-west-1"
  ]
But when trying to build the URL, it gives back https://s3.amazonaws.com/bucketname/filename.jpg. This causes an error when trying to access that path:
<Error>
  <Code>PermanentRedirect</Code>
  <Message>
    The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
  </Message>
  <Bucket>bucketname</Bucket>
  <Endpoint>bucketname.s3.amazonaws.com</Endpoint>
  <RequestId>some-request-id</RequestId>
  <HostId>some-host-id</HostId>
</Error>
The reason is these lines in s3.ex:
defp default_host do
  case virtual_host do
    true -> "https://#{bucket}.s3.amazonaws.com"
    _ -> "https://s3.amazonaws.com/#{bucket}"
  end
end
So I have changed the configuration to be:
config :arc,
  asset_host: "https://s3-eu-west-1.amazonaws.com/bucketname"
That solved the issue. Adding virtual_host: true solves it too. But maybe it is possible to reuse the code from ex_aws, since it already has all the needed configuration? If so, I can make a PR for this.
Thanks! If ex_aws solves the domain issue, let's definitely use their implementation. When I built arc, ex_aws couldn't generate the proper host names.
That's how it's done in ex_aws: https://github.com/CargoSense/ex_aws/blob/9e02e72477680dc6f57838b3427c0dc97431e076/lib/ex_aws/s3.ex#L895
But this method is private for some reason.
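One possible direction (just a sketch, not something arc does today): rather than calling that private helper, arc could read the S3 settings ex_aws has already resolved via the public ExAws.Config.new/1. The bucket and key variables below are placeholders, and the exact config keys may differ between ex_aws versions.

```elixir
# Sketch: build the object URL from ex_aws's own resolved configuration
# instead of re-implementing the endpoint logic.
s3_config = ExAws.Config.new(:s3)

# `bucket` and `key` are placeholders for the stored file's bucket and path.
url = "#{s3_config.scheme}#{s3_config.host}/#{bucket}/#{key}"
```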
@sobolevn Thanks! This really helped me work through my problem 👍
So what's the status of this issue?
@alex88 I have tried to resolve this issue with the code from ex_aws, but as I said, the needed function was private. So I gave up and used the :asset_host hack.
@sobolevn did you ask if ex_aws could make the functionality public?
ping @benwilson512
@CrowdHailer nope, did not do that.
> Also adding virtual_host: true solves it too.

If adding virtual_host: true within Arc solves this issue, how would exposing that same method from ExAws make this easier? You'll need to set that property either way.
Can someone elaborate on the root problem here? Why does arc need to reproduce the logic necessary to determine the endpoint URL for S3?
virtual_host: true would not solve it, based on the logic I'm seeing above, if you're outside the us-east-1 region.
Root problem:
Arc needs to generate URLs for files stored within S3 (both signed and unsigned URLs). Not all regions have the same URL structure (some require virtual_host: true).
This has been a source of confusion for many, as it's not clear under which circumstances virtual_host: true is required. It seems to be mostly trial and error across regions.
This is unfortunately a function of the obnoxious rules that AWS has in and around the S3 API. Basically the host is just s3-$region.amazonaws.com unless the region is us-east-1, in which case it's s3.amazonaws.com.
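For illustration only, that rule could be written as a small helper; this is a sketch (the function name is made up here, and it only covers the legacy region endpoints described above):

```elixir
# Legacy S3 endpoint naming as described above:
# us-east-1 -> s3.amazonaws.com, every other region -> s3-<region>.amazonaws.com
defp host_for_region("us-east-1"), do: "https://s3.amazonaws.com"
defp host_for_region(region), do: "https://s3-#{region}.amazonaws.com"
```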
I suppose I don't expose any functionality for having an unsigned URL, maybe that's the primary thing that needs to be added.
It actually looks like the presigned_url logic is out of date too, since it doesn't appear to use the same logic that the actual requests use. Let me look into this today.
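For reference, a presigned URL can also be requested from ex_aws directly, which reuses its own endpoint handling; a sketch, assuming an ex_aws version that exposes ExAws.S3.presigned_url/5 (the bucket name and object key are placeholders):

```elixir
# Sketch: ask ex_aws for a presigned GET URL valid for one hour.
{:ok, url} =
  ExAws.Config.new(:s3)
  |> ExAws.S3.presigned_url(:get, "my-bucket", "uploads/filename.jpg", expires_in: 3600)
```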
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment
On OS X or Linux you would put those export lines in your ~/.bash_profile or ~/.bashrc. I'm not sure what you do on Windows.
With virtual_host: true, uploads don't work for us-west-1 buckets; I get "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
For some reason, with

config :arc,
  storage: Arc.Storage.S3,
  bucket: System.get_env("AWS_S3_BUCKET") || "project-development",
  virtual_host: true

it works fine locally in development, but in production, without any specific arc config in prod.exs, just with another bucket in the same region, I get that error.
Could it be that virtual_host: true only applies to URL generation and not to put operations? Anyway, it's strange that it works with the development bucket but not the production one; they're in the same region.
Nvm, the issue was that locally I had my region set in .aws/config; remotely in production I didn't.
virtual_host is only used by the presign_url function.
The "virtual host" behavior is needed to access the url of a file, regardless of whether it is signed or not.
If you just want to upload a public file to S3, in order to access it, the host & path must be constructed appropriately
virtual_host: true merely places the bucket in the domain name. This is never necessary. The URL can always be http://s3.$aws_host.com/bucket_name/object/path in us-east-1, or http://s3-$region.$aws_host.com/bucket_name/object/path outside us-east-1.
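To make the two addressing styles concrete, here is a sketch using placeholder bucket and region values; both point at the same object:

```elixir
# Path-style: bucket in the path, region in the host (always sufficient, per the comment above).
path_style = "https://s3-eu-west-1.amazonaws.com/bucketname/uploads/filename.jpg"

# Virtual-host style: bucket in the domain name (what virtual_host: true produces).
virtual_host_style = "https://bucketname.s3.amazonaws.com/uploads/filename.jpg"
```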
Interesting. Thanks @benwilson512!
I'm not sure why I thought in some cases it was necessary. If what you say is true then we likely shouldn't be using virtual_host at all here... I'll look into it.
@sobolevn - Did you solve the issue with the EU region? Did you fork arc and change it? What do you mean by the :asset_host hack? Thank you!
@tierralibre
Fix:
config :arc,
  asset_host: "https://s3-eu-west-1.amazonaws.com/bucketname"
Using us-west-2, I get the error when trying to generate a URL (uploading works fine).
The generated URL is https://s3.amazonaws.com/mixdown-dev/uploads/...
Error
<Error>
  <Code>PermanentRedirect</Code>
  <Message>
    The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
  </Message>
  <Bucket>mixdown-dev</Bucket>
  <Endpoint>mixdown-dev.s3.amazonaws.com</Endpoint>
  ...
</Error>
current config:

config :arc,
  storage: Arc.Storage.S3,
  bucket: "mixdown-dev"

config :ex_aws,
  access_key_id: "***",
  secret_access_key: "***",
  region: "us-west-2",
  debug_requests: true
What worked:
Fixed by adding either of these two work-around arc configs:

config :arc,
  virtual_host: true

or

config :arc,
  asset_host: "https://mixdown-dev.s3.amazonaws.com"
Same with eu-west-2. Current working config:

config :arc,
  asset_host: "https://s3-eu-west-2.amazonaws.com/my-bucket",
  storage: Arc.Storage.S3,
  bucket: "my-bucket"

config :ex_aws,
  access_key_id: "*",
  secret_access_key: "*",
  region: "eu-west-2"
I confirm the above configuration works, but only in this order. Took me some hours to find out.
I would avoid hard-coding your access key ID or secret key; both can be picked up from your environment variables. I'm not sure I understand why this is confusing. If you want to use an AWS service you have to specify the region; this is true for every AWS client.
@benwilson512 thanks for your reply. The issue here is that unless I set the configuration this way and in this exact same order, the region is ignored.
@Awea I still can't get this to work with that configuration:
config :arc,
  asset_host: "https://s3-us-east-2.amazonaws.com/gw-dev-admin",
  storage: Arc.Storage.S3,
  bucket: "gw-dev-admin"

config :ex_aws,
  access_key_id: "*",
  secret_access_key: "*",
  regsion: "us-east-2"
Results in:
** (CaseClauseError) no case clause matching: {:ok, %{body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><Bucket>gw-dev-admin</Bucket><Endpoint>gw-dev-admin.s3.amazonaws.com</Endpoint><RequestId>*</RequestId><HostId>*</HostId></Error>", headers: [{"x-amz-bucket-region", "us-east-2"}, {"x-amz-request-id", "*"}, {"x-amz-id-2", "*"}, {"Content-Type", "application/xml"}, {"Transfer-Encoding", "chunked"}, {"Date", "Sun, 03 Sep 2017 12:35:34 GMT"}, {"Server", "AmazonS3"}], status_code: 301}}
OK, sorted out my issues with

config :ex_aws, :s3,
  region: "us-east-2"

as the config instead. Also had to add `sweet_xml` for ExAws to work.
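For completeness, the relevant dependencies might look something like this in mix.exs; a sketch only, with illustrative version numbers:

```elixir
defp deps do
  [
    {:arc, "~> 0.8"},       # upload library discussed in this thread
    {:ex_aws, "~> 1.1"},    # AWS client used by Arc.Storage.S3
    {:hackney, "~> 1.6"},   # HTTP client required by ex_aws
    {:poison, "~> 3.0"},    # JSON parser used by ex_aws
    {:sweet_xml, "~> 0.6"}  # XML parser needed for S3 responses
  ]
end
```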