aws-sdk-js
Amazon S3 bucket: Inaccessible host: "xxxxxxxxxxx" at port `undefined'. This service may not be available in the `us-west-1' region.
I have an issue similar to the one mentioned in #2618.
I had successfully uploaded about 11 images to my Amazon S3 bucket (created the same day the issue began).
When I tried to upload the 12th image, I got the error:
Inaccessible host: "xxxxxxxxxxx" at port `undefined'. This service may not be available in the `us-west-1' region.
I waited an hour or so, tried the call again, and it worked.
Why does this happen?
Is there anything I can do to make this call reliable from now on?
Hey @doverradio, apologies for the late reply. Can you please share your code and some details about your environment? https://github.com/aws/aws-sdk-js/issues/new?assignees=&labels=bug%2Cneeds-triage&template=bug-report.yml&title=%28short+issue+description%29
Hello,
Here is my code:
// express.js controller...
// Assumes `request`, the configured S3 client (`s3`), `getAwsUrl`, and the
// AmazonS3 Mongoose model are required elsewhere in this module.
exports.imageUrlToAmazonS3Url = async (req, res) => {
    let { url } = req.body;
    console.log(url);
    // Check whether we have already mirrored this URL to S3
    AmazonS3.findOne({ Original: url }, function (error, prior_amazonS3_doc) {
        if (error || !prior_amazonS3_doc) {
            // Fetch the remote image as a raw buffer
            request({ url, encoding: null }, (err, resp, buffer) => {
                if (err) return res.json(err);
                getAwsUrl()
                    .then(response => {
                        console.log('Here is the response: ', response);
                        // Setting up S3 upload parameters
                        console.log(`response.key: ${response.key}`);
                        const params = {
                            ContentType: 'image/jpeg', // 'jpeg' alone is not a valid MIME type
                            Bucket: 'myimages',
                            Key: response.key, // File name you want to save as in S3
                            Body: buffer
                        };
                        s3.upload(params, function (err, data) {
                            if (err) {
                                console.log('err', err);
                                res.json(err);
                            } else {
                                console.log(`File uploaded successfully. ${data.Location}`);
                                console.log('data: ', data);
                                data.Original = url;
                                let new_amazon_s3_doc = new AmazonS3(data);
                                new_amazon_s3_doc.save()
                                    .then(amazon_s3_doc => {
                                        let { Location } = amazon_s3_doc;
                                        res.json({ url: Location });
                                    })
                                    .catch(e => console.log('new_amazon_s3_doc e: ', e));
                            }
                        });
                    });
            });
        } else {
            // Already uploaded: return the cached S3 location
            let { Location } = prior_amazonS3_doc;
            res.json({ url: Location });
        }
    });
};
Did you get to the bottom of this? I have a super simple example and I get the same error:
const AWS = require('aws-sdk')

const test = async () => {
  const s3 = new AWS.S3({
    accessKeyId: 'xxx',
    secretAccessKey: 'xxx',
    region: 'eu-west-2'
  })
  const params = {
    Bucket: 'pawpaddock-assets',
    Key: 'test.txt',
    Body: 'test'
  }
  console.log(params)
  await s3.putObject(params).promise()
  console.log('saved!')
}

test()
{ Bucket: 'pawpaddock-assets', Key: 'test.txt', Body: 'test' }
/Users/alex/Projects/paw-paddock/node_modules/aws-sdk/lib/event_listeners.js:547
this.response.error = AWS.util.error(new Error(message), {
^
UnknownEndpoint: Inaccessible host: `pawpaddock-assets.s3.eu-west-2.amazonaws.com' at port `undefined'. This service may not be available in the `eu-west-2' region.
That bucket 100% exists, here is a file from it: https://pawpaddock-assets.s3.eu-west-2.amazonaws.com/images/email/brand-logo.png
Never got to the bottom of this.
It's frustrating because converting an array of image URLs can sometimes take hours: the initial call almost always works, and then it snaps right back into that error.
For me, this was happening when accessing a local S3 server. I got around it by setting s3ForcePathStyle to true.
const s3 = new AWS.S3({
endpoint: "http://localhost:9090",
...
s3ForcePathStyle: true
});
@nakashkumar The s3ForcePathStyle: true setting is the key to getting this working properly and recognizing the port!
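For context, here is a rough, non-SDK sketch of the two addressing styles, which shows why path-style addressing keeps a custom endpoint's host and port intact (the function names are illustrative, not aws-sdk internals):

```javascript
// Illustrative only: how virtual-hosted vs. path-style S3 URLs differ.
// With a custom endpoint like http://localhost:9090, virtual-hosted style
// prepends the bucket to the hostname, which breaks local/minio-style setups.
function virtualHostedUrl(endpoint, bucket, key) {
  const { protocol, host } = new URL(endpoint);
  return `${protocol}//${bucket}.${host}/${key}`;
}

function pathStyleUrl(endpoint, bucket, key) {
  const { protocol, host } = new URL(endpoint);
  return `${protocol}//${host}/${bucket}/${key}`;
}

console.log(virtualHostedUrl('http://localhost:9090', 'myimages', 'test.txt'));
// → http://myimages.localhost:9090/test.txt (bucket folded into the hostname)
console.log(pathStyleUrl('http://localhost:9090', 'myimages', 'test.txt'));
// → http://localhost:9090/myimages/test.txt (bucket stays in the path)
```

With s3ForcePathStyle: true the SDK uses the second form, so the endpoint you configured (including its port) is used verbatim.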
I had the same problem when my server was on ipv6 and bucket endpoint(s) were ipv4.
If you are using dualstack, setting something like this when initialising the SDK should fix it:
const s3 = new AWS.S3({
useDualstackEndpoint: true,
...
});
I guess you could also just use this setting in AWS.config.update.
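A sketch of that alternative (assuming aws-sdk v2, where AWS.config.update applies global defaults to clients created afterwards):

```javascript
const AWS = require('aws-sdk');

// Apply the dualstack setting globally; any S3 client created after this
// inherits it without per-client configuration.
AWS.config.update({ useDualstackEndpoint: true });

const s3 = new AWS.S3(); // picks up the global setting
```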
I randomly run into this error a few times a week. I have a Lambda function which takes a snapshot of a webpage and uploads the snapshot to S3.
I haven't been able to spot any pattern for when the error occurs. At times the Lambdas could be uploading 300 or so images to the bucket around the same time; maybe that has something to do with it?
I'm thinking I just need to add some retry logic to the upload. I can see that the Lambdas which hit the error have successful uploads 20 seconds later.
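A minimal retry sketch along those lines (the helper name and backoff values are illustrative; note that aws-sdk v2 clients also accept built-in maxRetries and retryDelayOptions settings):

```javascript
// Generic retry helper: re-invokes an async operation with exponential
// backoff, which papers over transient "Inaccessible host" failures.
async function withRetries(operation, attempts = 3, baseDelayMs = 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2x, 4x, ... before the next attempt
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage with the upload from above (sketch):
// await withRetries(() => s3.putObject(params).promise());
```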
I got the same issue with MediaStore.
Please make sure that the Region and Bucket name are correct.