aws-sdk-js
Changing Default Providers still results in filesystem reads
- [X] I've gone through Developer Guide and API reference
- [X] I've checked AWS Forums and StackOverflow for answers
- [X] I've searched for previous similar issues and didn't find any solution
- [X] This is an issue with version 2.x of the SDK
Describe the bug
We're running an application on a server without any AWS configuration files on disk. The application may upload millions of files to S3 on a busy day. While looking through an APM tool, we noticed many synchronous disk reads prior to each file upload operation. We looked through the AWS docs and then made the following change:
AWS.CredentialProviderChain.defaultProviders = [
new AWS.EnvironmentCredentials('AWS'),
new AWS.ECSCredentials(),
new AWS.EC2MetadataCredentials()
];
However, the aws-sdk still makes multiple filesystem lookups every time a file is uploaded to S3.
Is the issue in the browser/Node.js? Node.js
If on Node.js, are you running this on AWS Lambda? No
Details of the browser/Node.js version Node.js v14.17.0
SDK version number 2.968.0
To Reproduce (observed behavior)
I suspect this happens for everyone who uploads a file to S3.
Expected behavior
Having reduced the credential provider chain, I would expect zero reads of configuration files on disk.
Even if there were credential files on disk, I would only expect them to be read a single time when the module is instantiated.
Screenshots
Here's a screenshot from our APM tool:
Four files are uploaded to S3 at this point, hence the four related rows. For each row, two filesystem lookups are made:
ENOENT: no such file or directory, open '/root/.aws/credentials'
ENOENT: no such file or directory, open '/root/.aws/config'
Both of these calls use readFileSync(), which, when executed many times, can start to slow the process down.
Additional context
I followed the recommendation on this page: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CredentialProviderChain.html
liked and subscribed
hi @ajredniwja, hope you're doing well :) any update on a fix for this issue?
I am facing this problem as well. Can someone please address this issue? It causes massive lag in the Node.js event loop.
We are experiencing the same issue. We're passing in our AWS access key id and secret access key via a call to config.update(). We do not have the credential files on disk. Yet, each call to upload a file to S3 seems to trigger an attempt to read /root/.aws/credentials and /root/.aws/config. Since neither file exists, this results in two ENOENT errors being logged - every time we upload a file. It seems unnecessary to attempt to read the credentials and config files if we've passed the credentials in explicitly from our app.
@ajredniwja I hope you're doing well. Is there any sort of update on this issue? Multiple people are reporting it, and I'm sure others are experiencing it as well.
@ajredniwja if we just opened a PR to fix this, how long would it take to merge?
Hi there. Any plans to fix it? ETA? Should we fork and temporarily fix it ourselves until an official fix is released?
Still an issue for us as well.
If there is an AWS_SDK_LOAD_CONFIG environment variable, it should be set to "". That is what solved the issue for me.
Read more here: https://github.com/aws/aws-sdk-js/issues/4043
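A minimal sketch of that workaround, assuming the variable is cleared before the SDK is loaded (the SDK only checks the variable's truthiness, so an empty string disables the opt-in):

```javascript
// Sketch of the workaround above: make AWS_SDK_LOAD_CONFIG falsy before the
// SDK is loaded, so the shared-config opt-in gate never opens.
process.env.AWS_SDK_LOAD_CONFIG = "";
// const AWS = require("aws-sdk"); // require the SDK only after the env is set

// Environment variables are strings, and the SDK treats this one as a
// boolean opt-in, so "" disables the shared-config file read:
const optedIn = Boolean(process.env.AWS_SDK_LOAD_CONFIG);
console.log(optedIn); // false
```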
For those interested, I believe you can resolve this issue by either setting the endpointDiscoveryEnabled value in the service config to false or via the environment variable AWS_ENABLE_ENDPOINT_DISCOVERY. The SDK keeps searching each source in turn for any value (true or false), falling back to the config file last. You can read more about it here: https://github.com/aws/aws-sdk-js/issues/3995#issuecomment-1106830467
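To illustrate, here is a rough sketch of the resolution order described above (a hypothetical helper, not the SDK's actual source): client config is checked first, then the environment variable, and only then does the SDK fall through to the shared config file on disk.

```javascript
// Hypothetical sketch (not the SDK's source) of the resolution order:
// any explicit value short-circuits the shared-config file read.
function resolveEndpointDiscovery(clientConfig, env) {
  if (typeof clientConfig.endpointDiscoveryEnabled === "boolean") {
    return clientConfig.endpointDiscoveryEnabled; // set via the service config
  }
  if (Object.prototype.hasOwnProperty.call(env, "AWS_ENABLE_ENDPOINT_DISCOVERY")) {
    return env.AWS_ENABLE_ENDPOINT_DISCOVERY === "true"; // set via the environment
  }
  return null; // in the real SDK, this is where the config file gets read
}

// Setting either value means the disk is never touched:
console.log(resolveEndpointDiscovery({ endpointDiscoveryEnabled: false }, {})); // false
console.log(resolveEndpointDiscovery({}, { AWS_ENABLE_ENDPOINT_DISCOVERY: "false" })); // false
```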
Apologies, this issue fell out of the queue. I can review/post a PR for the fix if it's still an issue with the latest version of the SDK.
> if there is an AWS_SDK_LOAD_CONFIG it should be set to "" that is what solved the issue for me. Read more here: #4043

Worked for me. Thanks for sharing, this saved me a lot of time.
I'm not actually sure why this is the current behavior of the SDK, but I can show where it comes from. I ran a test file that looks like this:

process.env.AWS_STS_REGIONAL_ENDPOINTS = "regional";
process.env.AWS_ENABLE_ENDPOINT_DISCOVERY = "false";
delete process.env["AWS_SDK_LOAD_CONFIG"];

// Wrap readFileSync so every synchronous file read is logged.
const oldFs = require("fs").readFileSync;
const newFs = (path, options) => {
  console.log(path, options, "readFileSync");
  return oldFs(path, options);
};
require("fs").readFileSync = newFs;

const AWS = require("aws-sdk");

const s3 = new AWS.S3({});

const fn = async () => {
  await s3
    .putObject({
      Bucket: "FAKE_BUCKET",
      Key: "FAKE_KEY",
      Body: "FAKE_BODY",
    })
    .promise();
};

fn().catch((err) => {
  console.log(err);
  process.exit(1);
});
I ran that file with a debugger. One thing I noticed instantly is that the fs call to ./aws/credentials does not occur until the .promise() function runs. In other words, you would not see this problem if you used the SDK with callbacks.
This is what my call stack looked like when I discovered the issue:
You can read the source of that final function, resolveRegionalEndpointsFlag, here. You can see that it calls AWS.util.getProfilesFromSharedConfig (source here), which will call iniLoader.loadFrom, which will eventually make a readFileSync call.
Note that the iniLoader.loadFrom call in getProfilesFromSharedConfig is gated behind an if statement that checks whether process.env[util.configOptInEnv] is true. util.configOptInEnv is defined here as AWS_SDK_LOAD_CONFIG. That's why this fix from @mirelaekic worked.
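That gate can be sketched like this (hypothetical helper name, not the SDK's actual source; it just mirrors the truthiness check described above):

```javascript
// Hypothetical sketch of the gate in getProfilesFromSharedConfig:
// disk is only touched when AWS_SDK_LOAD_CONFIG (util.configOptInEnv) is truthy.
function shouldLoadSharedConfig(env) {
  return Boolean(env.AWS_SDK_LOAD_CONFIG);
}

console.log(shouldLoadSharedConfig({ AWS_SDK_LOAD_CONFIG: "1" })); // true
console.log(shouldLoadSharedConfig({ AWS_SDK_LOAD_CONFIG: "" }));  // false -- why setting "" works
console.log(shouldLoadSharedConfig({}));                           // false
```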
Jumping back to resolveRegionalEndpointsFlag, there are a couple of if statements that will prevent it from ever calling getProfilesFromSharedConfig. In other words, there are things other than AWS_SDK_LOAD_CONFIG that can prevent the aws-sdk from trying to read the ./aws/credentials file.

- It will check if originalConfig[options.clientConfig] is true. In my debug stepthrough, options.clientConfig is set to s3UsEast1RegionalEndpoint, which you can read about here. These are the options you passed into AWS.S3's constructor.
- You can do the same thing with Object.prototype.hasOwnProperty.call(process.env, options.env), which checks for the environment variable AWS_S3_US_EAST_1_REGIONAL_ENDPOINT in your environment. This has similar behavior.

Note that the default for those switches is legacy, which implies the other option, regional, is the "correct" option. Based on that, my guess is that following 1 or 2 is a more correct way than setting AWS_SDK_LOAD_CONFIG.
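The two alternatives above can be sketched like this (assuming aws-sdk v2; the per-client option is left as a comment so the sketch has no dependencies):

```javascript
// Sketch of options 1 and 2 above. Option 2 sets the environment variable,
// which the SDK detects with the hasOwnProperty check quoted earlier.
process.env.AWS_S3_US_EAST_1_REGIONAL_ENDPOINT = "regional";

// This mirrors the check described in point 2:
const seen = Object.prototype.hasOwnProperty.call(
  process.env,
  "AWS_S3_US_EAST_1_REGIONAL_ENDPOINT"
);
console.log(seen); // true -- resolveRegionalEndpointsFlag stops before the file read

// Option 1 (per-client, via the S3 constructor config):
// const s3 = new AWS.S3({ s3UsEast1RegionalEndpoint: "regional" });
```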
I don't have enough familiarity with the aws-sdk to know whether this is a bug or a side effect of something necessary. I have also not checked whether this behavior persists in v3 of the SDK. It would be nice to have this documented somewhere if it won't be fixed, because it's a pretty massive footgun.
Having the same issue here. Our APM observed many failing attempts to read the ./aws/credentials and ./aws/config files.
In addition to that, the latest/api/token call blocks S3.GetObject. Does anyone know what the issue is here?
The only thing that worked for me was upgrading the sdk to v3.