Confirm by changing [ ] to [x] below to ensure that it's a bug:
Describe the bug
Is the issue in the browser/Node.js?
Node.js
If on Node.js, are you running this on AWS Lambda?
No
Details of the Node.js version
v10.7
SDK version number
v2.585.0
To Reproduce (observed behavior)
Today, I updated aws-sdk in my project from version 2.555.0 to the latest version, 2.585.0. Then I noticed that the function `cognito.getOpenIdTokenForDeveloperIdentity` runs much longer than expected:

- Before the update: 200-300 ms to get the result
- After the update: 4-5 s to get the result

The following is the code snippet that I use; the application is running on an EC2 machine.
```javascript
const Aws = require('aws-sdk');

async function getToken(username) {
  const cognito = new Aws.CognitoIdentity({ region: process.env.AWS_REGION });
  const Logins = {
    'my-logins': username
  };
  const result = await cognito.getOpenIdTokenForDeveloperIdentity({
    IdentityPoolId: process.env.IDENTITY_POOL_ID,
    Logins,
    TokenDuration: 20,
  }).promise();
  return result.Token;
}
```
A colleague of mine found out that the problem has occurred since version 2.575.0.
Hey @khacminh, thank you for reaching out to us with your issue.
Can you please provide the logs? To do that, you can do something like the following:

```javascript
var cognito = new AWS.CognitoIdentity({ logger: console });
```
Hi @ajredniwja, here is the log that I got:

```
[AWS cognitoidentity 200 5.272s 0 retries] getOpenIdTokenForDeveloperIdentity({
  IdentityPoolId: 'my-pool-id',
  Logins: {
    'my-logins': 'xxxxxx'
  },
  TokenDuration: 20
})
```
@khacminh, do you get the same output when you run it locally?
@ajredniwja it only takes just over 1 second when I run it from my PC (with an Access Key ID and Secret Access Key provided). On the EC2 machine, I attached an IAM role to the machine instead of providing access keys.
Hey @khacminh, I was not able to reproduce this. Is there any additional context you can provide?
Hi @ajredniwja ,
I created a new EC2 machine running Ubuntu 18.04 and ran the script with Node.js 10.7, and as you said, there is no problem with it. It took only 100 ms to get the OpenId token.
Then I installed Docker and ran the script inside a container based on the image mhart/alpine-node:10.7, and it took about 5 s to run the script.
@ajredniwja
I tried with the official base image node:12.16.0-alpine3.11 and hit the same problem.
Hi @ajredniwja
Any update on this issue? Today I hit the same issue with `CognitoIdentityServiceProvider.adminGetUser()`.
Hey @khacminh, I believe the SDK didn't make any major changes, so this is likely about how EC2 is handling things. I have reached out to the service team and will update you once I hear back from them.
For record keeping, this is possibly the same problem as #3223.
Hey @khacminh and @wdittmer-mp
Can you collect some logs using something like:

```shell
NODE_DEBUG=cluster,net,http,fs,tls,module,timers node app.js
```
Hi, any news on this?
Hey @wdittmer-mp, sorry for the late response. Can you try supplying the credentials explicitly, so that you don't touch the metadata server at all? This might not be related to the SDK but to how communication with the instance metadata server is done. Also, can you confirm whether it is still the case with the latest version of the SDK?
I can try out the latest version of the SDK, yes.
With v2.686.0 it is still slow:

```
[AWS s3 200 4.294s 0 retries] putObject({
```

686.txt
I am not sure what you mean by supplying the credentials explicitly. I use the following:
```javascript
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID;
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
const awsRegion = process.env.AWS_DEFAULT_REGION || 'eu-west-1';
const awsBucketName = 'YOUR_BUCKET_NAME';

const s3Options = {
  accessKeyId: awsAccessKeyId,
  secretAccessKey: awsSecretAccessKey,
  region: awsRegion,
  s3ForcePathStyle: true,
};
```
I don't know how to be more explicit 😅.
@ajredniwja
Here is my testing environment:
```dockerfile
FROM node:12.16.0-alpine3.11

RUN mkdir -p /opt/app

ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV

WORKDIR /opt
COPY package.json package-lock.json ./
RUN npm install --only=prod && npm cache clean --force

WORKDIR /opt/app
COPY ./index.js /opt/app

ENV NODE_DEBUG=cluster,net,http,fs,tls,module,timers
CMD [ "node", "index.js" ]
```
```javascript
const Aws = require('aws-sdk');

async function getToken(username) {
  const cognito = new Aws.CognitoIdentity({ region: process.env.AWS_REGION, logger: console });
  const Logins = {
    'my-logins': username
  };
  const result = await cognito.getOpenIdTokenForDeveloperIdentity({
    IdentityPoolId: process.env.IDENTITY_POOL_ID,
    Logins,
    TokenDuration: 20,
  }).promise();
  return result.Token;
}

console.log('================= Start ====================');
getToken('test-username').then(() => {
  console.log('================= Done ====================');
  process.exit(0);
});
```
Comparing the 2.574 and 2.575 logs, the following may be what makes the difference:

Hi @ajredniwja, can you share if investigation is still in progress, something has been found or if you require additional info?
Apologies, I lost track of this issue. I was not able to find the root cause, so I am reaching out to the service team for help; I will update you once I hear back from them.
Hi, is there any news on this?
I reached out to the service team and am awaiting a reply from them, as the SDK doesn't do anything significantly different between those versions that could affect this.
Hey @khacminh, @wdittmer-mp, @azimiester
Can you guys try adding the following to your code:

```javascript
AWS.MetadataService.disableFetchToken = true;
```

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/MetadataService.html#disableFetchToken-property

If that doesn't work, can you please provide the impacted instance ID and the time period?
This issue has not received a response in 1 week. If you still think there is a problem, please leave a comment to avoid the issue from automatically closing.
Hi @ajredniwja ,
I tried your suggestion but got the same result:
```javascript
const Aws = require('aws-sdk');
Aws.MetadataService.disableFetchToken = true;

async function getToken(username) {
  const cognito = new Aws.CognitoIdentity({ region: process.env.AWS_REGION, logger: console });
  const Logins = {
    'my-logins': username
  };
  const result = await cognito.getOpenIdTokenForDeveloperIdentity({
    IdentityPoolId: process.env.IDENTITY_POOL_ID,
    Logins,
    TokenDuration: 20,
  }).promise();
  return result.Token;
}

console.log('================= Start ====================');
getToken('test-username').then(() => {
  console.log('================= Done ====================');
  process.exit(0);
});
```
```
[AWS cognitoidentity 200 4.696s 0 retries] getOpenIdTokenForDeveloperIdentity({
  IdentityPoolId: 'us-east-2:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
  Logins: { 'my-logins': 'test-username' },
  TokenDuration: 20
})
```
I also tried with aws-sdk version 2.733 and node:12.18.3-alpine3.12 and received the same result.
Hi @ajredniwja,
I was unavailable for a bit. I tried your suggestion, but I did not see any improvement either.
It still takes 4+ seconds:
```
[AWS s3 200 4.546s 0 retries] putObject({
  Body: <Buffer 7b 22 69 64 22 3a 22 74 65 73 74 49 64 31 22 2c 22 6b 65 79 22 3a 22 76 61 6c 75 65 31 22 2c 22 6b 65 79 32 22 3a 22 6c 6f 6f 6f 6f 6f 6f 6f 6f 6f 6f ... 43 more bytes>,
  Bucket: 'YOUR_TEST_BUCKET',
  Key: 'TEST_WRDtestId1.json',
  ContentType: 'application/json'
})
```
The instance-id should be: i-0bd45f49e548fc04b
Around 14:45 Wed 26 Aug CEST.
The instance will most likely be destroyed tonight to save cost for the test environment.
If you need an instance to be up, I will try to get permission on a different environment.
Kind regards,
Wilfred
This appears to be related to IMDSv2 being the default starting with 2.575.0. I can see that with the new `fetchMetadataToken` method added in that release.
I can reproduce this issue by executing `curl -XPUT 'http://169.254.169.254/latest/api/token' -H 'x-aws-ec2-metadata-token-ttl-seconds: 21600'` inside a Docker container running in EC2; it never responds. In the SDK, the request times out after 1 second and is retried 3 times, which is where the ~4 seconds comes from. If I execute the same command on the host, it responds immediately.
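The arithmetic can be sanity-checked with a small sketch (the 1 s timeout and 3 retries are taken from the observation above; the total is an approximation that ignores per-attempt overhead):

```javascript
// Sketch: where the ~4 s of extra latency comes from.
// Assumption: each IMDSv2 token fetch attempt waits out its full 1 s
// timeout before the SDK retries, with 3 retries after the initial attempt.
const tokenFetchTimeoutMs = 1000; // per-attempt timeout
const retries = 3;
const attempts = 1 + retries;
const extraLatencyMs = attempts * tokenFetchTimeoutMs;
console.log(`expected extra latency: ~${extraLatencyMs} ms`); // ~4000 ms
```

That matches the 4-5 s figures reported in the logs above once normal request time is added on top.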
The reason why IMDSv2 does not work from inside a Docker container is explained here: https://stackoverflow.com/a/62326320/13124514
I've also noticed that setting `AWS.MetadataService.disableFetchToken = true` doesn't actually modify `self.disableFetchToken` inside the `loadCredentials` method, which is why setting it doesn't do anything. If it did set it correctly, that would resolve our issue by keeping us on IMDSv1.
So it seems there are two issues at play:
1. `curl -XPUT 'http://169.254.169.254/latest/api/token'` from inside a Docker container never responds
2. `disableFetchToken = true` has no effect

EDIT: This is probably unsupported, but it works: `AWS.MetadataService.prototype.disableFetchToken = true`
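One possible infrastructure-side fix, sketched under the assumption that the container's extra network hop is the cause (per the Stack Overflow link above): raising the IMDS hop limit lets the token PUT response reach processes inside the container. The instance ID below is a placeholder.

```shell
# Sketch: let IMDSv2 PUT responses traverse one extra hop (the Docker
# bridge). Requires the AWS CLI and permission to modify instance
# metadata options; i-0123456789abcdef0 is a placeholder instance ID.
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-put-response-hop-limit 2 \
  --http-endpoint enabled
```

This keeps IMDSv2 enabled rather than forcing a fallback to IMDSv1, but it changes instance-level settings, so it may not be an option in every environment.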
Hi @ajredniwja,
I tried @mhassan1's workaround and it does solve the problem. Could you please take a look at @mhassan1's explanation? If it makes sense to you, please escalate this issue as a bug.
I followed the discussion/links from @mhassan1's Stack Overflow answer (thanks!).
In https://github.com/aws/aws-sdk-ruby/issues/2177 they also discuss the relation to running your own Kubernetes cluster, which we are using. I have not found a definitive solution yet (I don't like pinning the SDK).
@chartrand22 also talks about IMDSv2 in https://github.com/aws/aws-sdk-js/issues/3024