I have been trying to test an S3 file upload function in a variety of browsers. In Chrome, Firefox, IE, and Safari everything works fine, but it doesn't work in Edge, which gives me the error "Signature we calculated doesn't match Signature that you provided".
Any help would be much appreciated.
Thanks in advance.
@wbyafg
Can you share what version of the SDK you're using, and a code snippet that you're testing with?
Also, what region are you testing the bucket in?
@chrisradek
Earlier I was using 2.2.38; after I encountered this issue I upgraded to 2.3.0, but it still doesn't work.
Here is my configuration:
try {
  AWS.config.update({
    accessKeyId: 'XXXX',
    secretAccessKey: 'XXXX'
  });
  AWS.config.region = 'ap-southeast-1';
  bucket = new AWS.S3({params: {Bucket: 'XXX'}});
} catch (e) {
  log.error(e.stack);
}

var params = {Key: location + '/' + file.name, ContentType: file.type + ';charset=UTF-8', Body: file, ACL: 'public-read'};
var upload = bucket.upload(params, function (err, data) { /* ... */ });
@wbyafg
I tried your code in Edge and haven't been able to reproduce your error. I tested by uploading a ~10MB PDF file. I also tested using both sigv2 (default) and sigv4 signers.
Can you share the actual error you're getting? Also, do you get the same message for all S3 operations? Are you able to try any other operations?
hi @chrisradek
here is the actual error:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I didn't try any other operations.
@chrisradek
I have found the cause but cannot really explain it. In Edge, the signature that goes in the 'Authorization' header is somehow calculated wrongly. Instead of a normal 28-character signature, it actually returns a 36-character signature, with a very weird 8-character string (something like 'BN8AAAAA') appended at the end. After I remove the last 8 characters, it works fine.
Maybe you can clear up my doubt?
I'm seeing this same issue. Happens in latest IE Edge (25.10586.0.0)
Here is an example of the header it's sending, the same problem as @wbyafg reported. (this is an old token that is expired so I'm OK with posting it here)

And here is the response:

Operations against DynamoDB work fine. The S3 operation fails. I was uploading a 1MB jpeg file. I'm using aws-sdk npm package version 2.2.39. The region is us-east-1.
Digging into this a bit more, it looks like the HMAC signature component is actually being created with the Node.js crypto module: https://github.com/aws/aws-sdk-js/blob/e8bd91efa373d943edbf6a0498dc1e386459c64f/lib/util.js#L3
So maybe the bug is actually upstream?
@kevin1024
The browser version of the SDK will use crypto-browserify instead of the node.js module, but that's not a bad place to look. What version of Edge are you using? I've tried reproducing on a windows 10 machine but haven't encountered this yet.
IE Edge version 25.10586.0.0
I am using BrowserStack to reproduce
Holy crap this rabbit hole goes deep. I think the bug might actually be substack/string_decoder.
Actually, weird, I think this is the repo: https://github.com/rvagg/string_decoder
OMG now I'm all the way down to nodejs buffer.toString. WHAT IS HAPPENING.
The problem appears to be happening here: https://github.com/feross/buffer/blob/master/index.js#L792
IE is being detected as having TYPED_ARRAY_SUPPORT and all hell breaks loose.
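For anyone curious, the probe in question looks roughly like this (a paraphrased sketch of older versions of feross/buffer, not the exact source). The idea is to check whether typed array instances can be augmented and whether subarray behaves; an engine can pass this probe and still mishandle buffers elsewhere, which appears to be what old Edge does:

```javascript
// Roughly how older versions of feross/buffer decided whether to back
// Buffer with Uint8Array (paraphrased sketch, not the exact source):
function typedArraySupport() {
  try {
    var arr = new Uint8Array(1);
    arr.foo = function () { return 42; };
    return arr.foo() === 42 &&               // instances can be augmented
      typeof arr.subarray === 'function' &&  // subarray exists
      arr.subarray(1, 1).byteLength === 0;   // subarray isn't broken
  } catch (e) {
    return false;
  }
}

console.log(typedArraySupport()); // true on modern engines
```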
TLDR the fix is to upgrade the buffer library to the latest version. Here is the magic stuff to add to your npm-shrinkwrap file:
"buffer": {
  "version": "4.5.1",
  "from": "buffer@>=4.0.0 <5.0.0",
  "resolved": "https://registry.npmjs.org/buffer/-/buffer-4.5.1.tgz",
  "dependencies": {
    "isarray": {
      "version": "1.0.0",
      "from": "isarray@>=1.0.0 <2.0.0",
      "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz"
    }
  }
}
@kevin1024
Thanks for taking a deep look into the issue! Turns out my machine is using a newer version of Edge (37.14136) and I'm not seeing the issue there. I'll get my hands on another Windows 10 machine to test there as well.
Regardless, probably a good idea for us to update our dependencies. I believe the version of browserify we are using is fairly old, so I'll see what issues we encounter by updating that.
eta on this?
@kentor
There was one issue with updating browserify to the latest version that was related to an issue in the process module that browserify depends on. The module has been updated, so just need it to be published again.
https://github.com/defunctzombie/node-process/pull/57
@kevin1024 @kentor
With the latest version of the SDK, 2.3.11, we updated the version of browserify we use to build the browser version of the SDK. Can you try to reproduce your issue using the latest version of the SDK and report back if it is still broken?
Closing this issue due to inactivity. The browser sdk was updated to use the version of buffer that was reported to work above.
Thanks for the fix!
I'm getting this exact error on Edge 25.10586.0.0, using AWS SDK v2.3.15.
Just tried with SDK v2.4.0 and getting same error (still on Edge 25.10586.0.0).
My authorization header looks like this: a 20-character access key ID, then a 27-character signature followed by =l0kAAAAA:
Authorization: AWS XXXXXXXXXXXXXXXXXXXX:XXXXXXXXXXXXXXXXXXXXXXXXXXX=l0kAAAAA
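To make the corruption concrete (using AWS's documented example access key ID and example SigV2 signature, with the 8-character garbage suffix reported above appended), the broken header carries 36 characters where a valid signature is always 28:

```javascript
// A valid SigV2 S3 Authorization header: "AWS <AccessKeyId>:<28-char base64 sig>"
// (HMAC-SHA1 digests are 20 bytes -> always 28 base64 chars).
function splitAuthHeader(header) {
  const match = /^AWS ([A-Z0-9]{20}):(.+)$/.exec(header);
  if (!match) throw new Error('not a SigV2 Authorization header');
  return { accessKeyId: match[1], signature: match[2] };
}

// Key ID and signature are AWS's documented examples; 'l0kAAAAA' is the
// garbage suffix Edge appends, per the reports in this thread.
const broken = 'AWS AKIAIOSFODNN7EXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg=l0kAAAAA';
const { signature } = splitAuthHeader(broken);
console.log(signature.length);      // 36, not the expected 28
console.log(signature.slice(0, 28)); // the real signature: frJIUN8DYpKDtOLCwo//yllqDzg=
```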
Same issue is happening to me on v2.4.0, even after rolling back to v2.3.11 - extra characters at the end of the Authorization header causing the request to fail.
This appears to not be an issue in v2.1.50.
@elidupuis & @cdl
Thanks for reporting that the issue is still happening. @kevin1024, was it fixed for you?
I'm still just running off 2.2.39 with buffer npm-shrinkwrapped to 4.5.1 and it works great!
Just tested SDK 2.4.0 on Microsoft Edge 38.14366.0.0 (the current preview version) and I'm still getting the same error with the extra characters on the Authorization header (https://github.com/aws/aws-sdk-js/issues/952#issuecomment-226871401).
@elidupuis
Can you share a snippet of the code you're running? I'm testing with Edge 25.10586.0.0 and version 2.4.0 of the SDK. I'm running the s3.upload command with a small file in us-west-2 using sigv4 signing and am not able to reproduce. Are you uploading a large file? Is this error happening on S3 operations that aren't uploads?
https://devcenter.heroku.com/articles/s3-upload-node shows the problem:
function getSignedRequest(file){
  const xhr = new XMLHttpRequest();
  xhr.open('GET', `/sign-s3?file-name=${file.name}&file-type=${file.type}`);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        const response = JSON.parse(xhr.responseText);
        uploadFile(file, response.signedRequest, response.url);
      } else {
        alert('Could not get signed URL.');
      }
    }
  };
  xhr.send();
}
Then:
function uploadFile(file, signedRequest, url){
  const xhr = new XMLHttpRequest();
  xhr.open('PUT', signedRequest);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        document.getElementById('preview').src = url;
        document.getElementById('avatar-url').value = url;
      } else {
        alert('Could not upload file.');
      }
    }
  };
  xhr.send(file);
}
I'm using aws-sdk version 2.4.2 with Edge 25.10586.0.0. I'm seeing this with any image file upload, e.g. a PNG.
The server code is as follows:
app.get('/sign-s3', (req, res) => {
  const s3 = new aws.S3();
  const fileName = req.query['file-name'];
  const fileType = req.query['file-type'];
  const s3Params = {
    Bucket: S3_BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  };
  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if (err) {
      console.log(err);
      return res.end();
    }
    const returnData = {
      signedRequest: data,
      url: `https://${S3_BUCKET}.s3.amazonaws.com/${fileName}`
    };
    res.write(JSON.stringify(returnData));
    res.end();
  });
});
@kiraLinden
Thanks for the additional code, I'll take a look!
Is there any update on this? This is a major issue for us.
@kiraLinden
I still haven't been able to reproduce the issue on multiple Windows 10 devices, and we've updated our browserify to attempt to address this, with mixed results.
Can you provide which version of the SDK you're using, and whether it is a custom build, from our CDN (using a script tag), from bower/npm, or from the browser builder? Can you also share if you're seeing this with files larger than 5 MB or for all files, and if you're specifying a signatureVersion and the region for your S3 client?
I'm wondering if a different version of the buffer module browserify uses is in place based on the way the browser SDK is created. That's something I can look into, but any additional info you can provide to narrow this down is greatly appreciated.
I specified the above: "I'm using aws-sdk version 2.4.2 with Edge 25.10586.0.0 I'm seeing this with any image file upload. I.e. a png (not a large file)." As per the above message, we're just using getSignedUrl and no signatureVersion. A bucket is being specified, not a region. Everything you need to replicate should be in the tutorial link that I put above. @chrisradek
So this is a very different issue from the one this thread concerns. The original issue was with using the s3.upload method from within Edge; you're creating a signed URL on the server and then using it on the front-end.
What region is your bucket in? If your bucket is in a different region than what was specified when you created your S3 client, you may also see that signature mismatch error. If you haven't specified a region (I don't see any mention of that in the tutorial), then by default the SDK uses us-east-1 with S3. The SDK knows how to handle redirects if the wrong region was specified when you call S3 operations directly, but can't handle them when you're generating a URL.
Can you try instantiating your S3 client with the same region that the bucket is located in, and then generating the signed url?
The bucket is in us-west-2, and setting a region does nothing to alleviate this issue (it's set to match the region the bucket is in). This issue is also only occurring on Edge; every other browser handles the same flow correctly.
@kiraLinden
In your case, it looks like Edge isn't sending the Content-Type header when it uploads an image. Other browsers, like Firefox, appear to send it based on the type of the file passed into xhr.send().
I was able to upload a file once I changed your uploadFile function to add
xhr.setRequestHeader('content-type', file.type);
after the xhr.open call.
You could also remove ContentType from your s3 params when you're generating your signed url so that it isn't enforced.
Thanks! It fixed the issue.
Removing ContentType from signed url fixed it for me.
Closing the issue since this appears to be resolved.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.