aws-sdk version: 2.213.1
node version: 6.12.3
I'm trying to upload a file in multiple parts using s3.upload, but I always get a TimeoutError.
When I use the simple s3.putObject operation it works normally, and my internet connection is in good condition.
My code:
var fs = require('fs');
var zlib = require('zlib');
var AWS = require('aws-sdk');

AWS.config.update({
  accessKeyId: uploadOptions.credentials.AccessKeyId,
  secretAccessKey: uploadOptions.credentials.SecretAccessKey,
  sessionToken: uploadOptions.credentials.SessionToken
});

// Gzip the file on the fly; the stream has no known length,
// so the SDK uses a multipart upload.
var body = fs.createReadStream('./wordpress-4.9.4-pt_BR.zip').pipe(zlib.createGzip());
var s3 = new AWS.S3({ logger: console });
var params = {
  Body: body,
  Bucket: uploadOptions.bucket,
  Key: uploadOptions.key
};
s3.upload(params, function (err, data) {
  if (err) return console.log("An error occurred", err);
  console.log("Uploaded the file at", data.Location);
});
Output:
[AWS s3 200 1.197s 0 retries] createMultipartUpload({ Bucket: 'secret',
Key: 'root/d04gzWZVPQ/d04gzWZVPQ/B1YmVaXxL2.bin' })
[AWS s3 undefined 484.259s 3 retries] uploadPart({ Body:
PartNumber: 1,
Bucket: 'upaqui',
Key: 'root/d04gzWZVPQ/d04gzWZVPQ/B1YmVaXxL2.bin',
UploadId: 'jAm_Cwia8gXfR4wpILPatWgOBSYzqZdG7IFdbtEjV.lq_TjOzDVk7M.JuSa.DN7rCEU8rml42Ttet6IYSvIHkwAdDXtPyO8oHHdSno3y_gXy5fyEsL7gDn25Dwc6XmYW0B5TN6N7QyUSNXSQRTr3TA--' })
An error occurred { TimeoutError: Connection timed out after 120000ms
at ClientRequest.
at ClientRequest.g (events.js:292:16)
at emitNone (events.js:86:13)
at ClientRequest.emit (events.js:185:7)
at TLSSocket.emitTimeout (_http_client.js:630:10)
at TLSSocket.g (events.js:292:16)
at emitNone (events.js:86:13)
at TLSSocket.emit (events.js:185:7)
at TLSSocket.Socket._onTimeout (net.js:338:8)
at ontimeout (timers.js:386:11)
message: 'Connection timed out after 120000ms',
code: 'TimeoutError',
time: 2018-03-25T23:26:23.711Z,
region: 'us-east-1',
hostname: 'upaqui.s3.amazonaws.com',
retryable: true }
I am using temporary credentials created with this policy:
var policy = {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::" + Environment.config.bucket.name
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::secret/root/" + HashId.encode(business_id) + "/" + HashId.encode(user_id) + "/*"
      ]
    }
  ]
};
What could be happening? Can you help me?
I noticed that the problem only happens at my house; could it be an issue with my modem?
@leonetosoft The default request timeout is two minutes, and the SDK's HTTP handler will cancel any request that exceeds this maximum duration. Multipart upload requires that each chunk be at least 5 MB (i.e., the chunks being uploaded cannot be made any smaller), so you may need to increase the timeout to something larger.
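For example, something like this (a sketch; the ten-minute value is an assumption to tune for your connection speed):

var AWS = require('aws-sdk');

// Raise the per-request timeout from the 120000 ms default.
var s3 = new AWS.S3({
  httpOptions: { timeout: 600000 } // 10 minutes; adjust as needed
});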
@leonetosoft I have found that on slow internet connections, multipart uploads fail unless limited to one concurrent part. Using Chrome's network throttling, on Slow 3G connections I limit the simultaneous parts to 1.
In my code I detect the upload speed and then adjust the number of concurrent parts according to the user's speed; where the speed is very slow, only one part goes at a time.
Another issue you may encounter: if the upload lasts longer than an hour, you will need to refresh the credentials.
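Concretely, that means passing options to s3.upload. A minimal sketch, assuming params is built as in the original code (partSize and queueSize are real options of the managed uploader; the values here are illustrative):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// One 5 MB part in flight at a time instead of the default four.
s3.upload(params, { partSize: 5 * 1024 * 1024, queueSize: 1 }, function (err, data) {
  if (err) return console.log("An error occurred", err);
  console.log("Uploaded the file at", data.Location);
});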
@jeskew @paolavness
Thanks for the answers. I set { queueSize: 1 } and I'm no longer having the problem.
For example, if I am sending a 1 GB file and then close the application, when I reopen it I want to continue from where it stopped. Is S3 able to recognize a part that was already completed and move on to the next one?
All the resources I find always restart from 0. :(
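For what it's worth, s3.upload itself has no built-in resume, but the low-level multipart API can pick up an unfinished upload if you persisted the UploadId yourself. A rough sketch (savedUploadId, getPartBody, and totalParts are hypothetical, and listParts pagination is omitted):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

function resumeUpload(bucket, key, savedUploadId, getPartBody, totalParts) {
  // Ask S3 which parts it already has for this UploadId.
  s3.listParts({ Bucket: bucket, Key: key, UploadId: savedUploadId }, function (err, data) {
    if (err) return console.log(err);
    var etags = {};
    data.Parts.forEach(function (p) { etags[p.PartNumber] = p.ETag; });
    var n = 1;
    (function next() {
      if (n > totalParts) {
        // Every part is accounted for; stitch them together.
        var parts = Object.keys(etags).sort(function (a, b) { return a - b; })
          .map(function (k) { return { PartNumber: Number(k), ETag: etags[k] }; });
        return s3.completeMultipartUpload({
          Bucket: bucket, Key: key, UploadId: savedUploadId,
          MultipartUpload: { Parts: parts }
        }, function (err, data) {
          if (err) return console.log(err);
          console.log("Completed:", data.Location);
        });
      }
      if (etags[n]) { n++; return next(); } // already uploaded; skip it
      s3.uploadPart({
        Bucket: bucket, Key: key, UploadId: savedUploadId,
        PartNumber: n, Body: getPartBody(n) // hypothetical helper returning part n's bytes
      }, function (err, data) {
        if (err) return console.log(err);
        etags[n] = data.ETag;
        n++;
        next();
      });
    })();
  });
}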
@jeskew @paolavness
One more question: I noticed that there are significant savings using the gzip stream. Is there any way to send the stream through a browser upload based on a generated upload policy, or only by using multipart upload?
@leonetosoft I have not looked into gzipping the upload yet, but I will be. I'd love to hear if you find anything about this.
@leonetosoft
Refer to https://github.com/aws/aws-sdk-js/issues/1435
The SDK does not perform the compression itself, but @jeskew offers a suggestion that could potentially work for you: https://github.com/aws/aws-sdk-js/issues/1435#issuecomment-290448968
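For illustration, one approach along those lines (a sketch assuming the object should be stored gzip-compressed; the bucket and key names are placeholders) is to compress client-side with zlib and record the encoding so HTTP clients can decompress on download:

var fs = require('fs');
var zlib = require('zlib');
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
s3.upload({
  Bucket: 'my-bucket',    // placeholder
  Key: 'backup.tar.gz',   // placeholder
  Body: fs.createReadStream('./backup.tar').pipe(zlib.createGzip()),
  ContentEncoding: 'gzip' // stored with the object; clients can decode on download
}, function (err, data) {
  if (err) return console.log(err);
  console.log("Uploaded the file at", data.Location);
});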
Let us know if you have any further questions.
This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.