Important error, but treated as "INFO" type
Node.js 8
During a fetch request the upload fails and the data is lost, but the error is logged as "INFO" type.
Please help ASAP.
"Error { Error: write EPROTO 47617330951040:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:../deps/openssl/openssl/ssl/s23_lib.c:177:
at WriteWrap.afterWrite [as oncomplete] (net.js:868:14)
message: 'write EPROTO 47617330951040:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:../deps/openssl/openssl/ssl/s23_lib.c:177:\n',
errno: 'EPROTO',
code: 'NetworkingError',
syscall: 'write',
region: 'us-west-2',
hostname: '***.s3.us-west-2.amazonaws.com',
retryable: true,
time: 2019-04-18T02:42:09.066Z,
statusCode: 400 }"
@10001oleg
Can you please submit full details of how you got this error, including SDK version, Node version, code example and any other details of your environment that may be pertinent?
Sure!
"aws-sdk": "^2.437.0",
"node-fetch": "^2.3.0"
nodeJS 8
Env - Google Cloud Function, unlimited workers, clean logs
File size ~ 1Mb
It works in 98% cases
```js
const fetch = require('node-fetch');
// stream URL to S3
const AWS = require('aws-sdk');
AWS.config.update({
  accessKeyId: '*',
  secretAccessKey: '*', // redacted
  region: 'us-west-2',
});
const s3 = new AWS.S3();

fetch(Fileurl)
  .then(res => {
    // pipe the response stream straight into S3
    s3.upload(
      {
        Bucket: 'ocr-vision-result',
        Body: res.body,
        Key: filename,
      },
      (err, data) => {
        // handle error
        if (err) {
          console.log('Error', err);
        }
        // success
        if (data) {
          console.log('Uploaded in:', data.Location);
        }
      }
    );
  })
  .catch(err => console.log('Fetch error', err));
```
File URL example:

```js
const Fileurl = 'https://storage.googleapis.com/result/56.pdf%2F8copy-output-101-to-120.json?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=marine-alchemy-236620%40appspot.gserviceaccount.com%2F20190417%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20190417T212103Z&X-Goog-Expires=86400&X-Goog-SignedHeaders=host&X-Goog-Signature=8a07e6dd972316e9d0df7a5494e11070bf47984cf8ae2561cf4cbb81e49e0a9b8dc37782f7a5df958c9d4bd558a99321915ece2f3592eac3ce5985830bec9df12395d748a1e50c511b055939674b0f9cf72aa19425fcfce96d8c423eeb81870ceeaac212d0faac2a67947e8d283b9dc0f986eba2fecdcf72500ee2fba0aaca9fa633069843faebaf071c16fa25b9be67c631bdc790e2100055c76f3f2d2e68444c67d18c7111c2cfa8be95fb483371d2301090e297608977dc8cd475933222d8b8ba662aebaae7387c1eed8d1b8768bff1d60270794ff7919cf8ae5464dc6385acd8f2f787c2e930769bb430bcd601f3ed2d31f26e536d62d6f3a8d390fa5880';
```
A new error under the same conditions.
Please help!
"Error { Error: write ECONNRESET
at WriteWrap.afterWrite [as oncomplete] (net.js:868:14)
message: 'write ECONNRESET',
errno: 'ECONNRESET',
code: 'NetworkingError',
syscall: 'write',
region: 'us-west-2',
hostname: '*****.s3.us-west-2.amazonaws.com',
retryable: true,
time: 2019-04-18T22:05:04.671Z,
statusCode: 400 }
@10001oleg,
Are all of your files only ~1 MB?
If they're all less than 5 MB, the upload operation is sending them as a single part.
Can you switch to using the putObject operation instead to see if that eliminates this issue? You'll need to specify the content length.
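Something along these lines (a sketch only; `Fileurl`, `filename`, and the bucket name are the placeholders from your example above):

```js
// Sketch of the putObject suggestion: buffer the fetched file first so its
// ContentLength is known, then send it as a single PUT.
fetch(Fileurl)
  .then(res => res.buffer()) // node-fetch v2: collect the body into a Buffer
  .then(body => {
    s3.putObject(
      {
        Bucket: 'ocr-vision-result', // placeholder bucket from the example above
        Key: filename,
        Body: body,
        ContentLength: body.length, // size of the body in bytes
      },
      (err, data) => {
        if (err) console.log('Error', err);
        else console.log('Uploaded, ETag:', data.ETag);
      }
    );
  });
```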
Do you know what the maximum allowed ContentLength from params is?

```js
s3.upload(params, options, function(err, data) {
  console.log(err, data);
});
```

> ContentLength — (Integer)
> Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.
Thank you!
Some files are ~6 MB, ~9 MB, and ~15 MB.
### Can I use ManagedUpload for my case?
Creating an uploader with a concurrency of 1 and a partSize of 10 MB:

```js
var upload = new AWS.S3.ManagedUpload({
  partSize: 10 * 1024 * 1024, queueSize: 1,
  params: {Bucket: 'bucket', Key: 'key', Body: stream}
});
```

Options Hash (options):

- params (map) — a map of parameters to pass to the upload requests. The "Body" parameter is required to be specified either on the service or in the params option.
- queueSize (Number) — default: 4 — the size of the concurrent queue manager to upload parts in parallel. Set to 1 for synchronous uploading of parts. Note that the uploader will buffer at most queueSize * partSize bytes into memory at any given time.
- partSize (Number) — default: 5mb — the size in bytes for each individual part to be uploaded. Adjust the part size to ensure the number of parts does not exceed maxTotalParts. See minPartSize for the minimum allowed part size.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3/ManagedUpload.html
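For my files, I assume the wiring would look roughly like this (a sketch; it reuses the `Fileurl`, `filename`, and bucket placeholders from my earlier example):

```js
// Sketch only: streaming the fetched file through ManagedUpload with 10 MB
// parts and no parallelism. Names below are placeholders, not real values.
fetch(Fileurl).then(res => {
  const upload = new AWS.S3.ManagedUpload({
    partSize: 10 * 1024 * 1024, // 10 MB per part
    queueSize: 1,               // upload one part at a time
    params: {Bucket: 'ocr-vision-result', Key: filename, Body: res.body},
  });
  upload.send((err, data) => {
    if (err) console.log('Error', err);
    else console.log('Uploaded in:', data.Location);
  });
});
```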
@10001oleg
The upload operation uses the ManagedUpload.
Do you see errors on the 1 MB files, or only on the larger files?
Only on the larger files.
I solved my problem by destructuring the data during the fetch request: parsing the response as JSON, keeping only the text fields I need, and serializing the result back into a Buffer. This reduced the payload from 6.7 MB to 0.046 MB.
```js
fetch(Fileurl)
  .then(res => res.json())
  .then(data => {
    const {responses} = data;
    const textResult = {text: []};
    // keep only the recognized text from each OCR response
    responses.forEach(({fullTextAnnotation: {text}}) => {
      textResult.text.push(text);
    });
    const dataResult = Buffer.from(JSON.stringify(textResult));
    s3.upload(
      {
        Bucket: '*****',
        Body: dataResult,
        Key: filename,
      },
      (err, data) => {
        // handle error
        if (err) {
          console.log('Error', err);
        }
        // success
        if (data) {
          console.log('Uploaded in:', data.Location);
        }
      }
    );
  });
```
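Since only the fullTextAnnotation text is needed downstream, uploading just that keeps each object well under the 5 MB single-part threshold mentioned above, which presumably avoids the multipart path where the networking errors were occurring.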
Thanks for following up.
I'll close out this issue for now, but can re-open if needed.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.