I am creating a workbook in a Node.js program. Here is an example:
const XLSX = require('xlsx');
const wb = XLSX.utils.book_new();
const new_ws_name = "SheetJS";
const ws_data = [
  ["S", "h", "e", "e", "t", "J", "S"],
  [1, 2, 3, 4, 5]
];
const ws = XLSX.utils.aoa_to_sheet(ws_data);
XLSX.utils.book_append_sheet(wb, ws, new_ws_name);
Now I want to upload this workbook to an S3 bucket on AWS as an .xlsx file. One way would be to write a temporary local file with:
XLSX.writeFile(wb, 'out.xlsx');
and then upload out.xlsx to S3 by creating a read stream from it. Is there a direct method to produce a read stream (or buffer) from my workbook and upload it to S3 without writing to disk first?
XLSX.write(wb, {type:'buffer', bookType:'xlsx'}) generates a Buffer of the export in the XLSX format, which you can upload directly to S3 using the putObject function, as shown in the aws-sdk example.
Hi, I'm stuck on the same issue. I'm trying to upload the data straight from my frontend.
The code below uploads the file to S3, but the file is always corrupted when I download it.
const workbook = XLSX.utils.book_new();
const worksheet = XLSX.utils.json_to_sheet(this.data, {
  header: ["lastName", "firstName"]
});
XLSX.utils.book_append_sheet(workbook, worksheet, "feuille1");
const file = XLSX.write(workbook, {
  type: "buffer",
  bookType: "xlsx",
  bookSST: false
});
aws.config.update({
  accessKeyId: "xxx",
  secretAccessKey: "xxx"
});
const s3 = new aws.S3();
try {
  const data = await s3
    .putObject({
      Bucket: "xxx",
      Key: "dataSample.xlsx",
      ACL: "public-read",
      ContentType: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
      Body: file
    })
    .promise();
  console.log("data", data);
} catch (error) {
  console.log("error", error);
}
}
Do you know what I am missing? I have tried buffer, base64, and many other ways.
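One way to narrow this down is to inspect the bytes before they leave the client: an .xlsx workbook is a ZIP container, so a valid export always begins with the two-byte signature 'PK' (0x50 0x4B). If the Body gets re-encoded somewhere on the way to S3 (for example, accidentally stringified), the upload still "succeeds" but the download is corrupted. This is a hedged debugging sketch, not a fix from the thread; the buffers below stand in for the output of XLSX.write:

```javascript
// An .xlsx file is a ZIP archive; valid exports begin with 'PK' (0x50 0x4B).
function looksLikeXlsx(body) {
  return Buffer.isBuffer(body) &&
    body.length > 2 &&
    body[0] === 0x50 && body[1] === 0x4b;
}

// Stand-in for a healthy XLSX.write(wb, { type: 'buffer', ... }) result:
const good = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0x00]);
// What you get if the buffer is accidentally stringified along the way:
const bad = Buffer.from(JSON.stringify(good));

console.log(looksLikeXlsx(good)); // true
console.log(looksLikeXlsx(bad));  // false
```

If the check fails right before putObject, the corruption happened in the client code, not in S3.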