Sharp: Is the input image processed if it already meets constraints?

Created on 7 Jun 2020 · 2 comments · Source: lovell/sharp

Just a simple question. Will sharp re-process the input image if it already meets the provided constraints?

For example, given the following command:

    const output = await sharp(inputBuffer, { pages: -1 }).webp().toBuffer({ resolveWithObject: true })

will it somehow reconvert the input to WebP if it was already a WebP file?

Or, for the following command:

    const output = await sharp(inputBuffer, { pages: -1 }).resize(200, 200, { fit: 'cover' }).toBuffer({ resolveWithObject: true })

will it somehow resize the input image even if it is already a 200px square?

What are you trying to achieve?

I'm designing server-side resizing and WebP conversion for uploaded images, so I'm trying to figure out whether I should check myself if an image needs resizing or conversion before applying sharp commands, or whether sharp will handle that for me and only run when needed.

For now I'm assuming I have to do these checks myself to ensure sharp only runs when, and with what, is needed.

Here's a block of my code where I do that:


    // Note: sharp and the SharpBufferOutput type are imported elsewhere in this module.
    const imageBase64 = profileImageData.base64.split('base64,').pop()!;
    const inputBuffer = Buffer.from(imageBase64, 'base64');

    // Base64 encodes 3 bytes per 4 characters, so this approximates the decoded size.
    const approximateSize = imageBase64.length * (3 / 4);

    // Default both outputs to the untouched input; sharp is only invoked when needed.
    let xlrOutputTask: Promise<SharpBufferOutput> = Promise.resolve({
        data: inputBuffer,
        info: {
            format: profileImageData.fileExtension,
            width: profileImageData.width,
            height: profileImageData.height,
            size: approximateSize
        }
    });
    let hrOutputTask: Promise<SharpBufferOutput> = xlrOutputTask;

    if (profileImageData.fileExtension !== 'webp') {
        // The input needs conversion to WebP.
        if (profileImageData.width > 200 || profileImageData.height > 200 || profileImageData.width !== profileImageData.height) {
            // It also needs cropping/resizing to a 200px square.
            xlrOutputTask = sharp(inputBuffer, { pages: -1 }).resize(200, 200, { fit: 'cover', withoutEnlargement: true }).webp().toBuffer({ resolveWithObject: true });

            if (approximateSize > 700000) {
                // Large input: cap the high-resolution variant at 3000px.
                hrOutputTask = sharp(inputBuffer, { pages: -1 }).resize(3000, 3000, { fit: 'inside', withoutEnlargement: true }).webp().toBuffer({ resolveWithObject: true });
            }
            else {
                hrOutputTask = sharp(inputBuffer, { pages: -1 }).webp().toBuffer({ resolveWithObject: true });
            }
        }
        else {
            // Already a 200px square: convert only.
            xlrOutputTask = sharp(inputBuffer, { pages: -1 }).webp().toBuffer({ resolveWithObject: true });
            hrOutputTask = xlrOutputTask;
        }
    }
    else {
        // Already WebP: resize only if the dimensions require it.
        if (profileImageData.width > 200 || profileImageData.height > 200 || profileImageData.width !== profileImageData.height) {

            xlrOutputTask = sharp(inputBuffer, { pages: -1 }).resize(200, 200, { fit: 'cover', withoutEnlargement: true }).toBuffer({ resolveWithObject: true });

            if (approximateSize > 700000) {
                hrOutputTask = sharp(inputBuffer, { pages: -1 }).resize(3000, 3000, { fit: 'inside', withoutEnlargement: true }).toBuffer({ resolveWithObject: true });
            }
        }
    }
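
As a side note, the format and dimensions used in those checks can also be read from the image data itself via sharp's metadata() call, rather than trusted from the client-supplied profileImageData. A minimal sketch, where inspectUpload is a hypothetical helper name:

    import sharp from 'sharp';

    // Hypothetical helper: read format and dimensions from the buffer itself.
    // metadata() reads the image header without decoding the pixel data.
    async function inspectUpload(inputBuffer: Buffer) {
        const { format, width, height } = await sharp(inputBuffer).metadata();
        return {
            isWebp: format === 'webp',
            isSquare200: width === 200 && height === 200,
        };
    }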

All 2 comments

The short answer is yes. The slightly longer answer is that, even if no resizing is required to meet the target dimensions, lossy formats will be subject to a decode/encode round-trip, so the output is likely to differ slightly from the input.
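
In practice that means a byte-identical pass-through requires short-circuiting before sharp is invoked. A minimal sketch of such a guard, assuming the 200px square WebP target from the question (toSquareWebp is a hypothetical helper):

    import sharp from 'sharp';

    // Hypothetical guard: return the input untouched when it already meets the
    // constraints; otherwise crop-resize and encode to WebP.
    async function toSquareWebp(inputBuffer: Buffer): Promise<Buffer> {
        const { format, width, height } = await sharp(inputBuffer).metadata();
        if (format === 'webp' && width === 200 && height === 200) {
            return inputBuffer; // skip the lossy decode/encode round-trip
        }
        return sharp(inputBuffer, { pages: -1 })
            .resize(200, 200, { fit: 'cover' })
            .webp()
            .toBuffer();
    }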

If the data will be decoded/encoded anyway, performance should be better if I keep doing these checks myself, right?

Thank you for your answer (and for this awesome Node.js package as well).
