Sharp: Provide pre-pipeline function to modify options based on input image metadata

Created on 23 Jun 2015 · 12 comments · Source: lovell/sharp

This could seriously increase the power of Stream-based image processing.

Here's an example of how halving an image's dimensions could work:

// PROPOSED API NOT YET AVAILABLE
var halver = sharp().before(function(metadata) {
  // Round so the requested dimensions stay integers
  this.resize(Math.round(metadata.width / 2), Math.round(metadata.height / 2));
});
readableStream.pipe(halver).pipe(writableStream);
// PROPOSED API NOT YET AVAILABLE

The existing metadata logic can also be improved to require only the first few hundred bytes of a Stream.

Adding a "playbook" of example uses for this feature to the docs would also be valuable. There's a variable/percentage extract/crop under discussion at #205 that this feature should make possible.
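As a rough illustration (hypothetical code, since the .before hook does not exist yet), the #205 use case might look like extracting a region expressed as a percentage of the input dimensions:

// PROPOSED API NOT YET AVAILABLE
// Hypothetical sketch: extract the central 50% of the image, computing the
// region from the input metadata passed to the proposed before() hook.
var centreCrop = sharp().before(function(metadata) {
  var width = Math.round(metadata.width * 0.5);
  var height = Math.round(metadata.height * 0.5);
  this.extract({
    left: Math.round((metadata.width - width) / 2),
    top: Math.round((metadata.height - height) / 2),
    width: width,
    height: height
  });
});
readableStream.pipe(centreCrop).pipe(writableStream);
// PROPOSED API NOT YET AVAILABLE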

enhancement

All 12 comments

Is the before function not supported yet?

@Lanfei Not yet, sorry, but please do subscribe to this issue for updates.

Okay.

Hi @lovell, I think this issue addresses what I'm trying to solve as well – I've subscribed to notifications for when .before is implemented.

In the meantime, might you have a suggestion for how I can verify that a stream contains valid image data before piping it into a transform? I'm using request to pipe data from various URLs that may or may not point at valid images.

I tried piping into a sharp().metadata() instance, handling any errors, and then continuing to pipe the stream from inside the success callback (which would seem to imply that metadata was successfully read from a valid image), but I get the error: You cannot pipe after data has been emitted from the response. I understand why this is happening, but haven't come up with a solution such as pausing the stream and then resuming it from inside the callback.

@jaredscheib This feature (possibly with #298) should allow you to achieve what you need, yes. In the meantime, the least complex (and race-condition free) method of doing so is probably to store the streamed data in a Buffer.
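A minimal sketch of that Buffer-based workaround (the helper names here, such as validateAndResize, are illustrative and not part of sharp):

// Collect the stream into a Buffer first, then let metadata() act as the
// validity check before any further processing.
const sharp = require('sharp');

function streamToBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks)));
  });
}

async function validateAndResize(stream) {
  const buffer = await streamToBuffer(stream);
  const image = sharp(buffer);
  // metadata() rejects for unsupported or corrupt input
  const { width, height } = await image.metadata();
  return image.resize(Math.round(width / 2), Math.round(height / 2)).toBuffer();
}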

What is the status of this feature?

@elliotfleming This is yet to be implemented. As always I'm happy to provide guidance to anyone interested in tackling it.

It'd be really great to have this - I'd like to check the image dimensions and throw an error if they don't meet my criteria before transforming the stream.

It would also be very nice to have either this or a way to change the image scale, specifically a method such as .scale(2) to double the image dimensions.
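Until something like that exists, a rough equivalent using the Buffer-based approach above (a sketch only, with an illustrative scale2x helper) would be:

// Read the dimensions from a Buffer, then resize to double the width and height.
const sharp = require('sharp');

async function scale2x(inputBuffer) {
  const image = sharp(inputBuffer);
  const { width, height } = await image.metadata();
  return image.resize(width * 2, height * 2).toBuffer();
}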

Any news on this feature?

Does the sharp.metadata() function read the entire stream into memory, or will a follow-up resize operation currently result in a re-read of the entire stream?

I am currently using something like:

const image = stream.pipe(sharp())
const { width, height } = await image.metadata()
upload(image.resize(...))

EDIT: I see it's waiting for the entire file to be read into memory before calculating the metadata... which should not even be allowed for streams, tbh.

Could this be solved by having a second function called .dimensions() which only returns the dimensions instead of all of the metadata?

It would be nice to have this feature, but I'm using image-size-stream as an alternative for now.

Example:

const fs = require('fs')
const { PassThrough } = require('stream')
const sharp = require('sharp')
// ImageDimensionsStream comes from the image-size-stream package mentioned above
// X, Y, W, H describe the requested crop region (left, top, width, height)

const fileStream = fs.createReadStream(path)
const sizeStream = new ImageDimensionsStream()
const bufferStream = new PassThrough().pause()

let stream = fileStream.pipe(sizeStream).pipe(bufferStream)

sizeStream.on('dimensions', ({ width, height }) => {
    // Decide, based on the detected size, whether a crop is actually needed
    if (W !== 0 && H !== 0 && (X !== 0 || Y !== 0 || width !== W || height !== H)) {
        const extractStream = sharp()
            .extract({ left: X, top: Y, width: W, height: H })
        stream = stream.pipe(extractStream)
    }
    bufferStream.resume()
    stream.pipe(res) // res.send() cannot take a stream; pipe to the response instead
})

Using a paused PassThrough stream as a buffer while ImageDimensionsStream detects the image dimensions works out nicely.
