Python-docs-samples: Request payload size exceeds the limit: 157286 bytes.

Created on 5 Dec 2017 · 12 comments · Source: GoogleCloudPlatform/python-docs-samples

Hi,

  • In which file did you encounter the issue?
    /tree/master/ml_engine/online_prediction

  • Did you change the file? If so, how?
I am trying to run predict_json() for a reshaped image of size (1, 244, 244, 3). After serialization, the JSON is on the order of 6.5 MB, so I am getting this error:

Request payload size exceeds the limit: 157286 bytes.

I think this limit comes directly from the Google Cloud platform itself. I know it is not an issue with this directory, but I am wondering how I can work around it within the online prediction framework. I need to do online, real-time prediction from Python, so it won't be possible to save the images to storage beforehand and pass a path to the data.

How do others do online prediction for large inputs? Or am I doing something wrong?
Even if I wanted to store the data in real time, how would that be possible within this payload-size limit?

I mean, assume a scenario where I am reading an image from a device in real time and need a prediction result for it: I should either send the image with my prediction request to the cloud, or send it with a request to store the data first and then run the prediction.

Thanks.

Fixit ML

All 12 comments

@dizcology or @elibixby can one of you look at this (or find the right person)?

Hey Azizi, there's no way to increase the payload size. However, for images you don't want to send raw pixel data as JSON anyway, since it's an extremely inefficient encoding. Instead you should base64-encode the JPEG image bytes and pass the bytes to the Cloud ML Online Prediction service as described here:
https://cloud.google.com/ml-engine/docs/v1/predict-request#data-encoding and
then use the https://www.tensorflow.org/api_docs/python/tf/image/decode_jpeg
operation in your serving graph with a string placeholder to decode the
strings. You can see an example of that here:
https://github.com/elibixby/magenta/blob/bda6484f3aa6e76192dcfccaf173eda798945c9c/magenta/models/image_stylization/image_stylization_saved_model.py#L45
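To illustrate why this matters for the payload limit, here is a rough sketch (using synthetic stand-in data, not a real image) comparing the size of a request that inlines raw pixel floats as JSON against one that sends base64-encoded JPEG bytes in the `{"b64": ...}` format the prediction service expects:

```python
import base64
import json

# Toy "image": 224*224*3 pixel values sent as a nested JSON array
# (hypothetical constant values, just to measure the encoding overhead).
pixels = [0.5] * (224 * 224 * 3)
raw_json_payload = json.dumps({"instances": [pixels]})

# The same image as compressed JPEG bytes is typically a few tens of
# kilobytes; here we simulate roughly 30 KB of JPEG data.
fake_jpeg_bytes = b"\xff\xd8" + b"\x00" * 30_000
b64_payload = json.dumps(
    {"instances": [{"b64": base64.b64encode(fake_jpeg_bytes).decode("utf-8")}]}
)

print(len(raw_json_payload))  # ~750 KB of JSON text
print(len(b64_payload))       # ~40 KB
```

The float-array encoding inflates the request by more than an order of magnitude, which is why a (1, 244, 244, 3) image serialized as JSON blows past the payload cap while the same image as base64 JPEG bytes fits comfortably.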


Hey Eli @elibixby ,

Thanks for your help. I used this code to encode my image:

# Import the base64 encoding library.
import base64
# Pass the image data to an encoding function.
def encode_image(image):
  image_content = image.read()
  return base64.b64encode(image_content)

Then I am making the request with:

    response = service.projects().predict(
        name=name,
        body={'instances': [
            {'b64': base64.b64encode(example_bytes).decode('utf-8')}
            for example_bytes in example_bytes_list
        ]}
    ).execute()

Which I think should work fine; however, I still get the same error. I've changed the size of the image to make sure the problem isn't coming from somewhere else. With the new size ((1,10,10,3)!) the request passes to the network without an error, but that isn't an acceptable input size for the network. :)

Thanks,
Shek

Why are you encoding twice? I don't understand the relationship between your two code snippets.
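For anyone hitting the same wall: the two snippets above together base64-encode the image twice, once in `encode_image()` and again when building the request body. A minimal sketch of the pitfall and the fix (the bytes here are a placeholder, not a real JPEG):

```python
import base64

# Hypothetical raw JPEG bytes standing in for a real image file.
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 16

# Bug: encoding twice. If a helper already returned base64 bytes, wrapping
# them in b64encode() again inflates the payload by ~33% per pass, and a
# single server-side decode yields base64 text, not JPEG bytes.
double_encoded = base64.b64encode(base64.b64encode(jpeg_bytes))

# Fix: encode exactly once, when building the request body.
instance = {"b64": base64.b64encode(jpeg_bytes).decode("utf-8")}

# One round of decoding must recover the original bytes.
assert base64.b64decode(instance["b64"]) == jpeg_bytes
assert base64.b64decode(double_encoded) != jpeg_bytes
```

So either have the helper return raw bytes and encode in the request body, or encode in the helper and pass its output straight through — not both.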

@rhaertel80 is there a way to raise the request size cap?

@elibixby Thanks for the answers. What's the recommendation for speech recognition? I have an hour-long file that I'd like to transcribe — that would be impossible to fit into 10 MB (which seems to be the limit on the speech recognition side).

I'm having the same issue here, for an autoencoder. Is there a way to increase this limit?

@migtissera Usually if you do the encoding/decoding correctly there is no need for the increased limit.

For speech recognition, I found this on Stack Overflow: https://stackoverflow.com/questions/51601697/invalid-argument-request-payload-size-exceeds-the-limit-10485760-bytes

The accepted answer is that this is a limit placed on free trials. Disappointing.

See the comment about the limit for free trials above.
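For long audio specifically, the documented route is asynchronous recognition against a Cloud Storage URI rather than inlining base64 audio in the request, which keeps the request body tiny regardless of file length. A minimal sketch of the Speech-to-Text v1 `longrunningrecognize` REST request body (the bucket and object names are placeholders, and the config values assume 16 kHz LINEAR16 audio):

```python
import json

# Request body for speech/v1 longrunningrecognize. The audio itself
# stays in Cloud Storage; only this small JSON document is sent.
request_body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
    },
    "audio": {
        # A GCS URI instead of inline "content" — placeholder names.
        "uri": "gs://my-bucket/recordings/meeting.wav",
    },
}

print(len(json.dumps(request_body)))  # well under any payload limit
```

The operation returns a long-running job you poll for results, so an hour-long file never has to pass through the synchronous request-size cap.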

@AziziShekoofeh - what was the solution for your example above?

@donbonjenbi Sorry for the late response. To be honest, I don't recall the details at this point — it's been more than two years — but as I remember, the solution was in line with what @elibixby mentioned: I was encoding twice. Setting the requested encoding in the body message was enough; there was no need for additional encoding in a separate function. I also recall there were some issues on the decoding side caused by the multiple encoding.

I am getting this error, but I only sent a 4 KB piece of data through. I am definitely not sending anything over that limit. What should I do?
