Suddenly started getting the following message when attempting to deploy:
> Successfully tagged build:D1d9rfEKqwhWwN6MR3MRbqiX_1534127071
> ▲ Assembling image
> ▲ The built image size (312.4M) exceeds the 100MiB limit
> Error! Build failed
I changed my Docker image to node:8-slim but am still getting the same message, though I'm now closer to the limit (▲ The built image size (117.2M) exceeds the 100MiB limit). I'm not sure why this is suddenly an issue, and I also don't know how to remedy it; I am using one npm package that is rather huge, but it's not optional. Do the paid plans of Now allow for larger images? The pricing page does not state anything about image size limits.
The 100MB image size restriction is a hard limit for Now Cloud v2 (see https://zeit.co/blog/serverless-docker). I recommend that you use node:8-alpine as a base image instead, since it starts out at ~20MB.
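For a plain Node app, that swap might look like the minimal sketch below (the entry point name and port are hypothetical, not from this thread):

```dockerfile
# node:8-alpine keeps the base layers at roughly 20MB
FROM node:8-alpine
WORKDIR /app
# Install only production dependencies to keep node_modules small
COPY package.json package-lock.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```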
@TooTallNate Is there no way to increase this? I'm using Puppeteer with Node and my image size is over 300 MB.
I had the same issue, fixed it by setting the cloud version to v1 in my now.json.
```json
"features": {
  "cloud": "v1"
}
```
It looks like v2 is set by default? cc @TooTallNate
same issue as @joemccann. puppeteer is huge. What is the reason for the 100mb limit in v2?
@devongovett without a limit, instant cold startup times become impossible at scale. Our focus is to provide a balance between flexibility and predictable performance / scale.
Most of our customers needs fall within these limits. Check out https://github.com/zeit/now-examples!
@rauchg I understand the reasoning behind the limit. I assume you store the images in RAM or on high performance (expensive) storage systems. But I just ran into it with what I would consider a "pretty darn normal" nextjs app. Only thing different than vanilla nextjs is a custom express server and typescript, on node/8-alpine. I'm not even bundling static media assets with that image and have them on a separate API.
▲ The built image size (100.1M) exceeds the 100MiB limit
Got it down from 125MB to 100.1MB but I can't remove anything more without it directly affecting the actual functionality :(
I know I can just switch to Cloud v1, but 100MB is really tight when you consider that one ends up with 300MB of node_modules without much effort these days, even if the resulting JS assets are ~1MB.
Is there any chance you'll reconsider the hard limit? Please? :-)
Edit: is there any way to see current image sizes from the dashboard or CLI? If size is limited, size becomes an important metric.
We have a very massive Next.js app and it comes out to 70mb. Are you sure you set up your Dockerfile correctly? It's easy, for example, to ship your devDependencies when you don't have to :)
We are also working on trimming down Next.js because it ships a lot of unnecessary code (like webpack): https://github.com/zeit/next.js/issues/4496
You're completely right, I forgot about devDependencies.
```dockerfile
RUN [...] && \
    rm -rf ~/.npm && npm prune --production
```
Having this as the last part of the build step saves about 1/3 on an uncompressed docker image in my case. Nice.
Yes. This is the main motivation behind having limits. In a lot of cases it allows us to guide the customers to include what they actually need, in a manner that we as a platform can guarantee is performant.
If we allowed you to deploy 2GB containers, in the short term you'd be happy, until you started hitting serious performance problems and bottlenecks. At that point, you'd most likely be unhappy with our platform for not putting you on the correct path to long-term success.
edit: for better tone & messaging
@rauchg While you are right in an engineering sense, you may not be right in a business sense.
My app reads 84 MB on disk before I deploy, and I'm getting this error: The built image size (346.2M) exceeds the 100MiB limit
Do you know what might cause this disparity?
Such a change is similar to a change in API, i.e. breaking.
In order not to disrupt existing deployment workflows, the default could remain v1, with the ability to opt into v2 and clear instructions on how to transition, before such a breaking change is applied to pre-existing (working) workflows.
If my interpretation is correct, Now takes the compressed container size as the metric for the limit. Your app size might be 84MB, but the Docker image built from it can grow significantly, depending on which base image you are using and what kind of routines are run from your Dockerfile.
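As a back-of-the-envelope illustration of how an 84MB app can become a 346MB image, consider the layers that get stacked on top of it (all sizes below are hypothetical, not measured from the reporter's image):

```shell
# Hypothetical layer sizes in MB; only "app" corresponds to the 84MB on disk.
base=70    # assumed base image layers (e.g. a non-slim node image)
deps=180   # assumed node_modules produced by npm install
app=84     # application code actually on disk
pkgs=12    # assumed extra OS packages added in the Dockerfile
echo "$((base + deps + app + pkgs))MB"   # prints 346MB
```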
@genox thanks for the information.
100MB doesn't leave a lot of room for the project, especially when the official docker-node image adds 200MB+ and other Dockerfile options like alpine-node (according to the docs on GitHub) add 68MB (https://github.com/mhart/alpine-node).
It would be great to solve this problem for good as opposed to falling back to a less robust solution (cloud: v1).
What are others using that won't add significantly to the final file size?
I'm going to have to agree that 100MB is pretty tight. Are there any plans to increase that limit in the near future?
I'm having difficulty getting Meteor apps below 100MB. Using a multi-stage build, npm install --only=production, and node:8-alpine, I'm at 111MB from a meteor create --full test.
I'm bumping up against this limit as well.
Perhaps my use case is unusual, but I wanted to try deploying my blog on the new Ghost 2.0 alpine image tonight. Using a Dockerfile of just FROM ghost:2.0.3-alpine, this comes out to 115.8M, failing to build on v2 but deploying successfully on v1.
FWIW this image uses FROM node:8-alpine per the recommendation above.
https://github.com/docker-library/ghost/blob/master/2/alpine/Dockerfile
Zeit's "cloud v2" might not be the right deployment choice for some stacks. I imagine that this kind of architecture is actually pointed more towards very lean services for now. Unless it becomes feasible for zeit to run 512mb containers (size vs performance vs infrastructure requirements). I personally look at it as a complementary option in addition to the regular, static way of scaling containers. And I highly doubt that "v1" will go away, just like that, without an option to run large containers. You don't _have_ to migrate to v2.
@genox thanks again, but it would be great to hear definitively from someone inside Zeit.
The breaking change would seem to indicate that people internally want to require customers to get their deployments under 100MB because as @rauchg says above:
This is the main motivation behind having limits :) They just make for better engineering.
But as has been pointed out, some frameworks cannot get under that limit even with an empty application.
@rauchg, because my business currently relies on Zeit (and I hope you can respond with some sense of urgency, as I'm losing money every day I don't deploy new applications):
Is Zeit planning to sunset cloud: v1?
What framework/application stacks are being deployed successfully on v2?
@brianebill can you give us an example of what framework you are unable to deploy? That way we can assist you in solving your problem.
@rauchg apologies for not using the expected/reproducible error format, and my issue is officially resolved.
I applied the example uploaded a few days after the problem arose: https://github.com/zeit/now-examples/tree/master/node-meteor.
edit: actually .dockerignore was not needed
In my search, I also found https://github.com/moby/buildkit, which is a superior solution to the example. I haven't yet been able to integrate it into my workflow, but it promises to reduce Docker deployments by ~20MB compared to alpine.
My deployment is now 51MB, but will be around 28MB using buildkit. Subsequent builds will take 1/10th of the time, which is the real gain.
Thanks for reaching out. @TooTallNate and others working on Spectrum have also been amazing.
@rauchg - An example is Prisma, which is now limited to using Cloud V1 as their image is around ~117MB :) Also see the reference made to this issue above, https://github.com/prisma/prisma/issues/2501
Their images on Dockerhub: https://hub.docker.com/r/prismagraphql/prisma/tags/
Is it for real that the size limit on the free plan is now 5MB?!
Can't use "now" with any of my projects anymore 👎
@GhyslainBruno that limit sounds like the file size of a single asset uploaded; most likely large image or video files.
Hum... Maybe, but as I am using now-cli with docker images, I guess that it's quite the same as a single asset for the cli.
Did you use now-cli with docker images @jamo ?
@jamo thx for your response, you saved my day !
It was actually some file sizes from the build process (size > 5MB), not the Docker image size, that caused the issue.
Thank you again !
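If anyone else hits that per-file error, a quick way to spot oversized build artifacts locally is a size filter. This sketch assumes the 5MB figure quoted above and uses a throwaway demo directory:

```shell
# Create a throwaway 6MB file to stand in for an oversized build artifact
mkdir -p demo
dd if=/dev/zero of=demo/big.bin bs=1M count=6 2>/dev/null
# List files over 5MB, i.e. anything that would trip the quoted limit
find demo -type f -size +5M
```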
I'd really like to see the limit increased 200MB if possible. I've tried several times to set up an nginx and php image to deploy to now, but I can't get it below 150MB. Even using Alpine as a base, I just can't get it small enough. Nginx, php and all of the extensions I need on it for my app to run are easily over 100MB. At the moment, Now isn't a viable option for my php app because of the size limit.
@TimothyKrell I was in the same situation a few weeks back (at >400MB), but have since prevailed (51MB). The size limit on feature.cloud v2 is tied to Now's ability to wake all deployments quickly; if they increased it for a few edge cases, everyone's deployments would wake more slowly, as they do on feature.cloud v1 when scale is set to 0.
First, you can set your cloud version in a now.json file per @Gomah suggestion above:
"features": {
"cloud": "v1"
},
That should solve your short-term issue. The longer-term fix is to understand and experiment with Docker. It's a powerful tool and getting better by leaps and bounds, but difficult to understand initially because it's deceptively simple.
What is the size of your app without cache on your local machine? If it is 200MB, you can move your assets (like images and videos) to the cloud (like Amazon S3) and link to them instead of including them in your local files. Docker does add to the size, but a multi-stage build with alpine is only around 20MB. If your app is made up of text files, it will be pretty small.
@brianebill Thanks for the help. My app code is ~30MB. The real problem seems to be when I install my dependencies. That step adds ~85MB. I'm not sure how to get around that at the moment, because I'm pretty sure I need each of those dependencies.
This is the output of docker history:
```
IMAGE         CREATED     CREATED BY                                     SIZE     COMMENT
4f4e99e956b1  3 days ago  /bin/sh -c #(nop) CMD ["/usr/bin/supervisor…   0B
a7fa34e189b3  3 days ago  /bin/sh -c #(nop) COPY dir:fa6e437bc53b4fe34…  29.6MB
85d9710b2080  3 days ago  /bin/sh -c #(nop) COPY file:396e1168ec8da742…  1.07kB
7eb026e1f59f  3 days ago  /bin/sh -c #(nop) COPY file:9c3065bc9a91a70a…  351B
d3a4544b14fd  3 days ago  /bin/sh -c #(nop) EXPOSE 80                    0B
2a2977332519  3 days ago  /bin/sh -c rm -rf /etc/localtime && ln -…      74.6kB
14792ad6e241  3 days ago  /bin/sh -c sed -i "s|display_errors\s*=\s*Of…  73.9kB
ae7d0bd7255d  3 days ago  /bin/sh -c sed -i "s|;listen.owner\s*=\s*nob…  23kB
14dadbd0c9c3  3 days ago  /bin/sh -c apk add curl supervisor s…          85.3MB
d49c88d2d2af  3 days ago  /bin/sh -c #(nop) ENV TIMEZONE=Africa/Johan…   0B
53e58c7bdec1  3 days ago  /bin/sh -c #(nop) ENV PHP_CGI_FIX_PATHINFO=0   0B
743d89a7eef7  3 days ago  /bin/sh -c #(nop) ENV PHP_ERROR_REPORTING=E…   0B
0ff1bbbd6dfb  3 days ago  /bin/sh -c #(nop) ENV PHP_DISPLAY_STARTUP_E…   0B
3e495b4ba6d5  3 days ago  /bin/sh -c #(nop) ENV PHP_DISPLAY_ERRORS=On    0B
5bdcbb4713ff  3 days ago  /bin/sh -c #(nop) ENV PHP_MAX_POST=100M        0B
8fdf324dfb1e  3 days ago  /bin/sh -c #(nop) ENV PHP_MAX_FILE_UPLOAD=2…   0B
c3c24f9d0cae  3 days ago  /bin/sh -c #(nop) ENV PHP_MAX_UPLOAD=50M       0B
08243a86a93b  3 days ago  /bin/sh -c #(nop) ENV PHP_MEMORY_LIMIT=512M    0B
7d11a4a2ec3d  3 days ago  /bin/sh -c #(nop) ENV PHP_FPM_LISTEN_MODE=0…   0B
f15b8be47ac3  3 days ago  /bin/sh -c #(nop) ENV PHP_FPM_GROUP=www        0B
498e6d0d402b  3 days ago  /bin/sh -c #(nop) ENV PHP_FPM_USER=www         0B
03c91568f9e4  3 days ago  /bin/sh -c apk update && apk add nginx …       2.75MB
de4761d9f037  6 days ago  /bin/sh -c #(nop) CMD ["/bin/sh"]              0B
<missing>     6 days ago  /bin/sh -c #(nop) ADD file:eb8839f9a87a6922b…  4.43MB
```
@brianebill This is my Dockerfile. Building this without copying my app code over results in 86MB. There might be a way to get my app code smaller, but it's pretty tight. Do you know any way to get the size smaller when installing these packages?
```dockerfile
FROM alpine:edge

RUN apk update \
    && apk add nginx \
    && adduser -D -u 1000 -g 'www' www \
    && mkdir /www \
    && chown -R www:www /var/lib/nginx \
    && chown -R www:www /www \
    && rm -rf /etc/nginx/nginx.conf

ENV PHP_FPM_USER="www"
ENV PHP_FPM_GROUP="www"
ENV PHP_FPM_LISTEN_MODE="0660"
ENV PHP_MEMORY_LIMIT="512M"
ENV PHP_MAX_UPLOAD="50M"
ENV PHP_MAX_FILE_UPLOAD="200"
ENV PHP_MAX_POST="100M"
ENV PHP_DISPLAY_ERRORS="On"
ENV PHP_DISPLAY_STARTUP_ERRORS="On"
ENV PHP_ERROR_REPORTING="E_COMPILE_ERROR\|E_RECOVERABLE_ERROR\|E_ERROR\|E_CORE_ERROR"
ENV PHP_CGI_FIX_PATHINFO=0
ENV TIMEZONE="Africa/Johannesburg"

RUN apk add curl \
    ssmtp \
    tzdata \
    php5-fpm \
    php5-mcrypt \
    php5-openssl \
    php5-json \
    php5-dom \
    php5-pdo \
    php5-zip \
    php5-mysql \
    php5-gd \
    php5-pdo_mysql \
    php5-pdo_sqlite \
    php5-xmlreader \
    php5-xmlrpc \
    php5-iconv \
    php5-curl \
    php5-ctype \
    supervisor
```
@TimothyKrell There were a couple of things that helped reduce the size in my case:
- aws-sdk is 30MB, for example, but I was only using it for S3. Yet, it's bigger than my entire application...
- COPY --from=0 /app/bundle/ /app/ writes over the original src files and removes them from the container.
- The best way to see into the build process is to navigate to the right folder (WORKDIR /app) and RUN ls -l to list all the files in each folder, so you can see if there's anything in there you can remove.
In my Dockerfile, you'll see there is a lot of stepping in and out of folders to do things in a certain order:
```dockerfile
FROM geoffreybooth/meteor-base:1.7.0.3
RUN mkdir /app && ls -l
COPY ./src/package*.json /app/
WORKDIR /app
RUN ls -l
RUN meteor npm install
RUN ls -l
WORKDIR ..
COPY src /app/
WORKDIR /app/
RUN meteor build --directory .

FROM mhart/alpine-node:8.11.4
COPY --from=0 /app/bundle/ /app/
WORKDIR /app/programs/server
RUN apk add --no-cache make g++ python \
    && rm -rf node_modules \
    && npm install --build-from-source
WORKDIR ./npm
RUN npm install bcrypt --build-from-source
WORKDIR ../../..
RUN apk del make g++ python \
    && ls -l
ENV PORT=3000
EXPOSE 3000
CMD ["node", "main.js"]
```
@brianebill Thanks! That was helpful. I discovered that supervisor ended up increasing the size by over 30MB because it needs to install python. I discovered this section in the docs saying that supervisor is redundant due to how Now works. That should get me within the MB budget.
@TimothyKrell glad I could help you out
This limit is definitely a big challenge for many of my projects. It's already been reported by one of my users here: https://github.com/simonw/datasette/issues/366
The entire idea behind Datasette is to bundle a read-only SQLite database with the application that serves it, so the 100MB limit here dramatically reduces the utility of Datasette on Now. I'll be forced to stick with v1 for as long as possible for most Datasette deployments.
I actually just hit this limit with another potential use-case I'm exploring for Now: deploying machine learning models as an API. I'm playing around with computer vision models trained on top of the resnet34 computer vision architecture, but that architecture itself is already ~80MB - once you train additional layers it gets to 85MB, and I don't think I can fit the Python code used to evaluate the model in the remaining space. Here's the code I'm exploring for this: https://github.com/simonw/cougar-or-not
Since both of my use-cases are read-only, I'd love to be able to mount some kind of block storage (like EBS but read-only) to my containers and run SQLite and my machine learning stuff on top of that.
I notice that AWS Lambda doesn't support this either, so presumably it's not a trivial thing to implement. Read-only block storage that I can treat as a read-only section of the filesystem would be crazy-useful though.
I have to agree here, this limit is a challenge. My app is 75MB on disk, and with a multi-stage build it ends up at 110MB. I may be able to remove some assets, but there is no room to grow. Is there any plan to reconsider this limit?
I'm running into this issue as well when deploying Meteor apps, which forces me to use the sfo region while all of my users are in NYC. Will the iad region ever be opened to v1 deployments?
In case this is useful for anyone: here's a Dockerfile I have used to successfully deploy a Python 3 application (using Starlette and uvicorn, but it should work for other frameworks if you replace the pip install lines) in under 100MB: https://gist.github.com/simonw/0ea285e3347b1d06ec5abc1391887739
app.py examples can be found in the Starlette documentation: https://www.starlette.io
Fitting Python apps in a 100MB Docker image is tricky, so I figure the more examples the better.
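In the same spirit, here's one hedged sketch of a multi-stage Python build (the base image tag, requirements, and app module are placeholders, not taken from the gist above):

```dockerfile
# Build stage: compile wheels so the final stage needs no build tools
FROM python:3.6-alpine AS build
COPY requirements.txt .
RUN pip wheel --no-cache-dir -r requirements.txt -w /wheels

# Final stage: install from the prebuilt wheels only
FROM python:3.6-alpine
COPY --from=build /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]
```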
On our project, switching to a multi stage Docker build reduced our built image size from 141MiB to 39MiB
https://docs.docker.com/develop/develop-images/multistage-build/
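For anyone who hasn't tried it, a multi-stage Node build generally follows the shape below (stage names, scripts, and paths are illustrative): dev tooling lives in the first stage, and only the built output plus production dependencies reach the final image.

```dockerfile
# Stage 1: full toolchain, devDependencies allowed
FROM node:8-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build          # assumes a "build" script in package.json

# Stage 2: production image, only runtime artifacts
FROM node:8-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY --from=builder /app/dist ./dist   # assumed build output directory
CMD ["node", "dist/server.js"]
```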
@joemccann @devongovett Please see this blog post for Puppeteer running on Now 2.0: https://zeit.co/blog/serverless-chrome
I got it working, but we had to ramp up the memory and timeout waaaay up. 256 and 1 min 45 seconds.