For some of my services, `docker-compose build` has randomly started taking a really long time (2-3 minutes) before it even starts doing anything. It prints e.g. `Building nginx...` and then just sits there for 2-3 minutes, and that happens even for a fully cached build with a build context of just a few kilobytes. Using `docker build` directly doesn't suffer from this issue.
Demonstration of the slow, problematic `docker-compose build`:
root@Ubuntu-1404-trusty-64-minimal ~/.docker-services/nginx # docker-compose --verbose build
Compose version 1.2.0
Docker base_url: http+unix://var/run/docker.sock
Docker version: KernelVersion=3.13.0-51-generic, Arch=amd64, ApiVersion=1.18, Version=1.6.0, GitCommit=4749651, Os=linux, GoVersion=go1.4.2
Building nginx...
docker build <- (u'/root/.docker-services/nginx', rm=True, tag=u'nginx_nginx', nocache=False, stream=True)
<<< HANGS HERE FOR 2-3 MINUTES !! what does it do??? >>>
docker build -> <generator object _stream_helper at 0x7fce43f4bfa0>
Step 0 : FROM ubuntu
---> 07f8e8c5e660
Step 1 : RUN apt-get update && apt-get upgrade -y
---> Using cache
---> 670f5fc73c72
Step 2 : RUN apt-get install -y nginx python python3
---> Using cache
---> 27d642e6c29f
Step 3 : RUN apt-get install -y vsftpd
---> Using cache
---> 15a449a8b2cd
Step 4 : RUN mkdir -p /var/run/vsftpd/empty
---> Using cache
---> ee5f69865c5a
Step 5 : RUN useradd -d /home/www2/<DOMAIN RETRACTED>/ -s /bin/bash <USER RETRACTED>
---> Using cache
---> 29abf98f5dbd
Step 6 : RUN echo "<USER RETRACTED>:<PASSWORD RETRACTED>" | chpasswd
---> Using cache
---> 7fa31b054c82
Step 7 : RUN echo "#!/bin/sh" > /startvsftpd.sh
---> Using cache
---> c177cd71013f
Step 8 : RUN echo "nohup vsftpd > /dev/null &" >> /startvsftpd.sh
---> Using cache
---> 6214c1d9dc43
Step 9 : RUN chmod +x /startvsftpd.sh
---> Using cache
---> 4650414f1b25
Step 10 : RUN rm -f /etc/vsftpd.conf
---> Using cache
---> 598a56d7089b
Step 11 : RUN useradd ftpsecure
---> Using cache
---> 62db8844aba9
Step 12 : RUN usermod -u 1011 www-data
---> Using cache
---> 70e3051ed995
Step 13 : RUN mkdir -p /home/www/
---> Using cache
---> b2c8c5999dd1
Step 14 : RUN chown 1011:1011 /home/www
---> Using cache
---> 05d59cdfb346
Step 15 : VOLUME /home/www
---> Using cache
---> 48a0dca633c2
Step 16 : CMD chown root /home/www2/<DOMAIN RETRACTED> && chmod -R 555 /home/www2/<DOMAIN RETRACTED>/ && chown -R <USER RETRACTED> /home/www2/<DOMAIN RETRACTED>/www/ && chmod -R 755 /home/www2/<DOMAIN RETRACTED>/www/ && ./startvsftpd.sh && nginx -g "daemon off;"
---> Using cache
---> a66f93e71af5
Successfully built a66f93e71af5
root@Ubuntu-1404-trusty-64-minimal ~/.docker-services/nginx #
... and this finishes in about 2 seconds:
root@Ubuntu-1404-trusty-64-minimal ~/.docker-services/nginx # docker build -t my_nginx_test .
It seems to be somewhat random which services it affects, and it started happening out of nowhere, from one day to the next, for one of my services that previously had quick builds. I'm not aware of any docker-compose or docker upgrade on that specific day.
All my other services that are constructed similarly to this one don't suffer from this problem. However, since it started, it now happens 100% of the time for this specific service, for whatever reason.
I have encountered this on both a Fedora and an Ubuntu machine, so I would assume it's not something highly specific to the machines in question. Both machines use the official docker.com packages (not distribution-provided).
Details for my affected Ubuntu host:
docker-compose: 1.2.0
Docker version 1.6.0, build 4749651
uname -a: Linux Ubuntu-1404-trusty-64-minimal 3.13.0-51-generic #84-Ubuntu SMP Wed Apr 15 12:08:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
docker-compose.yml (volumes list truncated):

```
nginx:
  build: .
  tty: true
  volumes:
```
The Dockerfile does nothing special, and since this also happens when the entire build is cached (see output above), it shouldn't matter anyway, right? You can glance at the commands in the output above if it's truly relevant.
Since I also encounter this on my production machine, this bug adds to the downtime I have during a rebuild & restart cycle. (As far as I'm aware, docker-compose wants to rebuild with the same name, so I assume I always need to shut down the old container first? At least that's what I'm doing now.)
See my response here: https://github.com/docker/compose/issues/2090#issuecomment-142953083 it's the same issue
Did you read my description? This is _not_ a build context size issue. I have a `.dockerignore`, and as you can read above, using `docker build` directly _for the very same image_ is blazingly fast. It's only `docker-compose build` which thinks it's ok to waste 2-3 minutes doing nothing.
Or is there something about `docker-compose build` ignoring `.dockerignore` files that I have missed?
Edit: stupid me, @dnephin guessed correctly: although the `.dockerignore` was there, a docker-compose bug seems to make it ignore the contents... see comments below.
I did read it; I still think it's likely a `.dockerignore` issue, because that's what the build is doing during that phase.
Are you able to reproduce the issue if you create a fresh checkout and `rm` all the files that should be ignored? You could also test it by adding an `ADD . /allfiles` to the Dockerfile and verifying whether the files that should be ignored are present or not.
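Concretely, that debug step would be a line like this at the end of the Dockerfile (the `/allfiles` target path is arbitrary):

```dockerfile
# Debug step: copy the entire build context into the image.
# If directories listed in .dockerignore show up under /allfiles
# (e.g. via `docker run --rm <image> ls /allfiles`), the ignore
# file was not applied when the context was sent. Remove this
# line after testing.
ADD . /allfiles
```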
We recently fixed some issues with the `docker-py` implementation of `.dockerignore`, but those fixes aren't in a release yet. I think it's possible the bugs we fixed could be causing this issue.
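For background: before calling the build API, docker-py tars up the context itself and is supposed to filter paths against the `.dockerignore` patterns. A very rough approximation of that filtering (not docker-py's actual code; real matching also supports `!` exception patterns) looks like:

```python
import fnmatch

def is_ignored(path, patterns):
    """Rough approximation of .dockerignore matching.

    A path is excluded if it equals a pattern, lives under a
    pattern that names a directory, or matches it as a glob.
    """
    for pat in patterns:
        pat = pat.rstrip("/")
        if path == pat or path.startswith(pat + "/") or fnmatch.fnmatch(path, pat):
            return True
    return False

patterns = ["www_rw", "exim4"]  # as they would appear in .dockerignore
paths = ["Dockerfile", "shell.sh", "www_rw/htdocs/index.html", "exim4/passwd.client"]
kept = [p for p in paths if not is_ignored(p, patterns)]
print(kept)  # ['Dockerfile', 'shell.sh']
```

If a bug makes this filtering step a no-op, every path is kept and the whole directory gets uploaded, which matches the hang described above.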
Yup, the issue indeed goes away instantly if I just delete the subfolders that are covered by the `.dockerignore`.
Still, this is _not_ an issue of me forgetting to put those folders into `.dockerignore`, since `docker build -t somename .` in the same folder runs _very quickly_ (and I did write the `.dockerignore` for exactly that).
On Fedora, I have docker-compose 1.4.0 (Ubuntu: 1.2.0) which suffers from the same problem.
Edit: sorry, the label remark is of course stupid. I apologize for that... I don't know how you use your label system internally. Also, maybe there's something dumb I've done on my side which I don't see right now... is there some sort of switch or condition that makes docker-compose ignore a `.dockerignore`, which I might have used accidentally?
Oh right, I suppose I misread your remark anyway... so you are saying that if I use a recent development version of docker-compose, it might go away? Let me try...
Ok, I just tested it. I can confirm this goes away with docker-compose's latest git master (as of now at commit dabf1e8657674014a5bc89f99edbf2fe0629bb71) / 1.5.0dev.
Thanks for your quick response, I guess I'll just have to wait for the next release or temporarily use the development version then.
Thanks for confirming, @JonasT. I think this issue can be closed?
Yup, thanks.
Sadly, I still seem to observe this on another machine for some reason. `docker-compose build` uploads the entire 17GB context despite a `.dockerignore`, even in the 1.5.0dev version, while `docker build` handles it correctly. :(
This is on Debian 8.2 with kernel 3.16.0 and lxc-docker version 1.7.1, build 786b29d.
docker-compose is git master, HEAD is at dabf1e8657674014a5bc89f99edbf2fe0629bb71
Please provide a copy of your `.dockerignore` (and maybe a `tree -L 3`) to help debug this issue.
`.dockerignore`:

```
exim4
apache_2_conf
plone_rw
www_rw
usr_lib_cgi-bin_mailman
var_lib_mailman
```
`tree -L 2` (with some webhosts redacted):

```
.
├── apache_2_conf
│   ├── apache2.conf
│   ├── apache2.conf.dpkg-old
│   ├── apache2.conf.old
│   ├── conf-available
│   ├── conf.d
│   ├── conf-enabled
│   ├── envvars
│   ├── file
│   ├── magic
│   ├── mods-available
│   ├── mods-enabled
│   ├── ports.conf
│   ├── ports.conf.old
│   ├── sites-available
│   ├── sites-enabled
│   └── ssl
├── docker-compose.yml
├── Dockerfile
├── exim4
│   ├── conf.d
│   ├── exim4.conf.template
│   ├── exim4.conf.template.dpkg-dist
│   ├── passwd.client
│   └── update-exim4.conf.conf
├── plone_authnz.py
├── shell.sh
├── usr_lib_cgi-bin_mailman
│   ├── admin
│   ├── admindb
│   ├── confirm
│   ├── create
│   ├── edithtml
│   ├── listinfo
│   ├── options
│   ├── private
│   ├── rmlist
│   ├── roster
│   └── subscribe
├── var_lib_mailman
│   ├── archives
│   ├── bin -> /usr/lib/mailman/bin
│   ├── cgi-bin -> /usr/lib/cgi-bin/mailman
│   ├── cron -> /usr/lib/mailman/cron
│   ├── data
│   ├── icons -> /usr/share/images/mailman
│   ├── lists
│   ├── locks -> /var/lock/mailman
│   ├── logs -> /var/log/mailman
│   ├── mail -> /usr/lib/mailman/mail
│   ├── Mailman -> /usr/lib/mailman/Mailman
│   ├── messages
│   ├── qfiles
│   ├── scripts -> /usr/lib/mailman/scripts
│   ├── spam
│   └── templates -> /etc/mailman
└── www_rw
    ├── calendar
    ├── drupal_portierung
    ├── etherpad
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── htdocs
    ├── html
    ├── <WEBHOST REDACTED>
    ├── mailman -> /usr/lib/cgi-bin/mailman
    ├── munin -> /var/cache/munin/www/
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    ├── <WEBHOST REDACTED>
    └── webalizer

36 directories, 39 files
```
`docker-compose build` in that directory hangs for many minutes before doing anything (the contents of www_rw exceed 10GB). `docker build -t blubb .` runs in less than 1 second and tells me it transferred less than 1MB as a build context.
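As a side note, the raw (unfiltered) context size can be estimated without involving docker at all, by tarring the directory the way a client that fails to apply `.dockerignore` effectively would. A minimal sketch (the throwaway directory stands in for the real build directory):

```python
import io
import os
import tarfile
import tempfile

def raw_context_size(directory):
    """Size in bytes of a tar archive of `directory`.

    Deliberately applies no .dockerignore filtering, so it
    approximates what a client that ignores the .dockerignore
    file would upload as the build context.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(directory, arcname=".")
    return buf.getbuffer().nbytes

# Throwaway directory as a stand-in for the real build context:
ctx = tempfile.mkdtemp()
with open(os.path.join(ctx, "nginx.conf"), "w") as f:
    f.write("server {}\n")
print(raw_context_size(ctx))  # a few KB for a tiny directory
```

Comparing that number against the "less than 1MB" that `docker build` reports makes the difference between the two clients visible.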
@dnephin is this for the coming release?
I'm not sure what's going on here; it needs more investigation. Most likely an issue with the `.dockerignore` implementation in `docker-py`.
Edit: removing my remark: nvm, I forgot to install the docker-compose dev version on this machine. *facepalm*
Ok, since this sounds like a docker-py issue with `.dockerignore`, let's track this in #1607