I am trying to run docker-compose --verbose -f docker-compose-prod.yml up -d --build
from my Mac to AWS EC2, but it keeps getting stuck on docker.api.build._set_auth_headers: Sending auth config (u'auths'). I do have a very large file inside the build context. Any thoughts?
Either be patient while your very large file gets archived then uploaded to EC2, or add it to your .dockerignore.
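For example, assuming the large file lives at something like data/model.bin (the names below are made up for illustration), the .dockerignore entries could look like this:

```
# .dockerignore - anything listed here is excluded from the build context,
# so it is never archived or uploaded to the daemon (paths are examples)
data/model.bin
data/raw/
```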
Unfortunately the timeout message (or the sudden end of the task) arrives before my patience runs out. Is there a way to see the traffic, to check whether the data is actually being uploaded? I'm just not sure if it's waiting for a response from EC2 and isn't receiving it.
Unfortunately we don't currently have progress output for context uploads. Do you see any error message or anything when the process exits?
I can't remember exactly; some sort of timeout message occasionally. I've noticed that when I abort the process myself, it cancels with ^CERROR: compose.cli.main.main: Aborting.
so the upload is taking place. It turns out that if I use nettop I can see a docker-compose.3244 connection established and data being uploaded. But it is very slow, only a few KB a second. Any thoughts on how I can diagnose whether it's Docker, AWS, or just my internet connection?
Here is what I mean; I got this error, for example: ERROR: compose.cli.errors.handle_connection_errors: SSL error: ('The write operation timed out',). Interestingly, this happened on a t2.large instance, where the upload speed is very low. If I use a t2.xlarge the speed is much higher, but still only around 3MB every 5 seconds, say. I guess I need to figure out where the bottleneck is...
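For reference, this is roughly how I watched the traffic from the Mac side (just a sketch; the exact flags may vary between macOS versions):

```
# Per-process network throughput; -p filters by process name or pid
# (docker-compose showed up for me as docker-compose.<pid>)
nettop -p docker-compose

# Or simply confirm that a connection to the EC2 host is established
netstat -an | grep ESTABLISHED
```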
Closing the issue - I've found that zipping the file with the model reduced its size from 5GB to 40MB. This allowed me to complete docker-compose as needed. It works perfectly fine now.
For me, it was that image name and container name had uppercase characters...
Making them lowercase solved the problem...
@drastorguev I think I'm running into similar problems; do you mind explaining how you fixed it with the zipping?
Sure, if I remember correctly, zipping just compressed the file so it didn't take so long to transfer the file across using Docker. I think I just used Gzip for zipping the file. Hope this helps!
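Roughly what I did, with a made-up file name (model.bin is just an example):

```
# Compress the model before it ends up in the build context
gzip -9 model.bin      # produces model.bin.gz

# Unpack it again wherever it is actually needed (e.g. in the container)
gunzip model.bin.gz
```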
It's far better to add those files to a repository and add a RUN command to download and install them into your directory at build time. This will prevent hangs on docker-compose build.
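Something along these lines, for example (the URL and path are hypothetical, and it assumes curl is available in the base image):

```
# Dockerfile snippet: fetch the large artifact at build time instead of
# shipping it in the build context, so only small files get uploaded
RUN curl -fSL https://example.com/releases/model.bin.gz -o /app/model.bin.gz \
    && gunzip /app/model.bin.gz
```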
I also had this problem, and it turned out I had a directory with gigabytes of content. After adding the dir to .dockerignore, the problem was solved.
Sorry for continuing this discussion of a (closed) bugtracking issue, but it's very much relevant to my current situation, so if you can bear with me I would really appreciate it if we could continue the discussion just a bit longer:
I'm trying to wrap my head around what docker-compose does here, because the test I've done doesn't make sense to me:

- My .docker/config.json contains credentials for one or more registries.
- My docker-compose.yml file uses a relative context ./
- The folder that docker-compose.yml is located in contains a couple of files and folders, of which a few files are really large.
- Running docker-compose build takes a really long time, and I have concluded since reading this thread that it probably is because of those huge files in my folder structure. I've run docker-compose --verbose build, and the title of this issue fits my situation very well.
But what I don't understand is this part stated in https://docs.docker.com/compose/compose-file/#context: "This directory is also the build context that is sent to the Docker daemon."
I wonder _which_ Docker daemon? I have one locally installed and it's working fine. But am I to understand that the context folder content is sent to _all_ authenticated Docker daemons that are included in .docker/config.json?
And how is it being sent? I note that @shin- states "...your very large file gets archived then uploaded to EC2...", but is it sent encrypted or in clear text?
_Couldn't this be a security concern if the contents of the context folder contains sensitive data?_
I can't determine whether docker-compose build successfully connects to any of the registries. I don't see anything in e.g. netstat that confirms or denies it. I even did a sledgehammer trial of disconnecting the internet from my computer, and nothing appears different; docker-compose --verbose build does its thing and doesn't say that the connection to any of the registries failed or succeeded. The whole procedure seems to be exactly the same whether there's internet access or not.
It's very difficult to determine whether data is being sent over the wire or not; the logs don't state anything about it other than the "Sending auth config ([list of services])" line, which doesn't say much about what's actually happening.
I've been trying to read and search for the rationale behind this behaviour, as I personally didn't expect that, just because I added credentials to a registry, all the data in the context directory would be sent to it. Or am I misunderstanding something very obvious here?
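The closest I've come to answering the "which daemon" question myself is to look at what the client is actually pointed at (this is just how I'd check it, assuming a reasonably recent Docker CLI, not an authoritative answer):

```
# As far as I understand, the build context only goes to the single daemon
# the client is configured to talk to, i.e. whatever these point at:
echo $DOCKER_HOST     # empty normally means the local daemon socket
docker context ls     # on newer CLIs: shows the currently selected endpoint
```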
> For me, it was that image name and container name had uppercase characters... Making them lowercase solved the problem...
Thanks! Mine was similar; I had a "/" at the beginning of the image name (because my env variable defaulted to an empty string).
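Roughly what that looked like, with made-up service and variable names; an empty variable leaves a leading slash, while a default value avoids it:

```
# docker-compose.yml (sketch)
services:
  web:
    # If DOCKER_REGISTRY is unset or empty, this resolves to "/myapp"
    image: "${DOCKER_REGISTRY}/myapp"
    # A default value avoids the leading slash:
    # image: "${DOCKER_REGISTRY:-localhost:5000}/myapp"
```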
> Either be patient while your very large file gets archived then uploaded to EC2, or add it to your .dockerignore.
This was it. Just update your .dockerignore, it is probably some large file getting in the way. Wish there was more logging about that!
> For me, it was that image name and container name had uppercase characters... Making them lowercase solved the problem...
In my case, this was the problem. Is there some sort of initial parser/validator that highlights errors like this? The output of docker-compose --verbose build did not help at all, since it always got stuck at docker.api.build._set_auth_headers: Sending auth config ()
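The closest thing to a pre-flight check that I know of is docker-compose config, though I'm not sure it would have caught the uppercase image name in my case:

```
# Validates and prints the resolved compose file; exits non-zero on errors
docker-compose config

# Only validate, without printing the resolved file
docker-compose config -q
```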