I still haven't figured out docker networking.
The problem:
Servers I start within the container are not reachable from the host. Curl on the host responds with "curl: (52) Empty reply from server". Note that this is different from the behavior when connecting to a port that hasn't been forwarded, which results in "curl: (56) Recv failure: Connection reset by peer". I guess, since the nc tests work, I can actually send from the outside, but the container cannot seem to send back. It really looks like a firewall problem, but ufw is disabled on the host, and in the container it's not even installed.
Any ideas?
Edit: an example.
docker ps shows
127.0.0.1:7000->5984/tcp
on my container. Inside the container I can do:
curl localhost:5984
{"couchdb":"Welcome","uuid":"0248a6778bcfe29b2376b8c137efcb56","version":"1.4.0","vendor":{"version":"1.4.0","name":"The Apache Software Foundation"}}
Everything fine there. But from the host:
root@ubuntu:/home/muellermichel# curl -v http://localhost:7000
* About to connect() to localhost port 7000 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:7000
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection #0
Ok I'm an idiot. It was just a misconfiguration of the server to only listen on localhost. I simply never thought of it before because I always proxied with ssh -L to my servers.
I had exactly the same issue and I was already banging my head. You just gave me the right hint to solve it. Thanks man!
Glad I could help. Funny thing is, I don't even remember anymore what setting I was talking about here, it's a shame I didn't describe it better.
It was good enough for me :)
Thanks! I was getting crazy with the same issue...
For people playing with Go App Engine inside a container, specify the host when launching your server instance:
goapp serve -host=0.0.0.0 myapp/
If you don't specify -host=0.0.0.0, you will not be able to query your server from outside the container, even with a correct port-mapping configuration at the Docker level.
+1. Got me too. :laughing:
@muellermichel @reiz @ameuret @dbenque could you please try to elaborate on the setting you are talking about? Running into the same problem but I can't figure it out.
@ianzapolsky It's basically the same as when you want to make couchdb accessible from outside your host, no matter whether it runs in a container or not: you need to bind its address to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
cheers @muellermichel, thanks a lot. This solved my problem. For anyone stumbling upon this thread in the future, my specific issue was this: I am running a gunicorn server serving a Django app inside a Docker container, exposing it to the host on port 8000 via EXPOSE 8000 in the Dockerfile and the command-line argument -p 127.0.0.1:8000:8000 in the docker run command. Like the guys above, I was curling port 8000 from the host and getting an empty reply from server. The simple fix is to bind the address that gunicorn serves on to 0.0.0.0, so all I had to do was add -b 0.0.0.0:8000 to the gunicorn run command and everything worked beautifully.
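Equivalently, the bind address can live in gunicorn's config file instead of on the command line. A minimal sketch (the file name is the conventional default and the worker count is just an example; gunicorn config files are plain Python):

```python
# gunicorn.conf.py -- load it with: gunicorn -c gunicorn.conf.py myproject.wsgi
# ("myproject.wsgi" is a placeholder for your Django project's WSGI module.)

# Listen on all interfaces so Docker's port mapping can actually reach the server;
# the default of 127.0.0.1 is only reachable from inside the container.
bind = "0.0.0.0:8000"
workers = 2  # example value
```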
@ianzapolsky Thanks for your comment. It helped me realise that the issue with empty reply from server was not the Docker setup but the actual application. In my case I was trying to run the sample Spray app and was getting this error. All I needed to do was change localhost to 0.0.0.0 at this line: https://github.com/spray/spray-template/blob/on_spray-can_1.3_scala-2.11/src/main/scala/com/example/Boot.scala#L20
For anyone who runs into this in the future.
@KirillGrishin Thanks for your valuable comment. My problem is also solved :).
But I didn't get an answer: why do we need to change localhost to 0.0.0.0?
why we need to change localhost to 0.0.0.0?
Basically, that tells the server to listen on all network interfaces, not just localhost connections; otherwise the server is only accessible from within the container (which is the local host for the server running there).
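The difference can be seen with plain sockets, no Docker required. A minimal sketch (the helper name is made up for illustration):

```python
import socket

def reachable(bind_addr, connect_addr):
    """Bind a listening socket to bind_addr, then try to connect via connect_addr."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))   # port 0 = pick a free ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(1)
    try:
        cli.connect((connect_addr, port))
        return True
    except OSError:
        return False
    finally:
        cli.close()
        srv.close()

# A server bound to 127.0.0.1 only accepts connections arriving on the loopback
# interface; one bound to 0.0.0.0 accepts them on every interface, which is what
# Docker's port mapping (which arrives on the container's eth0, not loopback) needs.
print(reachable("127.0.0.1", "127.0.0.1"))  # True
print(reachable("0.0.0.0", "127.0.0.1"))    # True: 0.0.0.0 covers loopback too
```

Inside a container, the mapped port forwards traffic to the container's external interface, so a 127.0.0.1-bound server never sees it.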
Thank You @thaJeztah
That's really helpful!
Encountered the same issue building my own ElasticSearch container. Glad to see I'm not the only one.
thanks, i had the same bind to localhost issue
I love you all. The same issue troubled me for a whole day.
It happened to me too. The problem was two hosts with SSH forwarding to the same port on localhost; I was connecting to the other host, where jupyter was not running. :man_facepalming:
I had the same issue ... even though I had bound the IP to 0.0.0.0:port. Please help me with that.
Had the same issue with a Python Flask app calling app.run on "localhost". Fixed by changing it to "0.0.0.0".
@muellermichel Saved me a headache as well - thank you ! Glad to see we're not alone as idiots. : )
I had this error but it was due to my corporate proxy server. For some reason my VM was using the proxy server to resolve localhost?
Anyway, I ran curl like this to check my container flask app:
curl --noproxy 127.0.0.1 http://127.0.0.1:5000
I also have the same problem. My Node.js server is running in a Docker container and exposes some REST endpoints, but when I make a curl request to them I get the error below:
curl http://127.0.0.1:3001/ping
curl: (52) Empty reply from server
Solution: change localhost to 0.0.0.0 in the line below:
host: process.env.HOST || 'localhost' => host: process.env.HOST || '0.0.0.0'
const application = require('./dist');
module.exports = application;

if (require.main === module) {
  // Run the application
  const config = {
    rest: {
      port: +process.env.PORT || 3000,
      host: process.env.HOST || '0.0.0.0',
      openApiSpec: {
        // useful when used with OASGraph to locate your application
        setServersFromRequest: true,
      },
    },
  };
  application.main(config).catch(err => {
    console.error('Cannot start the application.', err);
    process.exit(1);
  });
}
I'm a bit lucky that I've only been banging my head for an hour or so before I found this. Of course, the app needed to be bound to 0.0.0.0 and not 127.0.0.1. I'd already solved this problem before but the configurations on this project were not done by me. This thread gave me the answer I needed to not go insane. :D
I did the same thing. I'm running a Flask app and all I had to do was add host='0.0.0.0' to the app.run() as a parameter for it to work properly again.
I got the same error; I had to change the configuration file in the project and rebuild the image. Before that, my Node.js project did not have this in its config:
host: process.env.HOST || '0.0.0.0',
port: process.env.PORT || 8600
After adding it my application worked.
Exactly the same situation here lol
Same problem faced and fixed, thanks to @muellermichel. Listening on 0.0.0.0 instead of localhost solved it for me.
Thank you very much!!!!
The same goes for Spring Boot applications: you need to set the property server.address=0.0.0.0.
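For reference, that property typically goes in src/main/resources/application.properties (the port shown is just an example; the YAML equivalent under server.address works too):

```properties
# application.properties
server.address=0.0.0.0
server.port=8080
```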
This helped me today as I was trying to spin up a Go app server in a Docker container. I had to change localhost to 0.0.0.0 so that it listens on all networks. :+1:
So the point is making the Elasticsearch server listen on 0.0.0.0? Did I get that right? If so, I guess I should change the Elasticsearch configuration YAML file and set network.host: 0.0.0.0?
I got to this issue via the title, but my problem wasn't totally related to it.
I had a scenario with two apps, a server and a client API, both using Flask (Python).
My Flask config file wasn't getting the correct port that I set on the terminal:
export APP_PORT=5002; docker-compose up -d
As I'm using docker-compose, I needed to set this variable in the service's environment section:
version: "3"
services:
  web:
    build: "./py-client"
    ports:
      - "5002:${APP_PORT}"
    environment:
      - APP_PORT=${APP_PORT}
  api:
    build: "./py-api"
    ports:
      - "5000:5000"