Compose: Nginx "No route to host" error

Created on 7 Sep 2015 · 17 comments · Source: docker/compose

Hi! I have a web container where I run Nginx with the upstream set to server app:8080;. app is a container running unicorn. When I bring everything up at the start it all works fine, but if I issue a docker-compose restart app, Nginx stops being able to route to the upstream server with the error:

connect() failed (113: No route to host) while connecting to upstream

I have to also restart the web container for it to start working again. I checked the hosts file but the IP of the app container remains the same after restarting the web container.

Am I missing some detail about how docker-compose works?

My docker-compose.yml:

dbdata:
  image: postgres:9.4.4

db:
  image: postgres:9.4.4
  volumes_from:
    - dbdata
  env_file: .env

app:
  build: .
  links:
    - db
  volumes:
    - .:/app

web:
  image: nginx:1.9
  ports:
    - '3000:80'
    - '3443:443'
  links:
    - app
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf
  volumes_from:
    - app

All 17 comments

You have an app service, but there is no such hostname. The hostname will look something like projectX_app_1, where projectX is the name of the current directory (unless overridden). The last number changes depending on how many app containers you run.

The app hostname is actually in the /etc/hosts file together with all the other ones:

172.17.0.6  app 22cd38d6e1af dockertest_app_1
172.17.0.6  app_1 22cd38d6e1af dockertest_app_1

But I tried using the full dockertest_app_1 anyway in the nginx.conf and the result is the same. Still need to restart the web container. I would expect it to just keep working fine once the app container comes back up.

Is it possible that app is not ready when Nginx is trying to access it?

It is possible if the application takes longer to start.

"What type of application are you trying to run?"

  • Went over the "unicorn part", sorry.

Could you add your nginx.conf here?

It's a Rails app, using unicorn. I wait well past the point where both the container and the app have had time to boot, checking the logs, and then retry a few more times, but it doesn't recover.

nginx.conf:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log  main;

  sendfile    on;
  tcp_nopush  on;
  tcp_nodelay off;

  keepalive_timeout 65;

  gzip on;
  gzip_http_version 1.0;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_disable "MSIE [1-6]\.";
  gzip_types text/plain text/html text/xml text/css
             text/comma-separated-values
             text/javascript application/x-javascript
             application/atom+xml;

  upstream app_server {
    server app:8080;
  }

  server {
      listen      80;
      server_name _;

      root /app/public;

      client_max_body_size 4G;

      try_files /app/public/maintenance.html $uri/index.html $uri.html $uri @app;

      location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;

        proxy_redirect off;

        proxy_pass http://app_server;
      }

      location ~ ^/assets/ { 
        try_files $uri @app;
        expires 1y;
        add_header Cache-Control public;
        add_header ETag "";
      }   

      error_page 500 502 503 504 /500.html;

      location = /500.html {
        root /app/public;
      }
  }
}

Also, here are my Dockerfile and unicorn config, in case they're needed to reproduce this.

Dockerfile:

FROM ruby:2.2

RUN apt-get update -qq && apt-get install -yqq build-essential libpq-dev postgresql-client && rm -rf /var/lib/apt/lists/*
RUN wget -qO- https://deb.nodesource.com/setup_0.12 | bash - 2>&1 > /dev/null && apt-get -yqq install nodejs && rm -rf /var/lib/apt/lists/*

RUN mkdir /app
WORKDIR /app

COPY Gemfile* ./
RUN bundle install

EXPOSE 8080
CMD ["unicorn", "-c", "unicorn.rb"]
worker_processes Integer(ENV["UNICORN_WORKERS"] || 2)
timeout 15
preload_app false

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end

What does docker-compose ps give you; does it say that the container is running or has exited? Also, try docker-compose logs <service> to see if something goes wrong with either Nginx or your actual application.
As far as I can tell, you're starting your containers in daemon mode, so you don't have access to the logs. I would recommend you perform the following steps to see if anything goes wrong while your application is restarting:

  • Start the containers in daemon mode: docker-compose up -d
  • Attach to the logs for both containers: docker-compose logs (in the directory where you have docker-compose.yml)
  • Restart your application and see what the logs tell you.
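
Put together, that sequence would look roughly like this (service names taken from the compose file above):

docker-compose up -d        # start everything in the background
docker-compose ps           # confirm the containers are in the "Up" state
docker-compose logs         # attach to the combined logs
docker-compose restart app  # from another shell: restart the app and watch the output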

Never mind, I figured it out: I thought docker-compose would keep the same IP between runs of the same container, but it changes with each restart, leaving nginx pointed at a stale address. I can fix it by sending a SIGHUP to nginx, which makes it reload the conf file and read the updated IP from the hosts file.
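
For example, either of these should do it (the container name comes from the /etc/hosts output above):

docker exec dockertest_web_1 nginx -s reload   # reload nginx so it re-reads its conf and /etc/hosts
docker-compose kill -s SIGHUP web              # or send the signal through compose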

I suppose there isn't a way to tell docker or compose to keep the same IP when restarting a container?

You can use the hostname that is placed in /etc/hosts instead of using the IP. There is no way to keep a stable IP; that is correct.

That is why I asked you to use the logs and inspect your containers. You can use the "hostname" property in your docker-compose.yml and assign a value to it for your application and in the nginx conf use that hostname instead. Note that in production, if they are going to be on separate boxes, you will still need to have the ip in the configuration.

Best,
Adrian.
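
For illustration, the hostname suggestion would look something like this in the compose file from the question (a sketch only; the value app simply mirrors the name nginx already uses):

app:
  build: .
  hostname: app    # explicit hostname for the container
  links:
    - db
  volumes:
    - .:/app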


I am using the hostname in the nginx conf. But when nginx starts, it loads the conf, resolves the hostname and caches the IP in memory. Later the /etc/hosts file changes, but nginx just keeps using the same cached conf, therefore routing requests to the outdated IP.

It's an unfortunate nginx issue rather than a docker one :wink:

I find it rather unusual. I use a similar setup at work, with Nginx and other application containers, and if I restart any of the containers, Nginx doesn't care. When the container is up, Nginx is able to serve whatever that container exposes. I'll get back with more details tomorrow.
P.S. Do you know about Consul? You also need to check out consul-template.

https://github.com/jwilder/nginx-proxy works for me quite well
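
In case it helps, the usual way to run it (per that project's README; the exact flags may have changed since) is roughly:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

Each proxied container then just needs a VIRTUAL_HOST environment variable, and nginx-proxy regenerates its config automatically when containers start or stop.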

Glad you found something that works.

@opreaadrian Yeah, I find it odd too. If I docker exec into the container and test the hostname with the host command, it resolves fine. It's just nginx that doesn't get it. Maybe there's a config parameter that I'm missing.

I fear Consul or nginx-proxy (@thaJeztah) would put the complexity over the limit that I have in mind for this stack. It's more of a starter-project friendly stack.

nginx only resolves hostnames at startup. You can use variables with proxy_pass to make it go through the resolver for runtime lookups.

See:

It's quite annoying.
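
A minimal sketch of that approach, assuming the containers are on a network where app is resolvable through DNS (for example Docker's embedded DNS at 127.0.0.11 on a user-defined network); note that nginx's resolver does not read /etc/hosts, so this will not help with plain links:

  # Resolve the upstream per request instead of caching the address at startup.
  # 127.0.0.11 is Docker's embedded DNS on user-defined networks; adjust as needed.
  resolver 127.0.0.11 valid=10s;

  location @app {
    # Using a variable forces nginx to re-resolve "app" through the resolver at runtime.
    set $app_upstream http://app:8080;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass $app_upstream;
  }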

