Moby: docker and ufw serious problems

Created on 18 Mar 2014  ·  137 Comments  ·  Source: moby/moby

I have installed ufw and set it to block all incoming traffic by default (sudo ufw default deny). However, when I run Docker images that map ports to my host machine, those mapped ports are accessible from outside, even though no rules allow access to them.

Please note that on this machine the setting DEFAULT_FORWARD_POLICY="ACCEPT", as described at http://docs.docker.io/en/latest/installation/ubuntulinux/#ufw, has not been enabled; DEFAULT_FORWARD_POLICY="DROP" is still set.

Any ideas what might be causing this?

Output of ufw status:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
5666                       ALLOW IN    95.xx.xx.xx
4949                       ALLOW IN    95.xx.xx.xx
22                         ALLOW IN    Anywhere (v6)
443/tcp                    ALLOW IN    Anywhere (v6)
80/tcp                     ALLOW IN    Anywhere (v6)

Here is the output for my rabbitmq container from docker ps:

cf4028680530        188.xxx.xx.xx:5000/rabbitmq:latest           /bin/sh -c /usr/bin/   5 weeks ago         Up 5 days           0.0.0.0:15672->15672/tcp, 0.0.0.0:5672->5672/tcp   ecstatic_darwin/rabbitmq,focused_torvalds/rabbitmq,rabbitmq,sharp_bohr/rabbitmq,trusting_pike/rabbitm

Nmap test:

nmap -P0 example.com -p 15672

Starting Nmap 5.21 ( http://nmap.org ) at 2014-03-18 11:27 CET
Nmap scan report for example.com (188.xxx.xxx.xxx)
Host is up (0.048s latency).
PORT      STATE SERVICE
15672/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

General info:

  • Ubuntu 12.04 server
$ uname -a
Linux production 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client version: 0.9.0
Go version (client): go1.2.1
Git commit (client): 2b3fdf2
Server version: 0.9.0
Git commit (server): 2b3fdf2
Go version (server): go1.2.1
Last stable version: 0.9.0

$ docker info
Containers: 12
Images: 315
Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 339
WARNING: No swap limit support

Most helpful comment

Guys, this is a serious security issue. Why is there no hint in the documentation for it? Only by accident I found out, that my MySQL Port is wide open to the world. I absolutely didn't expect that as I've used ufw before and it was reliable enough to not spend another thought on it. So I trusted the advice to change the forward policy to ACCEPT. I would never have expected that it basically completely suspends ufw.

All 137 comments

UFW only sets rules in the filter table. Docker traffic is diverted earlier and goes through the nat table, so UFW is effectively useless in this case; if you want to drop the traffic for a container, you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif
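To make this concrete, you can inspect the nat table that Docker populates and which UFW never consults (a quick check; the chain names assume a default Docker setup, and root is required):

```shell
# DNAT rules Docker adds for published ports; these are evaluated in
# PREROUTING, before the filter-table rules that UFW manages.
sudo iptables -t nat -L DOCKER -n -v

# For comparison, UFW's own rules live in the filter table:
sudo iptables -L -n -v
```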

@Soulou would you recommend adding to mangle or nat?

_edited previous comment after some research_

In my case I wanted to only allow a specific IP to connect to the exposed port. I've managed to do this with this rule.

It drops all connections to port <Port> if the source IP is not <RemoteIP>. If you want to completely block all connections, simply remove the ! -s <RemoteIP> bit.

iptables -I PREROUTING 1 -t mangle ! -s <RemoteIP> -p tcp --dport <Port> -j DROP

@honi In Docker 1.5 (maybe 1.4?) there were several iptables changes. Can you verify if this is still a problem with 1.5?

@cpuguy83 I can confirm that this is still a problem with Docker 1.6

Adding --iptables=false to DOCKER_OPTS enables the expected behavior.

+1, still a problem with Docker 1.6

ping @mavenugo @mrjana

So what is the story here? With Docker version 1.7.0, build 0baf609 on Ubuntu 14 this is still completely broken. Also, the installation instructions at https://docs.docker.com/installation/ubuntulinux/ have a section "Enable UFW forwarding" which appears to be unnecessary. Anyone installing Docker on an Ubuntu box exposes any forwarded ports from their containers to the outside world, and even worse, looking at the ufw rules gives no hint that this is occurring, which is, needless to say, pretty bad.

Also with Docker 1.7 here. My experience is that Docker+UFW can facilitate two scenarios.

The first scenario and default behavior indeed exposes all mapped ports to the outside world; UFW cannot filter access to the containers.

Alternatively when setting the --iptables=false option, filtering incoming traffic with UFW works as expected. However, doing this stops the containers from making outbound connections to the outside world. Inter container communication still works. If you don't need outbound connectivity, then UFW together with --iptables=false seems to be a viable solution.

In my opinion a sensible default behavior for docker would be how it behaves currently with --iptables=false and allow outbound connections from the containers (or possibly make this easily configurable via a config option).

I don't have a problem getting out. Did you try:

ufw allow in on docker0

@newhook ufw allow in on docker0 doesn't work for me. Even with ufw disabled I can't get out with --iptables=false.

I have been experimenting with this a few hours now. I think I got it figured out.

... the installation instructions on https://docs.docker.com/installation/ubuntulinux/ have a section "Enable UFW forwarding" which appears to be unnecessary.

The FORWARD chain _does_ need its policy set to ACCEPT if you have --iptables=false. It only appears this is not needed because the Docker installation package auto-starts Docker and adds iptables rules to the FORWARD chain. When you afterwards add --iptables=false to your config and restart Docker, _those rules are still there_. After the next reboot these rules will be gone, and your containers won't be able to communicate unless you have the FORWARD chain policy set to ACCEPT.

What you need for a setup that allows filtering with UFW, inter container networking and outbound connectivity is

  • start docker with --iptables=false
  • FORWARD chain policy set to ACCEPT
  • add the following NAT rule:
    iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
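As a consolidated sketch of the three steps above (the 172.17.0.0/16 subnet and /etc/default/docker path assume a default bridge setup on Ubuntu):

```shell
# 1. Disable Docker's iptables management, e.g. in /etc/default/docker:
#      DOCKER_OPTS="--iptables=false"
#    then restart the docker service.

# 2. Set the FORWARD chain policy to ACCEPT
sudo iptables -P FORWARD ACCEPT

# 3. Re-add the outbound NAT rule Docker would otherwise create itself
sudo iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
```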

You are indeed correct! After a reboot communication is gone. Those rules seem to sort everything out. Thanks very much!

start docker with --iptables=false
FORWARD chain policy set to ACCEPT
add the following NAT rule:
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE

With this setup it's no longer possible to access exposed ports from within a container:

  • Container 1: exposed port 12345
  • login to Container 2:
    telnet 172.17.42.1 12345 does not work anymore

@dakky I can't reproduce your issue. I have no issues with inter container communication. I suggest making sure you expose the port in your Dockerfile. Also try flushing your iptables rules and delete all user-defined chains before configuring and enabling UFW.

In any case it would be good if someone from the Docker team can verify that the configuration I propose makes sense.

Hi, any update on this? I can't find any official source on how to fix this.
Currently I have a simple setup like:

/etc/default/ufw: DEFAULT_FORWARD_POLICY="ACCEPT"
/etc/default/docker: DOCKER_OPTS="--iptables=false"

ufw enable
ufw allow 22/tcp
ufw deny 80/tcp
ufw reload

host# docker run -it --rm -p 80:8000 ubuntu bash
container# apt-get update
container# python3 -m http.server

1. I can reach the Internet from the container
2. The Internet can reach the container via public-address:80

Am I missing something here? Thanks!

I had managed to fix this through iptables mangle. The first 2 lines are optional if you want to allow access to some ports on eth1 (private network, if it exists).

sudo iptables -t mangle -A FORWARD -i eth1 -o docker0 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i docker0 -o eth1 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i docker0 -o eth0 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i eth0 -o docker0 -j ACCEPT -m state --state ESTABLISHED,RELATED
sudo iptables -t mangle -A FORWARD -i eth0 -o docker0 -j DROP

This is still a problem. Is there a clear fix available? I don't expect a built-in solution. But maybe some iptables or nat rules? I don't feel like testing all possible solutions in this issue now just to brick my system :smile:

Guys, this is a serious security issue. Why is there no hint in the documentation for it? Only by accident I found out, that my MySQL Port is wide open to the world. I absolutely didn't expect that as I've used ufw before and it was reliable enough to not spend another thought on it. So I trusted the advice to change the forward policy to ACCEPT. I would never have expected that it basically completely suspends ufw.

For the record, the solution from @VascoVisser worked for me with docker V1.10. Here are the files I had to change:

  • Set DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw
  • Set DOCKER_OPTS="--iptables=false" in /etc/default/docker
  • Add the following block with my custom bridge's ip range to the top of /etc/ufw/before.rules:

```
# nat Table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic from eth1 through eth0.
-A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

# don't delete the 'COMMIT' line or these nat table rules won't be processed
COMMIT
```

Note: I'm using a custom network for my docker containers, so you may have to change the 192.168.0.0 above to match your network range. The default is 172.17.0.0/16 as in Vasco's comment above.
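If you are unsure which subnet a given Docker network uses, you can query it directly (a sketch; `docker network inspect` with `--format` assumes a reasonably recent Docker client):

```shell
# Print the subnet(s) of the default bridge network
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'

# Substitute the name of a custom network for "bridge" as needed
```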

UPDATE: On Ubuntu 16.04 things are different, because docker is started by systemd, so /etc/default/docker is ignored. The solution described here creates the file /etc/systemd/system/docker.service.d/noiptables.conf with this content

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --iptables=false

and issue systemctl daemon-reload afterwards.
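Spelled out as commands, that systemd override might be created like this (a sketch; the ExecStart line must match the daemon invocation of your installed Docker version):

```shell
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/noiptables.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --iptables=false
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```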

@mikehaertl I want to mention that this is not really an issue just with UFW, as it is just another layer over iptables. This is a general problem in my opinion.

@lenovouser Thing is, that the documentation has some recommendation which sounds like "do this and everything is fine with ufw". But that's definitely not the case, so there should be big warning signs there.

@mikehaertl Yep, definitely agree. I still don't really know how to use UFW with Docker properly. (After 5+ Months of going through / opening issues etc. Will try your solution later this day though :+1:.

@mikehaertl Downside of your solution is that e.g. an nginx-container is only logging the docker0 address in access.log...

To get usable access-logging with real world IP addresses, I use jwilder/nginx-proxy in front of nginx with --net=host and X-Forwarded-For but this still feels like a slightly insecure workaround to me.

See this thread in the official Docker forums: "Running multiple docker containers with UFW and --iptables=false".

@hbokh Are you sure that this is related to the change above? I can't see how it would modify the IP address. It only adds a MASQUERADE for connections initiated _from_ a docker container. Anything else is untouched. So if a wrong IP address is logged, it should also happen without the modification.

@mikehaertl It must be, as far as I can see. I'm far from a ufw or iptables guy, so bear with me.
This is the access log when running nginx with the above 3 options enabled and ufw enabled (default deny, with some rules active to allow incoming traffic from specific addresses; never mind the 500 error, since I skipped the fpm container):

172.17.0.1 - - [15/Apr/2016:11:18:52 +0200] "GET / HTTP/1.1" 500 186 "-" "HTTPie/0.9.3"
172.17.0.1 - - [15/Apr/2016:11:18:54 +0200] "GET / HTTP/1.1" 500 186 "-" "HTTPie/0.9.3"

This is WITHOUT the three options and ufw completely disabled:

83.163.x.y - - [15/Apr/2016:11:26:18 +0200] "GET / HTTP/1.1" 500 186 "-" "HTTPie/0.9.3"
83.163.x.y - - [15/Apr/2016:11:26:19 +0200] "GET / HTTP/1.1" 500 186 "-" "HTTPie/0.9.3"

FYI Docker has added these lines to the top of the iptables chain in the last situation:

# Generated by iptables-save v1.4.21 on Fri Apr 15 11:33:02 2016
*nat
:PREROUTING ACCEPT [5:280]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
[ --- snip ---]

But what then stops working in Docker if I disable its iptables management with DOCKER_OPTS="--iptables=false"? Does anyone know?

I see no real solution out there currently. In my eyes Docker should respect the iptables rules that have been set manually (through iptables directly, through ufw, or otherwise); however, I think this is not doable.

So from my point of view it is currently just important to keep in mind that once you publish a port with Docker on 0.0.0.0, it will be available to the outside world, no matter what. If you decide not to publish the port through Docker, or if you run an application natively on the host on that exact port without Docker, then all is fine and the firewall is still in place.

@icecrime I don't think that this is limited to Version 0.9. It's also still happening in latest 1.11.

^ I can confirm that; I actually stumbled upon this thread yesterday while setting up a new server :p Vasco's solution does the job tho :+1:

I'm inclined to say that only a doc change is needed.
Everything Docker does is isolated to two chains ("DOCKER" and "DOCKER-ISOLATION"), which Docker itself creates.

Meanwhile, UFW thinks it is the single source of truth and does not actually read iptables for existing rules.

If you want to make sure ports aren't forwarded to an external interface then you need to make sure you specify which interface to bind to when creating the forwards, or you can even set the default interface on the daemon, which defaults to "0.0.0.0".

Alternatively, you can disable iptables support in Docker altogether.
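For example, binding a published port to loopback keeps it off external interfaces; the addresses below are illustrative:

```shell
# Publish a container port on loopback only
docker run -d -p 127.0.0.1:3306:3306 mysql

# Or change the daemon-wide default binding IP for published ports,
# e.g. in /etc/docker/daemon.json:
#   { "ip": "127.0.0.1" }
```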

@cpuguy83 ufw is the standard firewall on Ubuntu which is probably used on tens of thousands of machines. Many developers will simply trust ufw. They will not expect that by default docker messes up their firewall and basically completely bypasses it.

Maybe a doc change is enough. But on the other hand, there's nothing worse than a hacked server due to an open firewall, e.g. on MySQL port 3306. If Docker takes potential security issues seriously, it should be forward-looking. Thus maybe --iptables=false should really become the default.

As a sysadmin I fully agree with @mikehaertl.
What I (we?) also need is a way to block specific misbehaving IP addresses from ports opened up by Docker. That used to be easy with UFW and without Docker, but with Docker it is not.
Workarounds like setting up a HAProxy-container in front or even "Docker Firewall Framework" (https://github.com/irsl/dfwfw) should not be necessary on default installations IMHO.

@hbokh Can you explain where the trouble lies? I should think that a rule inserted before the "DOCKER" chain will achieve this.

@cpuguy83 No, it seems not to, as far as I can judge.
Look at this post on the official Docker-forums where I have posted a situation where I only want to allow specific IP-addresses to connect to Docker opened ports:
https://forums.docker.com/t/running-multiple-docker-containers-with-ufw-and-iptables-false/8953/11
The situation is a little bit different than blocking misbehaving addresses, but nevertheless UFW-rules always seems to come in last in the chains.

The Docker Ubuntu documentation https://docs.docker.com/engine/installation/linux/ubuntulinux/ says:

Also, UFW’s default set of rules denies all incoming traffic. If you want to reach your containers from another host allow incoming connections on the Docker port.

The documentation should be changed here, at least.

For anybody who's still struggling, I just deployed UFW + Docker 1.11.2 + Ubuntu 14.04 LTS on Digital Ocean using the --iptables=false docker option. It works out of the box without any extra configuration.

I was unknowingly exposing a MySQL 3306 port to the internet and luckily discovered it right now. I've also verified outbound connections and they seem to work just fine.

@activatedgeek Are you sure that your containers now have access to the internet? I don't think so, as you're missing the masquerading setup described above.

@mikehaertl I installed curl on one of my containers and did a curl google.com. Is that enough?

@activatedgeek Hmm, sounds good. Still surprised to see this working for you. It didn't for me.

@mikehaertl I'm surprised too. I just gave it a try and it worked. I'll be watching it over the next few days and report back. Just to be sure, can you try an install from scratch with the versions I have reported above?

All I did was update my /etc/default/docker to have DOCKER_OPTS="--iptables=false" and setup basic UFW rules. Nothing complicated.

@mikehaertl Even a ping should be fine right? That confirms DNS resolution (if using a DNS name) + outbound packets.

Hey @mikehaertl, I have been looking over this issue just to be sure and not to misguide others because I was using a pre-built docker image on Digital Ocean. It turns out Docker does its manipulations in the nat table and before I had modified the defaults to --iptables=false, it had already made the nat rule changes in iptables. They should go away on next reboot. That was the reason for Docker container networking to work without any updates.

That sounds about right. docker + ufw == bad news for unprotected servers. If you use AWS or similar you get protection from the infrastructure. If you use it on something like OVH you best know what you are doing.

@newhook I'm on Digital Ocean, no security groups. Hence, I'm figuring out the nitty-gritty details by using a fresh Vagrant box. Right now I'm hanging in mid-air (the system somehow works fine, though the Nginx proxy is reporting IP addresses of the local docker0 interface). I'll report back.

Is this problem solved by some other firewall? I have a feeling that all iptables based firewalls will face the same problem. UFW is not an exception.

@activatedgeek Could not test it yet, but I'm pretty sure there's no difference here. Are you sure you completely destroyed and re-created your container when making the --iptables=false change? I think the firewall rules are persisted per container, even if the daemon option was changed and the daemon was restarted (not 100% sure about this, though).

@mikehaertl I'm in the middle of doing a full fresh local VM box test. I'll report back. I hope to cause as few side effects as possible during this process.

@mikehaertl Here are my test results (note that I did not set up UFW, as our main challenge is figuring out where the iptables rules go wrong):

This is the script I have used to test things:

#!/usr/bin/env bash

# this command when --iptables=false
# iptables -t nat -A POSTROUTING ! -o docker0 -s 172.19.0.0/16 -j MASQUERADE

# start container
docker network create --subnet=172.19.0.0/16 nginx-net
docker run -d -p 2000:80 --name nginx1 --net=nginx-net nginx:stable-alpine
docker run -d -p 3000:80 --name nginx2 --net=nginx-net nginx:stable-alpine

echo

# check external connectivity
docker exec nginx1 ping -c 2 google.com
echo
docker exec nginx2 ping -c 2 google.com
echo

# check cross-container connectivity
docker exec nginx1 ping -c 2 nginx2
echo
docker exec nginx2 ping -c 2 nginx1
echo

# cleanup
docker rm -f nginx1 nginx2
docker network rm nginx-net

Output for --iptables=true (default)

5a2d89aea6668b0eed44cca201c09e223380867f99067d1f9be6cb25bcd0cc74
a534949a4ef89f1acc2f84563d7a415e3f4f5866b5f8b83447020cde39f86311

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.570 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.707 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.570/25.638/25.707 ms

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.593 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.685 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.593/25.639/25.685 ms

PING nginx2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.086 ms

--- nginx2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.078/0.086 ms

PING nginx1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.087 ms

--- nginx1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.103/0.119 ms

nginx1
nginx2

#### Nginx Access Log Entry
172.16.1.1 - - [14/Jul/2016:13:44:31 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.6 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.6" "-"

Output for --iptables=false (after masquerading)

4bbf64ff3c942d603276eb5c17530e5b76a0569e7e43903bfb76241d129c7745
7f7bb78e29039b547f47e01307f66c689c0b2530ad48034b114103e458993b76

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.761 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.336 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.336/25.548/25.761 ms

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=26.078 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=27.104 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 26.078/26.591/27.104 ms

PING nginx2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.063 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.072 ms

--- nginx2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.067/0.072 ms

PING nginx1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.045 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.092 ms

--- nginx1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.045/0.068/0.092 ms

nginx1
nginx2

#### Nginx Access Log Entry
172.19.0.1 - - [14/Jul/2016:13:37:48 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.6 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.6" "-"

So, I guess we do have a working solution of "--iptables=false + masquerading". But this causes trouble with the nginx proxy (as reported by @hbokh as well), which no longer reports the correct client IP address but instead reports the gateway IP address of the bridge interface (nginx-net in the above example). Can somebody help solve the final puzzle in the iptables configuration?

We solve this by letting docker manage the iptables rules and creating a pre-docker chain which manages what we want to allow in/out of the containers.

@newhook Could you please elaborate? What do you mean by a pre-docker chain? Do you use UFW or have your own iptables management routines?

We let docker manage the iptables containers rules in the DOCKER chain. We create a new iptables chain called pre-docker at the head of the FORWARD chain which has the following:

-A pre-docker -i docker0 -o docker0 -j ACCEPT
-A pre-docker -i docker0 ! -o docker0 -j ACCEPT
-A pre-docker -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A pre-docker -m limit --limit 5/min -j LOG --log-prefix "pre-docker denied: " --log-level 7
-A pre-docker -j DROP

Before the limit rule come rules for any hosts we want to give access to the containers (internal to our network).

-A pre-docker -s internal-ip-address-goes-here/32 -j ACCEPT

We then programmatically add firewall rules to the top of the pre-docker chain for anything that should be exposed to the outside.

For example:

-A pre-docker -d 172.17.1.242/32 -p tcp -m tcp --dport 25 -j ACCEPT

All this is done _after_ docker is started as the pre-docker chain must be before the DOCKER chain.
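Wiring this up could look roughly like the following (a sketch based on the rules above; unlike the unconditional jump in the description, the jumps here are restricted to docker0 traffic so the final DROP cannot affect unrelated forwarding):

```shell
# Create the chain and jump to it from the head of FORWARD,
# before the DOCKER chain (run after the Docker daemon has started)
iptables -N pre-docker
iptables -I FORWARD 1 -o docker0 -j pre-docker
iptables -I FORWARD 2 -i docker0 -j pre-docker

# Baseline rules from the comment above
iptables -A pre-docker -i docker0 -o docker0 -j ACCEPT
iptables -A pre-docker -i docker0 ! -o docker0 -j ACCEPT
iptables -A pre-docker -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A pre-docker -m limit --limit 5/min -j LOG --log-prefix "pre-docker denied: " --log-level 7
iptables -A pre-docker -j DROP
```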

I added this to the top of /etc/ufw/before.rules (ufw was started after the docker service), and my iptables got messed up. I'll probably need to understand more. Any pointers here @newhook?

It's hard to offer advice when I'm not really sure what you've done or the state of the iptables after you adjusted the before.rules.

@newhook I haven't really made any changes to the default before.rules, just enabled iptables (--iptables=true) and added your rules. Which would mean the default DOCKER chain with its entries plus default UFW chain entries. The only change being I added your rules to the top of before.rules in the *filter section.

To help further:

  • Paste the output of iptables-save
  • What do you want to occur? (ie: contact port 80 in container XYZ) from host ABC (or all hosts).
  • What occurs?

Currently I'm testing another workaround, but I'm not sure if this is an appropriate way. Maybe someone can confirm/complain. It's not directly related to ufw (we write our own iptables commands with Jinja), but it could be extended to work with it as well.

So far we had the chain input_ext, which had a DROP at the very end and, in front of it, some rules targeting ACCEPT for specific source/port combinations. There we also had restrictions for the containers (which don't work, for obvious reasons):

-N input_ext
-A input_ext -s internal_network/range -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED -m tcp --dport container_port -j ACCEPT
-A input_ext -j DROP
-A INPUT -j input_ext

(this is just a summary)
The idea is to target the DOCKER chain instead. In the FORWARD chain, one has to target the input_ext chain before the Docker entries.

Now, Docker appends its rules to the DOCKER chain; that's fine. But if there are no Docker rules in the FORWARD chain, it will prepend them there (so that the forwarding to input_ext has no effect). So far our strategy was to flush all iptables entries, then write our own rules, and then restart the Docker daemon to write the container-specific rules. I wanted a solution that works with this strategy and also without too many changes in the input_ext chain.

Now the workaround:

  • flush iptables and build/fill the usual chains :arrow_right: all Docker stuff is deleted
  • write the Docker entries into the FORWARD chain yourself, so that the Docker daemon does not touch it again:

```
iptables -N DOCKER
iptables -A FORWARD -o docker0 -j DOCKER
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT

```

  • now, at the top of FORWARD, jump to input_ext
    iptables -I FORWARD 1 -o docker0 -j input_ext
  • in input_ext, instead of ACCEPT, forward a docker port to the DOCKER chain again
    iptables -I input_ext 4 -s ip_range -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED -m tcp --dport container_port -j DOCKER
  • restart the docker daemon :arrow_right: this appends all Docker-specific rules to the DOCKER chain.

resulting iptables:

...
Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   211 input_ext  all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    2   136 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.3           tcp dpt:container_port

Chain input_ext (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER     tcp  --  *      *       ip_range       0.0.0.0/0            ctstate NEW,RELATED,ESTABLISHED tcp dpt:container_port
 ...
    2   211 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0           

So far it has passed my tests. I'm not sure if this breaks anything else, or if the Docker daemon will prepend/rewrite things in the FORWARD chain in some cases?

The simplest solution should be:

ufw default allow routed

I can confirm that this is still an issue with docker 1.12.1 and UFW on Ubuntu 16.04

There is another iptables rule that is important in some cases:
iptables -t nat -A PREROUTING ! -i docker0 -p udp --dport 3478 -j DNAT --to-destination 172.17.0.7:3478
(-p udp/tcp depending on the port type; --dport and the IP are variable, check with docker inspect)

If you don't have this rule, containers will see the docker0 bridge IP as the source of incoming requests rather than the client's real IP. My example causes a STUN server to return the proper client IP instead of 172...

This is still an issue with docker 1.12.6 and UFW on Ubuntu 16.04.1 LTS

Another workaround:

  1. Standard first step. Add { "iptables": false } to /etc/docker/daemon.json and restart the docker service: sudo service docker restart
  2. Allow all Docker networks. By default I am allowing the docker0 network: ufw allow in on docker0
  3. Check bridge networks with ifconfig | grep br- and add all of them: ufw allow in on br-12341234

Could somebody test this workaround?
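As a consolidated sketch of the three steps (the br-12341234 name is an example; list the bridges on your own host first):

```shell
# 1. Disable Docker's iptables management and restart the daemon
echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json
sudo service docker restart

# 2. Allow traffic in on the default bridge
sudo ufw allow in on docker0

# 3. Find custom bridge networks and allow each one
ifconfig | grep br-
sudo ufw allow in on br-12341234
```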

Hi Folks,

here is my solution/workaround. Most of the stuff is already written in the comments above.

I've tested this approach with Ubuntu 16.04 & Docker 1.13.1.

  1. Before installing Docker, create daemon.json in /etc/docker/ containing { "iptables": false }. If you do this after installing Docker, there will already be iptables rules created by Docker during its first startup at the end of the installation process

  2. Change UFW default forward policy to ACCEPT in the file /etc/default/ufw

  3. Add these after rules in /etc/ufw/after.rules:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT

and

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker_gwbridge -s 172.18.0.0/16 -j MASQUERADE
COMMIT

The 2nd entry is to allow containers attached to an overlay-network access to the internet.
Edit/Update: the initial comment was missing a ! - see the comment from @lsapan

  4. Set further forwarding configuration for UFW in /etc/ufw/sysctl.conf
          net/ipv4/ip_forward=1
          net/ipv6/conf/default/forwarding=1
          net/ipv6/conf/all/forwarding=1
  5. Disable all incoming traffic and allow only the IPs (or ranges) you trust. Especially if you plan to run swarm mode, all involved nodes should be added to the allow list.

At the end, my UFW status looks like this:

# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx            
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx            
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx           
22/tcp                     LIMIT IN    Anywhere                  
22/tcp (v6)                LIMIT IN    Anywhere (v6) 
  6. Deploy services without the routing mesh!
    As already stated, it looks like swarm mode itself ignores iptables=false.
    When you start a service using --publish <port:port>, the published ports again end up accessible from all around the world. To avoid this, you have to use the _mode_ format in publish, e.g.:
docker service create \
--name myweb \
--replicas 5 \
--network testnet \
--publish mode=host,target=80,published=80 \
nginx

In this case, the port is published only on the node (host) and no iptables entries are created by swarm.

Now, you can add a UFW rule to allow some IPs to access these hosts on port 80.
You can think about using dedicated "frontend" nodes which run a proxy server to reach services behind them that have no ports exposed/published.

Unfortunately, compose file format 3.0/3.1 doesn't support this extended publish format, but there is already a solution in sight: https://github.com/docker/docker/pull/30476 and several other PRs

So, I hope this helps.
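For what it's worth, the daemon.json, forward-policy and sysctl steps above can be scripted. A sketch under the same assumptions; the root prefix parameter is mine, only there so the function can be dry-run against a scratch directory instead of /:

```shell
# Sketch: write daemon.json, flip the UFW forward policy, enable forwarding.
# $1 is an optional path prefix ("" on a real host, a temp dir for testing).
configure_docker_ufw() {
    root="${1:-}"
    # Disable Docker's iptables manipulation (do this BEFORE first startup)
    mkdir -p "$root/etc/docker"
    printf '{ "iptables": false }\n' > "$root/etc/docker/daemon.json"
    # Switch the UFW default forward policy to ACCEPT
    sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' \
        "$root/etc/default/ufw"
    # Enable forwarding in UFW's sysctl file
    {
        echo 'net/ipv4/ip_forward=1'
        echo 'net/ipv6/conf/default/forwarding=1'
        echo 'net/ipv6/conf/all/forwarding=1'
    } >> "$root/etc/ufw/sysctl.conf"
}
```

On a real host you would run `configure_docker_ufw` as root with no argument, then add the after.rules entries by hand and reload ufw.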

Well, this was a lifesaver! I did need to make one change to @hcguersoy's steps above, but I'm using Docker 17.03.0-ce. Keep in mind I'm no UFW/iptables expert, but even with the above steps my containers could not connect to the outside world. I changed:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 -o docker0 -j MASQUERADE
COMMIT

to:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT

I saw the ! used in @VascoVisser's original masquerade comment. After that change it started working.

@lsapan Hey Luke, you're absolutely right, there is a missing ! - don't know how this happened, I may have lost it during copy'n'paste.
Thanks for pointing this out!

There are some great suggestions and solutions above!

That said, I've opted for a simpler way to get around the Docker/UFW issue. I didn't want to create snowflakes in configuration between hosts or have to maintain some form of configuration re: iptables.

I simply created a Docker overlay network, exposed my Docker containers to listen on the internal overlay network with no publicly exposed ports and proxied via NGINX container.

I know this doesn't work for everyone, but for my (simple) requirements this was a simple solution to a simple requirement.

So here's what I'm thinking:

Solution 1:
We can add a configuration to the daemon like --iptables-insert-after=<some chain>

Solution 2:
Have a dedicated chain (e.g. docker-user, or docker-pre) where we always insert after.

Solution 3:
In addition to solution 2, include whatever the chain names are that ufw uses for its early-on chains (can't remember what they are off the top of my head).

@o6uoq just want to confirm that you used plain old docker containers and not docker swarm mode services?

@cpuguy83 , as long as it is possible to satisfy the requirements of being able to block incoming traffic.
It would be nice to have all the nodes in a docker swarm mode being able to be exposed to the internet and take incoming traffic on specifically allowed ports.

@patran confirming that I used plain old Docker containers, nothing Swarm related used.

17.06 supports a DOCKER-USER chain where you can insert your own rules.

Does anyone know if it's possible to somehow use the new DOCKER-USER (and not having to set the --iptables=false launch option) to get it to play nicely with ufw (i.e. the rules set up in ufw are respected and exposing a port with docker doesn't mean it'll bypass ufw)?

@alltheoptions If I've understood the issue correctly, it's caused by Docker inserting iptables rules before UFW's rules. I don't see how that can be fixed by UFW.

I'd love to come up with a solution that does as @heyman suggests: Any way to use the new iptables DOCKER-USER chain somehow to pass control over to ufw before jumping back into the docker-created rules?

BTW just in case it wasn't obvious, this is _still_ happening with the 17.06-ce version. Any new updates or clear solutions?

FYI I have added this bug to the bug tracker for ufw: https://bugs.launchpad.net/ufw/+bug/1717648 - Maybe the ufw folks can come up with a fix from their side.

You would inject a jump rule into the DOCKER-USER chain that would jump to the appropriate ufw chain. I'm assuming ufw-user-input.
Untested, but something like:

iptables -I DOCKER-USER 1 -j ufw-user-input

This would insert a rule into DOCKER-USER at the first position that would jump to the ufw-user-input chain.

Maybe ufw-user-input is not perfect because it'll likely skip other ufw chains.
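Note that the suggested command is not idempotent: running it again stacks a duplicate rule. A sketch of a guard using `iptables -C` (the function name and its command parameter are mine; the parameter only exists so the logic can be exercised with a stub instead of the real iptables):

```shell
# Insert the DOCKER-USER -> ufw-user-input jump only if it isn't there yet.
# $1 lets a stub stand in for iptables when testing the logic without root.
ensure_jump() {
    ipt="${1:-iptables}"
    # -C checks for an exact matching rule; it fails if the rule is absent
    "$ipt" -C DOCKER-USER -j ufw-user-input 2>/dev/null \
        || "$ipt" -I DOCKER-USER 1 -j ufw-user-input
}

# Usage (for real, as root):
#   ensure_jump
```

This makes it safe to call from a boot script or a deploy hook without piling up duplicate jumps.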

So, I've done what everyone has suggested, but it seems that when I use DEFAULT_FORWARD_POLICY="ACCEPT", all ports on the host are open, no matter what I do.

My rules:

root@baremetal1:~# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
2375/tcp                   ALLOW IN    Anywhere
22/tcp                     ALLOW IN    Anywhere
9000                       DENY IN     Anywhere
9000 (v6)                  DENY IN     Anywhere (v6)

With or without the 9000 rules, I'm still able to hit a container published on port 9000 on the host.

It actually appears that DNS resolution ends up failing if I set the default forward policy to DROP.

TL;DR: I've added the masquerade settings, I've set the forward policy to accept, I've set the default incoming to deny and outgoing to allow, enabled UFW, denied on 9000, but I can still hit a container on port 9000.

@aequasi You'll have to explain what exactly you did.

@aequasi You didn't mention that you set --iptables=false. This is an important part because otherwise docker will always override ufw rules. It could be necessary to reboot your machine to really get rid of all stale iptables rules.

otherwise docker will always override ufw rules

Not if you put the rules in the DOCKER-USER chain.

Sorry, forgot to mention that. I did add that to my dockerd args. The machine has been restarted a couple times.

I should mention that I'm running on Rancher, which may be doing stuff to the IPTables as well...?

After investigating some more, turns out Rancher is indeed adding stuff to iptables as well. Fairly annoying.

Not if you put the rules in the DOCKER-USER chain.

@cpuguy83 You don't have this option with ufw. It sets up its own set of chains which is kind of the main point. If you need to fiddle with chains manually again then ufw doesn't make much sense - but that's what this issue is all about (ufw and docker's iptables option and why they are no friends).

@mikehaertl I understand that, but as mentioned above you can make jump rules to the ufw chains.

Oh, I see now. Missed that. If I'm not completely missing the point you'd have to manually add those rules, right? So they could be put into /etc/ufw/before.rules to let ufw add them.

I followed everything mentioned in this comment - now my docker ports are secured by UFW as I want. But am I the only one whose docker containers are now unable to connect to the internet, or did I miss a step mentioned somewhere else here?

@jgonsior The critical part is point 3. Did you add the ! ?

Maybe some stuff has changed in Docker and / or Ubuntu, the original comment was based on Docker 1.13.1 and Ubuntu 16.04.

@jgonsior Running into the same issue. It worked before (I have a live setup where it works), but with a fresh setup it's no longer working.

OK, found out that ufw doesn't like where the rules are going.

I've moved the rules to /etc/ufw/before.rules, placed before the *filter section:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
-A POSTROUTING ! -o docker_gwbridge -s 172.18.0.0/16 -j MASQUERADE
COMMIT

@hcguersoy Everything works ok for me but when I create a custom network, containers cannot connect to internet.

$ docker network create foo
$ docker run --network=foo -t -i alpine ping -c 1 github.com
ping: bad address 'github.com'

I simply cannot add each network to POSTROUTING table as above, since every network is created dynamically by our build pipeline.
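An untested idea for the dynamic-network case (this is an assumption on my part, not something from the thread): instead of one MASQUERADE rule per bridge, masquerade everything from Docker's default address pool on the way out of the external interface. This assumes all dynamically created networks get their subnets from 172.16.0.0/12 and that eth0 is the outward-facing interface:

```
*nat
:POSTROUTING ACCEPT [0:0]
# Masquerade traffic from any Docker-assigned subnet (default pool
# 172.16.0.0/12) when it leaves through the external interface
-A POSTROUTING -s 172.16.0.0/12 -o eth0 -j MASQUERADE
COMMIT
```

That way the rule does not have to name each bridge, so networks created and deleted by a build pipeline would be covered without touching after.rules.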

@teodor-pripoae Have you excluded DNS problems, e.g. tried to ping an IP address instead of a name?
Off the top of my head, I've no idea beyond that.

Yes, same problem.

$ docker run --network=foo -t -i alpine ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

I simply cannot add each network to POSTROUTING table as above, since every network is created dynamically by our build pipeline.

Well, but this is crucial for your containers to connect to the internet. So unless you find a solution here, the problem will remain.

@mikehaertl

I know but there should exist a solution for custom networks. I can't create a network, alter after.rules, restart ufw, run tests, alter after.rules again and then delete network.

@teodor-pripoae
I had the same problem until I kept only one custom network for all my containers... now it works!

sudo docker network ls result :

NETWORK ID          NAME                  DRIVER              SCOPE
735783640c10        bridge                bridge              local
30e3a75a0136        elasticsearch_esnet   bridge              local
550c949a90cb        host                  host                local
86b8bb080f7f        none                  null                local

sudo docker ps result:

CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
4d7b582e5ca6        analytics-lc                                            "analyid --pidfile="     3 hours ago         Up 43 minutes       0.0.0.0:6800->6800/tcp             analytics-lc
87736e84bef4        docker.elastic.co/kibana/kibana:6.1.1                 "/bin/bash /usr/loca…"   3 hours ago         Up 45 minutes       0.0.0.0:5601->5601/tcp             kibana
ef356b16e1fa        docker.elastic.co/elasticsearch/elasticsearch:6.1.1   "/usr/local/bin/dock…"   4 hours ago         Up 45 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch2

sudo iptables -L -n -t nat result :

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.8.0.0/24          0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.18.0.0/16        0.0.0.0/0

sudo ufw status verbose numbered result :

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
2000/udp                   ALLOW IN    Anywhere
9200                       ALLOW IN    10.8.0.0/16
5601                       ALLOW IN    10.8.0.0/16
2375/tcp                   ALLOW IN    10.8.0.0/16
Anywhere on docker0        ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
2000/udp (v6)              ALLOW IN    Anywhere (v6)
Anywhere (v6) on docker0   ALLOW IN    Anywhere (v6)

Anywhere                   ALLOW OUT   Anywhere on tun0
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on tun0

I can't keep only one network, the build pipeline is creating a new one for every build and deletes it afterwards.

I fixed the problem by uninstalling ufw and using iptables.

Is this fully resolved by the use of the DOCKER-USER chain?

Any news on this?
My server is still open to the world..

I'm going to close this. Docker includes support for a DOCKER-USER chain through which all traffic is configured to pass. Rules can be added there and won't be touched by Docker.

Thanks!

@cpuguy83 Can you please elaborate in more detail about the recommended fix?

What I've seen from you is:

iptables -I DOCKER-USER 1 -j ufw-user-input

1) Should we use this command verbatim?
2) Do we do this one time or on each host reboot?
3) Is it safe to run this command repeatedly (eg during each deploy)? Or is it meant to be only run once? If once, how do we check if the required rule is already in place?
4) After we run it, do we use ufw commands as usual (eg ufw allow 80/tcp) and it will work as expected?
5) Is the command above the only change needed, or do we set any config files/other settings?
6) Do we reload iptables somehow for this to take effect?

Thank you. Having clear guidance would be very helpful.

ufw is an iptables manager.
iptables rules need to be applied at every boot, or any time iptables is flushed.
ufw stores its own rules and applies them when ufw is started.

Should we use this command verbatim?

This was a suggested command; I'm not 100% positive that this is the exact command you want to run.
But basically you want to add a rule to DOCKER-USER which hands off the traffic to one of ufw's chains.

Do we do this one time or on each host reboot?

Managing DOCKER-USER is up to the user. iptables rules are not persistent, so you'll need to make sure the rules get applied any time the table is flushed (e.g. on reboot).

Is it safe to run this command repeatedly (eg during each deploy)? Or is it meant to be only run once? If once, how do we check if the required rule is already in place?

Do not run it repeatedly, it will just add unnecessary overhead.

Is the command above the only change needed, or do we set any config files/other settings?

The command would add the rule to iptables, which is what needs to be done. How you get it to iptables and make sure it is applied is up to you, and would likely require some configuration file somewhere.


Thanks for the speedy reply. I'm not sure that this solves it on my end:

# iptables -I DOCKER-USER 1 -j ufw-user-input

# iptables -S | grep ufw-user-input
-N ufw-user-input
-A DOCKER-USER -j ufw-user-input
-A ufw-before-input -j ufw-user-input
-A ufw-user-input -p tcp -m tcp --dport 22 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 26257 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 8080 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 26257 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 26257 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 26257 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 26257 -j ACCEPT
-A ufw-user-input -s ****censored****/32 -p tcp -m tcp --dport 8080 -j ACCEPT

# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere                  
26257/tcp                  ALLOW IN    ****censored****           
8080/tcp                   ALLOW IN    ****censored****              
26257/tcp                  ALLOW IN    ****censored****              
26257/tcp                  ALLOW IN    ****censored****             
26257/tcp                  ALLOW IN    ****censored****             
26257/tcp                  ALLOW IN    ****censored****            
8080/tcp                   ALLOW IN    ****censored****            

Yet I can curl server-in-question:9911 from my local device, and get a response from a docker container that I expected to be blocked by the rules above.

I'm running docker-compose if that is relevant.

For the record, I was able to have a workable solution by:

1) disabling iptables in docker: echo '{"iptables": false}' | sudo tee /etc/docker/daemon.json > /dev/null
2) setting /etc/default/ufw back to default
3) rebooting to flush all iptables rules
4) deleting docker interfaces: docker network rm $(docker network ls | grep "bridge" | awk '/ / { print $1 }')
5) binding docker containers directly to HOST network (network_mode: host in docker compose)
6) setting ufw rules as usual

Incoming traffic is firewalled by my ufw rules, outgoing traffic works. Docker builds continue working when I add --network=host

This allows me to move on for the time being, but I would really like to have docker networking separated from my host network and still use ufw.

To this day there is still no "official" / recommended way of integrating Docker and ufw together. Docker's default config continues to be dangerous by punching holes in the firewall rules and many people (including myself) have been caught unintentionally exposing internal services to the outside world.

It would be great if Docker took this more seriously so that pre-existing firewall rules aren't bypassed in common deployment scenarios like Ubuntu+UFW.

After spending 2 hours reading various GitHub issues, I settled for the following workaround, which also works for custom container networks, based on this gist (HT @rubot):

Append the following at the end of /etc/ufw/after.rules (replace eth0 with your external facing interface):

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
COMMIT
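To find out which name to substitute for eth0, you can parse the default route. A small sketch (assumes iproute2's `ip route` output format; it reads stdin so the parsing itself needs no privileges):

```shell
# Print the interface of the default route, e.g. "eth0".
# Reads `ip route` output on stdin so the parsing can be tested offline.
default_iface() {
    awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}

# Usage:
#   ip route | default_iface
```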

And undo any and all of:

  • Remove "iptables": false from /etc/docker/daemon.json
  • Revert to DEFAULT_FORWARD_POLICY="DROP" in /etc/default/ufw
  • Remove any docker related changes to /etc/ufw/before.rules

Be sure to test that everything comes up fine after a reboot.

I still believe Docker's out of the box behavior is dangerous and many more people will continue to unintentionally expose internal services to the outside world due to Docker punching holes in otherwise safe iptables configs.

(edit: I didn't see the need to set MANAGE_BUILTINS=no and IPV6=no, or to fiddle with /etc/ufw/before.init, not sure why @rubot did that)

@tsuna I also found this slightly different solution on StackOverflow here. I'm not sure yet, which one is better as I had no time to fully analyze both. But I agree, that something like this should be part of the docker manual, considering that ufw is such a widely used firewall.

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.16.0.0/12

-A DOCKER-USER -j RETURN
COMMIT
# END UFW AND DOCKER

I found it too, but preferred not to add those 9 rules pertaining to the RFC 1918 address space because I don't see the value. I felt better just dropping traffic originating from the external interface.

The only notable difference is that the workaround I used ties into the ufw-user-input chain whereas that one ties into ufw-user-forward. In my case the ufw-user-forward chain is empty while ufw-user-input contains the rules from my regular ufw config (e.g. ports 80/443 open for nginx, 22 for SSH, etc.). So I felt it was better to tie into ufw-user-input.

Hi @tsuna, thank you for your opinion.

In the case of filtering by private IP addresses or by network interfaces, it's hard to say which solution is better in my opinion. It depends on our requirements or network environments.

In some cases, it's better to use network interfaces to filter traffic. In our case, we have a complex network environment. We also don't want all public/private networks to access the published container services, only specific public/private IP addresses. So I use IP ranges in my solution. People can easily modify these IP ranges to meet their requirements, including filtering by network interface instead.

But regarding ufw-user-input, I'll keep my opinion, unless we are using an older version of UFW which doesn't support ufw route.

For example, if we were already using the following command to allow port 80 on the host:

ufw allow 80

This means all published container services whose container port is 80 are exposed to the public by default. Maybe that's not what we want.

I personally prefer using ufw-user-forward, I think this can prevent me from inadvertently exposing services that shouldn't be exposed.

ufw allow 80

This means all published container services whose ports are 80 are exposed to the public by default.

Maybe I misunderstand. But to be honest, that's exactly what I would expect. And I think that's the root of what this issue is all about. Why would you

  1. publish the container port to the host and then
  2. open this port in your firewall to the outside

if you don't want to make the service accessible? If you really don't want that, then you'd probably map the container port to some other port on the host that is denied from outside by ufw.

Hi @mikehaertl

Sorry for my bad English, maybe I couldn't explain it clearly.

Setup

Here I have a Linux VM with Docker pre-installed, and the IP address on eth1 is 192.168.56.99.

Add the following lines to the file /etc/ufw/after.rules

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth1 -j ufw-user-input
-A DOCKER-USER -i eth1 -j DROP
COMMIT

Reload UFW by running the command sudo ufw reload

Test firewall rules

Let's check the firewall rules:

sudo iptables-save | fgrep DOCKER-USER

:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth1 -j ufw-user-input
-A DOCKER-USER -i eth1 -j DROP

sudo ufw status

Status: active

Now all is set up. Let's use two web services as a demonstration.

Create an httpd service

Create an httpd service, mapping the host's port 8080 to the container's port 80.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine

Test httpd service on the host

curl http://localhost:8080

We can see the output

<html><body><h1>It works!</h1></body></html>

Test the httpd service from another host. Because we haven't added any UFW rules yet, the public cannot access the httpd service via 192.168.56.99:8080

curl --connect-timeout 3 http://192.168.56.99:8080

We get the error message

curl: (28) Connection timed out after 3003 milliseconds

Allow the public to access the httpd service

Let's use UFW to allow the public to access the httpd service.

sudo ufw allow 80

Please note, we have mapped the host port 8080 to the httpd container's port 80, but in the UFW rule we must use the container's port 80, not the host port number 8080.

From another host, let's re-run the command:

curl --connect-timeout 3 http://192.168.56.99:8080

Yes, we can see the output of httpd.

Create a nginx container, for internal service use only

Let's assume that the nginx service is an internal service and we DO NOT want the public access to the service.

Mapping the host port 9999 to the nginx container's port 80:

docker run --rm -d --name nginx -p 9999:80 nginx:alpine

Let's access the nginx on the host 192.168.56.99

curl http://localhost:9999/

Yes, we can see the output of nginx service.

But the public can also access the nginx service

We did nothing, but the public network can access this nginx service via 192.168.56.99:9999

From another host, run the following command:

curl http://192.168.56.99:9999/

We can access the nginx service. This is NOT what we want! This is an internal service, and it shouldn't be accessed from outside.

Let's check the rules of UFW on the host 192.168.56.99

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
80                         ALLOW IN    Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)

How about running the command to deny port 9999?

sudo ufw deny 9999
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
80                         ALLOW IN    Anywhere
9999                       DENY IN     Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)
9999 (v6)                  DENY IN     Anywhere (v6)

It DOES NOT work. From another host we can still access the nginx service.

How to deny the public to visit the internal service nginx?

Find the IP address of nginx container

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx

172.17.0.3

Add the deny rule

sudo ufw insert 1 deny from any to 172.17.0.3
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
172.17.0.3                 DENY IN     Anywhere
80                         ALLOW IN    Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)

The public network cannot access the nginx service now.

Done

Because the public service httpd and the internal service nginx share the same container port 80, the command ufw allow 80 will expose both services at the same time, unless we add some rules like:

sudo ufw allow from any to 172.17.0.2 port 80

or

sudo ufw deny from any to 172.17.0.3

We must use container ports or container IP addresses in these UFW allow/deny rules, like 80. We cannot use the host ports, like 8080 or 9999.

If there is another web server installed directly on the host, also on port 80, we will need more rules to expose this web server and the public httpd container while hiding the internal nginx container.

I am not sure if this situation is what you need?

But for us, we don’t want this to happen.

Let me re-explain it in a simple way ^_^

If there is a Linux server:

  • Install an HAProxy server on the server, and listen on port 80
  • Run the command ufw allow 80 to allow the public access the HAProxy.

Create an httpd container, mapping the host's port 8080 to the httpd container's port 80.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine

If using the ufw-user-input chain, the httpd container will be exposed by default, because the httpd container's port is the same as the HAProxy server's: 80.

If using the ufw-user-forward chain, the httpd container stays private. We can use the command ufw route allow 80 to expose it later.
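To make the distinction concrete, a tiny sketch (the function name is mine, not from UFW) that just prints the matching command: ufw allow governs traffic addressed to the host itself (ufw-user-input), while ufw route allow governs routed traffic, which is what published container ports become when the DOCKER-USER jump targets ufw-user-forward:

```shell
# Print the UFW command matching the kind of service being exposed.
# "host":      a service bound on the host itself -> plain allow (input).
# "container": a published container port         -> route allow (forward).
expose_port() {
    kind="$1"; port="$2"
    case "$kind" in
        host)      echo "ufw allow $port" ;;
        container) echo "ufw route allow $port" ;;
        *)         return 1 ;;
    esac
}

# Usage (apply as root):
#   expose_port container 80 | sudo sh
```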

@chaifeng Thanks for your detailed explanations. One question about your last example:

So if you now connect from outside to your host on port 8080 you'll reach your httpd container, even though you never issued ufw allow 8080? Is that correct? In that case I see your point.

Correct, ufw allow 8080 or ufw deny 8080 has no effect on accessing the httpd container, if we use ufw-user-input chain.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine

To prevent ufw startup problems, we use before.init. All in all, pointing the DOCKER-USER chain into ufw-user-input was our solution as well, and it works well enough. Besides, we are not using any other nat rules, so we just leave that table alone.

docker_ufw_setup=https://gist.githubusercontent.com/rubot/418ecbcef49425339528233b24654a7d/raw/docker_ufw_setup.sh
bash <(curl -SsL $docker_ufw_setup)
# Reset and open port 22
RESET=1 bash <(curl -SsL $docker_ufw_setup)
DEBUG=1 bash <(curl -SsL $docker_ufw_setup)

https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d

@rubot @tsuna As @chaifeng showed, your solution is not bulletproof. I'll try to sum it up in my own words:

  • You have a host service public to the world with ufw allow 123 (123 is an arbitrary port)
  • You have a container that by default also listens on 123
  • You map port 123 from that container to port 456 on the host
  • Now your host port 456 is also open to the public even though you never added a rule for that in ufw

you map port 123 from that container to port 456 on the host

this should end up in a nat rule set up by docker, as docker is using masquerading.
all docker nat rules end up in the DOCKER-USER chain, which will drop all ports not explicitly allowed.

-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP

can't confirm that:

root@dev ~ # iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N DOCKER-INGRESS
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -o docker_gwbridge -m addrtype --src-type LOCAL -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o docker_gwbridge -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i docker_gwbridge -j RETURN
-A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80
-A DOCKER-INGRESS -j RETURN

root@dev ~ # iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
-A DOCKER-USER -j RETURN
root@dev ~ # ufw status
Status: active

To                         Action      From
--                         ------      ----
80,443/tcp                 ALLOW       Anywhere
root@dev ~ # docker run --rm -p 8000:80 jwilder/whoami
Listening on :8000
→ curl dev:8000
curl: (7) Failed to connect to dev port 8000: Connection refused

ah, had an error

root@dev ~ # docker run --rm -p 8000:80 nginx
curl dev:8000
<!DOCTYPE html>
...

thanks, will check that

Hi @rubot

At the beginning of this thread, @Soulou has a comment https://github.com/moby/moby/issues/4737#issuecomment-38044320

Ufw is only setting things in the filter table. Basically, the docker traffic is diverted before and goes through the nat table, so ufw in this case is basically useless, if you want to drop the traffic for a container you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif
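To make the diversion concrete, here is a hedged sketch of the kind of rules Docker creates for a published port (the container IP 172.17.0.2 and the ports are illustrative; exact rules vary by Docker version):

```shell
# Illustrative rules for `docker run -p 8080:80 nginx`.

# nat table, PREROUTING stage: the packet's destination is rewritten to the
# container *before* routing, so it never traverses the filter INPUT chain
# where ufw's rules live.
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8080 \
  -j DNAT --to-destination 172.17.0.2:80

# filter table, FORWARD chain: the rewritten packet is forwarded to the
# container and accepted by Docker's own rule, bypassing ufw-user-input.
iptables -A FORWARD -d 172.17.0.2/32 ! -i docker0 -o docker0 \
  -p tcp --dport 80 -j ACCEPT
```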

Hi @rubot

I found your typo: the port of jwilder/whoami is 8000, not 80.

docker run --rm -p 8000:80 jwilder/whoami

should be

docker run --rm -p 9999:8000 jwilder/whoami

curl dev:9999

Thanks!

As we don't have a rule in ufw-user-input allowing port 8000, the traffic is blocked as expected.
The problems only occur, as you stated, with ports allowed in ufw-user-input that correspond to the container-side exposed port when publishing to an arbitrary host-side port. The problem didn't show up for me because I only have a reduced set of 1:1 mappings, using docker swarm.
Thanks again for that hint.

One more thing I noticed: you modified the file /etc/ufw/before.init to create a chain. You don't need to do this.

UFW uses iptables-restore to restore rules from the file /etc/ufw/after.rules, so any new chain defined in this file will be created.

For example, add the following lines to the end of after.rules. The new chain ufw-docker will be created after restarting UFW.

*filter
:ufw-docker - [0:0]
:DOCKER-USER - [0:0]

-A DOCKER-USER -j ufw-docker

COMMIT

This crashed a lot of times for me, because ufw is set to MANAGE_BUILTINS=no by default. I decided to be more aggressive and clean up the rules manually.

At the beginning of this thread, @Soulou has a comment #4737 (comment)

Quickfix: https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d#file-docker_ufw_setup-sh-L152

This fix works as expected so far.
Unfortunately it has the downside that the origin IP, which was provided by host mode using nginx stream and proxy_protocol, is lost.

I tried to work around this with a static IP for the stream instance, but static IPs for ingress are not supported yet:

Because I have to finish this and can't switch to something different, or fiddle with custom docker iptables rules, a cron job will help retrieve the origin IP again:

https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d#file-docker_ufw_setup-sh-L55

Edit:
Talking about the nat table:
I changed the cron job to only allow 1:1 port mappings in the DOCKER chain.
The DOCKER-INGRESS chain seems to be safe, as it only contains 1:1 port mappings.
This affects containers and services running under mode=host, which both get DNAT rules created in the DOCKER chain.

@rubot @tsuna As @chaifeng showed, your solution is not bulletproof. I'll try to sum it up in my own words:

* You have a host service public to the world with `ufw allow 123` (123 is an arbitrary port)

* You have a container that by default also listens on `123`

* You map port `123` from that container to port `456` on the host

* Now your host port `456` is also open to the public, even though you never added a rule for it
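The steps above can be reproduced as follows (a sketch; `some-image` is a hypothetical image name standing in for any image whose service listens on port 123):

```shell
# Host service allowed through ufw on port 123 (arbitrary example port):
ufw allow 123

# Container listening on 123 internally, published on host port 456:
docker run -d --rm -p 456:123 some-image

# By the time a packet reaches DOCKER-USER (and the ufw-user-input redirect
# discussed earlier in this thread), it has already been DNATed: its
# destination port is now the container port 123, which ufw allows -- so
# host port 456 is reachable from outside despite having no ufw rule.
```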

@mikehaertl, I could not replicate it, your scenario works fine for me, i.e. host port 456 is not publicly opened.

@xhafan I did not try it either and just tried to summarize what @chaifeng found out. From a cursory look at the chains it sounded reasonable to me. Maybe @chaifeng can comment?

@xhafan

Make sure that ufw doesn't allow port 28080 first, but allows port 80.

Run the following command to start an httpd container and publish container port 80 on host port 28080:

docker run -d --rm --name httpd -p 28080:80 httpd:alpine

We can access port 28080 via the IP address of the host from outside.

Even ufw deny 28080 cannot block accessing this httpd container from outside.

@chaifeng, I tried what you suggested, and I can confirm that it opened the port 28080 from the outside. It does that also for nginx container. But, for some reason, jekyll container, which publishes a port too, is not opened from the outside. Here is my docker-compose.yml:

version: '3.5'
services:

  jekyll:
    image: jekyll/jekyll:3.8.3
    container_name: jekyll
    command: jekyll serve --force_polling
    ports:
      - 28081:4000

  httpd:
    image: httpd:alpine
    container_name: httpd
    ports:
      - 28080:80

28080 is open from the outside, but 28081 is not. That's why it gave me the impression that it's a working solution. Any idea why jekyll's published port is not opened from the outside?

Make sure that ufw doesn't allow port 28080 first, but allows port 80.

@xhafan Did you see this? You probably have 80 open; that's why 28080 is also open for your httpd container. In your jekyll case port 4000 must be open on the host. Then 28081 would get opened implicitly, too.
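A hedged way to verify this on the compose setup above: after DNAT, the forwarded packet is addressed to the container port (80 for httpd, 4000 for jekyll), and that is the port the filter-table rules match on:

```shell
# Inspect the DNAT rules Docker created for the published ports; expect
# entries like "DNAT tcp dpt:28080 to:<containerIP>:80" and
# "DNAT tcp dpt:28081 to:<containerIP>:4000".
iptables -t nat -L DOCKER -n

# Inspect the forward path; with the ufw-user-input redirect installed,
# matching happens on the post-DNAT (container) port, so `ufw allow 80`
# implicitly exposes 28080, and `ufw allow 4000` would expose 28081.
iptables -L DOCKER-USER -n -v
```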

I think the port 4000 is not allowed in UFW on your host. That the port 28080 is open is because the container port of httpd is 80, and port 80 is allowed on the host.

Allow port 4000, and you will find the port 28081 is open.

sudo ufw allow 4000

@mikehaertl, @chaifeng thanks for the explanation. That is quite weird behaviour; however, the whole solution works for me. One just needs to be aware of it.

@tsuna I like your solution. When I apply it for the first time and disable/enable ufw to apply the changes, everything works as expected. But if I then reload ufw, or run ufw disable/enable again, I get the following error and ufw becomes inactive:

$ sudo ufw reload
ERROR: Could not load logging rules
$ sudo ufw status verbose
Status: inactive

The problem goes away if I comment out the rule -A DOCKER-USER -i eth0 -j ufw-user-input. But of course this rule is required to make user-defined rules work.
If I set MANAGE_BUILTINS=yes in /etc/default/ufw, it is also possible to restart/reload ufw. But after ufw has restarted I must also restart the docker service to fix docker's iptables rules.
Disabling logging in /etc/ufw/ufw.conf with LOGLEVEL=off has no effect.

Edit: I now think I understand what's happening. The default setting for MANAGE_BUILTINS is no, which means that ufw will not touch any chains except its own. But by adjusting after.rules as @tsuna suggests, we are changing other chains, so ufw can't clean up the rules correctly.
I have decided to set MANAGE_BUILTINS to yes as a solution.
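For reference, the setting lives in /etc/default/ufw (a config fragment; the comment reflects my understanding of the trade-off described above):

```shell
# /etc/default/ufw
# With "yes", ufw also flushes the built-in chains on reload, so its own
# cleanup works -- but this removes Docker's rules too, which is why the
# docker service must be restarted after each ufw restart/reload.
MANAGE_BUILTINS=yes
```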

I don't know, maybe it's no longer relevant, but when I create /etc/docker/daemon.json with this content:

{"iptables": false}

and restart docker with sudo systemctl restart docker, it starts to work without any additional effort: ports are no longer available to the world.

So the question is: am I missing something, or is this just fine?
