Bug Report Info
docker version:
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (Client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
docker info:
Containers: 41
Images: 172
Storage Driver: devicemapper
Pool Name: docker-253:2-4026535945-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 7.748 GB
Data Space Total: 107.4 GB
Data Space Available: 99.63 GB
Metadata Space Used: 12.55 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.135 GB
Udev Sync Supported: true
Deferred Removal Enabled: true
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 24
Total Memory: 125.6 GiB
Name:
ID:
uname -a:
Linux
Environment details (AWS, VirtualBox, physical, etc.):
Physical
iptables version 1.4.21
How reproducible:
Random
Steps to Reproduce:
Actual Results:
Cannot start container <container id>: iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.23 --dport 4000 -j ACCEPT: iptables: No chain/target/match by that name.
Expected Results:
Container starts without a problem.
Additional info:
I'll also mention these containers are being launched via Apache Mesos (0.23.0) using Marathon. Appears similar to #13914.
Hi!
Please read this important information about creating issues.
If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.
This is an automated, informational response.
Thank you.
For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
Use the commands below to provide key information from your environment:
docker version:
docker info:
uname -a:
Provide additional environment details (AWS, VirtualBox, physical, etc.):
List the steps to reproduce the issue:
1.
2.
3.
Describe the results you received:
Describe the results you expected:
Provide additional info you think is important:
----------END REPORT ---------
Please note the other open issues with this error: https://github.com/docker/docker/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+No+chain%2Ftarget%2Fmatch+by+that+name
@cpuguy83 looks like some of those have the same error but not quite the same, #13914 seems to be similar.
@mindscratch Have you tried turning off firewalld?
@cpuguy83 we're not using firewalld just iptables
@mindscratch in that issue, upgrading to 1.8.3 seems to resolve the problem; are you still able to reproduce this on 1.8.3 (or 1.9.0)?
I'll have to look at our logs. We put in a cron job that attempts to find the issue and resolve it before it becomes a problem, so I haven't noticed. The cron job logs when it has to fix iptables, so I'll check. I am now running 1.9.0.
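For reference, a minimal sketch of what such a watchdog could look like (hypothetical; the actual cron job isn't shown in this thread):

# /etc/cron.d/docker-iptables-watchdog (hypothetical path)
# restart docker if the nat-table DOCKER chain has vanished
*/5 * * * * root iptables -t nat -nL DOCKER >/dev/null 2>&1 || (logger "DOCKER chain missing, restarting docker"; service docker restart)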
I have that same problem.
➜ docker cat docker-compose.yml
poste:
  image: analogic/poste.io
  volumes:
    - "/srv/mail/data:/data"
  ports:
    - 25:25
    - 80:8081
    - 443:8443
    - 110:110
    - 143:143
    - 465:465
    - 587:587
    - 993:993
    - 995:995
➜ docker docker-compose up poste
Recreating docker_poste_1
WARNING: Service "poste" is using volume "/data" from the previous container. Host mapping "/srv/mail/data" has no effect. Remove the existing containers (with `docker-compose rm poste`) to use the host volume mapping.
ERROR: Cannot start container 187de1f595dc544c503a4bf565d2101c0b0b3805d601ae704d0014750166776e: failed to create endpoint docker_poste_1 on network bridge: iptables failed: iptables -t nat -A DOCKER -p tcp -d 0/0 --dport 995 -j DNAT --to-destination 172.17.0.2:995 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1)
➜ docker docker -v
Docker version 1.9.1, build a34a1d5
➜ docker docker info
Containers: 2
Images: 73
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 77
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 8
Total Memory: 23.59 GiB
Name: Debian-60-squeeze-64-minimal
ID: 7PDG:3ZCD:RL4G:KJAE:PZCO:XTUH:JLRX:IIM4:DHXM:TWHY:UMCK:4GUS
WARNING: No memory limit support
WARNING: No swap limit support
➜ docker docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:06:12 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:06:12 UTC 2015
OS/Arch: linux/amd64
➜ docker uname -a
Linux Debian-60-squeeze-64-minimal 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6~bpo70+1 (2015-11-11) x86_64 GNU/Linux
What information can I send to you?
I also have the same error
This issue occurs when I restart a container after stopping firewalld.
docker version: Docker version 1.9.1, build a34a1d5
docker info:
uname -a: Linux databus0 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Provide additional environment details (AWS, VirtualBox, physical, etc.):
List the steps to reproduce the issue:
Describe the results you received:
Error response from daemon: Cannot restart container sth: failed to create endpoint sth on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 4444 -j DNAT --to-destination 172.17.0.5:4444 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1)
Error: failed to restart containers: [sth]
Describe the results you expected:
restart ok
Provide additional info you think is important:
----------END REPORT ---------
Overview
The following error occurs when trying to run "docker-compose up -d" - but only if 20+ ports are exposed to the host.
_ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0: (fork/exec /sbin/iptables: cannot allocate memory)_
Bug Report Info
ubuntu@ip-172-31-36-213:~/relay_docker$ docker-compose up -d
Removing relaydocker_relay_1
Recreating 22ac1bb421_22ac1bb421_22ac1bb421_relaydocker_relay_1
ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0: (fork/exec /sbin/iptables: cannot allocate memory)
ubuntu@ip-172-31-36-213:~/relay_docker$ docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
ubuntu@ip-172-31-36-213:~/relay_docker$ docker info
Containers: 69
Images: 563
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 701
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 1
Total Memory: 992.5 MiB
Name: relay-v1
ID: ZXD2:QKYD:UCX3:2KNK:5J7V:OWHH:CUCS:3V2N:LJWT:YV3N:4BLS:ZBYC
Username: vincentsiu
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
provider=amazonec2
ubuntu@ip-172-31-36-213:~/relay_docker$ uname -a
Linux ip-172-31-36-213 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ip-172-31-36-213:~/relay_docker$
Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
#RUN apt-get -y install sudo
RUN mkdir -p /var/run/sshd
# configure sshd_config
RUN sed -i "s/PermitRootLogin.*/PermitRootLogin without-password/g" /etc/ssh/sshd_config
RUN sed -i "s/Port .*/Port 2200/g" /etc/ssh/sshd_config
RUN sed -i "s/LoginGraceTime.*/LoginGraceTime 30/g" /etc/ssh/sshd_config
RUN echo "GatewayPorts yes" >> /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# ssh port exposed for the container
EXPOSE 2200
# listening to these ports for port forwarding
EXPOSE 8079-8080
EXPOSE 9875-9876
EXPOSE 30000-31000
CMD ["/usr/sbin/sshd", "-D"]
docker-compose.yml
relay:
  restart: always
  build: ./relay
  ports:
    - "2200:22"
    - "8001-9876:8001-9876"
    - "30000-31000:30000-31000"
  command: /usr/sbin/sshd -D
If I try to expose ports 30000-31000 in docker-compose.yml, then running 'docker-compose up -d' gives me the "iptables failed" error.
_ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0: (fork/exec /sbin/iptables: cannot allocate memory)_
If I reduce the number of exposed ports to less than 20, then the container will start without issue.
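That points at resource exhaustion rather than a missing chain: by default every published host port gets its own DNAT rule and its own docker-proxy process, so a 1000-port range on a host with under 1 GB of RAM can plausibly run out of memory before fork/exec of iptables succeeds. A quick way to see the per-port cost (a diagnostic sketch, assuming the default bridge network):

iptables -t nat -S DOCKER | wc -l   # roughly one DNAT rule per published port
pgrep -c docker-proxy               # one userland proxy process per published port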
I have read that I can try restarting the docker daemon with --iptables=false. How can I do that with docker-compose?
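For the record, --iptables=false is a daemon flag, not something docker-compose can set. On Ubuntu 14.04 it would go in the daemon's options file (a sketch, assuming the stock docker packaging of that era); note that with --iptables=false docker stops programming NAT rules entirely, so published ports would then have to be wired up by hand:

# /etc/default/docker
DOCKER_OPTS="--iptables=false"
# then: sudo service docker restart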
@vincentsiu your issue sounds more related to https://github.com/docker/docker/issues/11185
I have a similar problem using docker 1.9.1 and CentOS 7 (1511) on an ESXi VM.
docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:25:01 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:25:01 UTC 2015
OS/Arch: linux/amd64
docker info
Containers: 0
Images: 11
Server Version: 1.9.1
Storage Driver: btrfs
Build Version: Btrfs v3.16.2
Library Version: 101
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.4.4.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 1.797 GiB
Name: swhost-1.rz.tu-bs.de
ID: YNJD:42IN:VKFR:OBQV:4OF3:EIZV:D7ML:MXTO:FJLL:IGP5:JVQG:5POK
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
If I start the registry v2 container with:
docker run --rm -ti -v /mnt/registry/content:/var/lib/registry -p 5000:5000 -v /mnt/registry/config/config.yml:/etc/docker/registry/conf.yml -v /etc/pki/tls/docker/:/mnt --name registry registry:2
the port is closed and I am not able to connect:
Host is up (0.00025s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
5000/tcp closed upnp
unable to ping registry endpoint https://swhost-1:5000/v0/
v2 ping attempt failed with error: Get https://swhost-1:5000/v2/: dial tcp 134.169.8.97:5000: connection refused
v1 ping attempt failed with error: Get https://swhost-1:5000/v1/_ping: dial tcp 134.169.8.97:5000: connection refused
according to firewall-cmd, the port is open
firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno16780032
sources:
services: dhcpv6-client ssh
ports: 5000/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
iptables -L -v -n
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10 536 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
10 400 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_IN_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_IN_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_OUT_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_OUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
...
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:5000
If I stop firewalld:
systemctl stop firewalld
docker run --rm -ti -v /mnt/registry/content:/var/lib/registry -p 5000:5000 -v /mnt/registry/config/config.yml:/etc/docker/registry/config.yml -v /etc/pki/tls/docker/:/mnt --name registry registry:2
Error response from daemon: Cannot start container b6795863c0469c55e89244e12b764ce686948bfdea57542243beabbf81da4441: failed to create endpoint registry on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 5000 -j DNAT --to-destination 172.17.0.2:5000 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1)
We also notice this behaviour.
docker version:
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:37:20 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:37:20 UTC 2015
OS/Arch: linux/amd64
docker info:
Containers: 12
Images: 592
Server Version: 1.9.0
Storage Driver: devicemapper
Pool Name: docker-253:0-2354982-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 15.72 GB
Data Space Total: 107.4 GB
Data Space Available: 38.99 GB
Metadata Space Used: 35.24 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.112 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.74 (2012-03-06)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.2.69-xnt-nogr-1.5.3
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 16
Total Memory: 15.88 GiB
Name: dev03
ID: F4IG:2KNZ:TABI:SHGC:RWIN:3AYQ:5EX2:XI7N:DOHP:2VXQ:ASDK:RFF6
WARNING: No memory limit support
WARNING: No swap limit support
uname -a:
Linux dev03 3.2.69-xnt-nogr-1.5.3 #1 SMP Thu May 14 21:03:15 CEST 2015 x86_64 GNU/Linux
Provide additional environment details (AWS, VirtualBox, physical, etc.):
This environment is a XenServer virtual host.
iptables v1.4.14
List the steps to reproduce the issue:
Describe the results you received:
root@dev03:~# docker restart foo
Error response from daemon: Cannot restart container foo: failed to create endpoint foo on network bridge: iptables failed: iptables -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.9 --dport 1234 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)
Error: failed to restart containers: [foo]
Describe the results you expected:
A successful docker restart.
This is happening to me too.
Error response from daemon: Cannot restart container HAProxy: failed to create endpoint HAProxy on network bridge: iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.5 --dport 8888 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)
And if I run:
iptables -N DOCKER
iptables: Chain already exists.
FYI: just to keep in mind, I'm running docker-compose as the root user, and I didn't see anyone in this thread running commands with sudo or su.
Although restarting the docker service restores the health of the system, at least for a while, it is a horrible workaround...
Any other alternatives or ETA for when this will be fixed?
Best,
I ran into a similar problem and it was solved by running this command:
# iptables -t filter -N DOCKER
Hope it helps!
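Worth noting: docker maintains a DOCKER chain in both the filter and the nat tables, so before recreating one it helps to check which of the two is actually missing (a quick diagnostic sketch, nothing official):

iptables -t filter -nL DOCKER >/dev/null 2>&1 && echo "filter/DOCKER present" || echo "filter/DOCKER missing"
iptables -t nat -nL DOCKER >/dev/null 2>&1 && echo "nat/DOCKER present" || echo "nat/DOCKER missing"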
It happened to us as well, but in our case iptables -t filter -L -v -n
showed that the DOCKER chain exists; only when checking the nat table using iptables -t nat -L -v -n
did we find that the DOCKER chain had somehow disappeared...
Chain PREROUTING (policy ACCEPT 6402K packets, 388M bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 981K packets, 62M bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 1001K packets, 63M bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 514K packets, 31M bytes)
pkts bytes target prot opt in out source destination
83M 5047M FLANNEL all -- * * 192.168.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 192.168.18.135 192.168.18.135 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.167 192.168.18.167 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.167 192.168.18.167 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.172 192.168.18.172 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.186 192.168.18.186 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.186 192.168.18.186 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.194 192.168.18.194 tcp dpt:53
0 0 MASQUERADE udp -- * * 192.168.18.194 192.168.18.194 udp dpt:53
0 0 MASQUERADE tcp -- * * 192.168.18.197 192.168.18.197 tcp dpt:3000
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:1936
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:443
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:88
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:80
0 0 MASQUERADE tcp -- * * 192.168.18.2 192.168.18.2 tcp dpt:53
0 0 MASQUERADE udp -- * * 192.168.18.2 192.168.18.2 udp dpt:53
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:1936
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:443
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:88
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:80
0 0 MASQUERADE tcp -- * * 192.168.18.5 192.168.18.5 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.6 192.168.18.6 tcp dpt:3000
0 0 MASQUERADE tcp -- * * 192.168.18.8 192.168.18.8 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.8 192.168.18.8 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.9 192.168.18.9 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.9 192.168.18.9 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.10 192.168.18.10 tcp dpt:8080
Chain FLANNEL (1 references)
pkts bytes target prot opt in out source destination
5481K 332M ACCEPT all -- * * 0.0.0.0/0 192.168.0.0/16
426K 27M MASQUERADE all -- * * 0.0.0.0/0 !224.0.0.0/4
After restarting docker daemon everything worked fine and we could see DOCKER chain came back to nat table:
Chain PREROUTING (policy ACCEPT 5765 packets, 347K bytes)
pkts bytes target prot opt in out source destination
1592 96542 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 1236 packets, 75057 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3135 packets, 203K bytes)
pkts bytes target prot opt in out source destination
1 77 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 2423 packets, 159K bytes)
pkts bytes target prot opt in out source destination
83M 5047M FLANNEL all -- * * 192.168.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 192.168.18.135 192.168.18.135 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.167 192.168.18.167 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.167 192.168.18.167 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.172 192.168.18.172 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.186 192.168.18.186 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.186 192.168.18.186 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.194 192.168.18.194 tcp dpt:53
0 0 MASQUERADE udp -- * * 192.168.18.194 192.168.18.194 udp dpt:53
0 0 MASQUERADE tcp -- * * 192.168.18.197 192.168.18.197 tcp dpt:3000
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:1936
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:443
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:88
0 0 MASQUERADE tcp -- * * 192.168.18.198 192.168.18.198 tcp dpt:80
0 0 MASQUERADE tcp -- * * 192.168.18.2 192.168.18.2 tcp dpt:53
0 0 MASQUERADE udp -- * * 192.168.18.2 192.168.18.2 udp dpt:53
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:1936
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:443
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:88
0 0 MASQUERADE tcp -- * * 192.168.18.4 192.168.18.4 tcp dpt:80
0 0 MASQUERADE tcp -- * * 192.168.18.5 192.168.18.5 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.6 192.168.18.6 tcp dpt:3000
0 0 MASQUERADE tcp -- * * 192.168.18.8 192.168.18.8 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.8 192.168.18.8 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.9 192.168.18.9 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.9 192.168.18.9 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.10 192.168.18.10 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.2 192.168.18.2 tcp dpt:53
0 0 MASQUERADE udp -- * * 192.168.18.2 192.168.18.2 udp dpt:53
0 0 MASQUERADE tcp -- * * 192.168.18.5 192.168.18.5 tcp dpt:3000
0 0 MASQUERADE tcp -- * * 192.168.18.6 192.168.18.6 tcp dpt:5601
0 0 MASQUERADE tcp -- * * 192.168.18.7 192.168.18.7 tcp dpt:8201
0 0 MASQUERADE tcp -- * * 192.168.18.7 192.168.18.7 tcp dpt:8200
0 0 MASQUERADE tcp -- * * 192.168.18.8 192.168.18.8 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.9 192.168.18.9 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.10 192.168.18.10 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.10 192.168.18.10 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.11 192.168.18.11 tcp dpt:8081
0 0 MASQUERADE tcp -- * * 192.168.18.11 192.168.18.11 tcp dpt:8080
0 0 MASQUERADE tcp -- * * 192.168.18.12 192.168.18.12 tcp dpt:1936
0 0 MASQUERADE tcp -- * * 192.168.18.12 192.168.18.12 tcp dpt:443
0 0 MASQUERADE tcp -- * * 192.168.18.12 192.168.18.12 tcp dpt:88
0 0 MASQUERADE tcp -- * * 192.168.18.12 192.168.18.12 tcp dpt:80
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 to:192.168.18.2:53
0 0 DNAT udp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 to:192.168.18.2:53
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3210 to:192.168.18.5:3000
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5601 to:192.168.18.6:5601
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8201 to:192.168.18.7:8201
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8200 to:192.168.18.7:8200
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8050 to:192.168.18.8:8080
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9002 to:192.168.18.9:8080
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8041 to:192.168.18.10:8081
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8040 to:192.168.18.10:8080
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081 to:192.168.18.11:8081
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:192.168.18.11:8080
27 1620 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1936 to:192.168.18.12:1936
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 to:192.168.18.12:443
139 8340 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:88 to:192.168.18.12:88
24 1440 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:192.168.18.12:80
Chain FLANNEL (1 references)
pkts bytes target prot opt in out source destination
5489K 332M ACCEPT all -- * * 0.0.0.0/0 192.168.0.0/16
427K 27M MASQUERADE all -- * * 0.0.0.0/0 !224.0.0.0/4
If someone has a clue as to why the chain disappears, I'll be more than happy to hear about it.
Exactly the same issue here as @shayts7 is describing. Workaround for now is to restart the daemon:
service docker restart
@fredrikaverpil Great! It worked!
Hello everyone,
I'm using CoreOS and have this problem too, but only on my master.
Running iptables -t nat -N DOCKER
solves the problem: pods are automatically created and everything is fine. I'm trying to find out why this chain is removed on my master and not on my workers.
Was having this issue. For us it turned out docker was starting before our firewall persistence (iptables-persistent) and its rules were getting overwritten. I resolved it by removing the package, as we were using it for only one rule.
There are ways to keep them working side by side, either by ensuring docker starts after iptables-persistent (https://groups.google.com/forum/#!topic/docker-dev/4SfOwCOmw-E) or by adding whatever rules the docker service adds into the persistent iptables configuration (didn't test this). See the sketch below.
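On systemd hosts, one way to enforce that ordering is a drop-in that makes docker start after the persistence service (a sketch, assuming the unit is called netfilter-persistent.service as in the Debian/Ubuntu packaging):

# /etc/systemd/system/docker.service.d/wait-for-iptables.conf (hypothetical path)
[Unit]
After=netfilter-persistent.service
Wants=netfilter-persistent.service
# then: systemctl daemon-reload && systemctl restart docker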
May be of help @Seraf, @shayts7
This is not a docker bug, but maybe it should be addressed in the docs or something.
@vlad-vintila-hs Thanks for the tip
Same issue here on Ubuntu 14.04 with docker 1.11.1 and docker-compose 1.7.1; no workaround solved the problem.
Solved with a machine reboot, a poor solution by the way.
This seems to only happen on CentOS 7 for me.
This is what I did
stop firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Restart your machine
sudo reboot
As long as you've put --restart=always on your docker instance, when your machine reboots the docker instance should be running and the port should be bound. I believe this issue is specific to the CentOS 7 family, which uses firewalld instead of iptables.
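A minimal example of that restart policy (image and port hypothetical):

docker run -d --restart=always -p 6379:6379 redis
# after a reboot, the daemon restarts the container and re-creates its port rules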
@vlad-vintila-hs I encountered the same. Thanks for the tip.
Followed @fredrikaverpil's advice. Thank you.
I tried:
ip link delete docker0
systemctl restart docker
@kanlidy Solved it with your method, many thanks.
This worked for me!! On all CentOS 7.2 systems:
ip link delete docker0
systemctl restart docker
good
This only fixes running containers using:
docker run -d -p ...
but it still doesn't work on docker swarm 1.12 and 1.12.1 using
docker service create --publish ...
It won't open the port and complains about "failed: iptables: No chain/target/match by that name" on a series of iptables rules in the firewalld logs.
Got the same problem as @virtuman, working with docker 1.12.1 on Photon OS with no firewalld active.
[ ~ ]# docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 17:52:38 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 17:52:38 2016
OS/Arch: linux/amd64
[ ~ ]# iptables -t nat -L -v -n
Chain PREROUTING (policy ACCEPT 445 packets, 26876 bytes)
pkts bytes target prot opt in out source destination
88136 5323K DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
9768 590K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 445 packets, 26876 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 336 packets, 20552 bytes)
pkts bytes target prot opt in out source destination
55 3740 DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
5 322 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 336 packets, 20552 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
3 246 MASQUERADE all -- * !docker_gwbridge 172.19.0.0/16 0.0.0.0/0
25 1380 MASQUERADE all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match src-type LOCAL
23 1619 MASQUERADE all -- * !docker_gwbridge 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- docker_gwbridge * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (2 references)
pkts bytes target prot opt in out source destination
9759 589K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
On overlay network:
[ ~ ]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "5glt9tb0yaoqz8l89mp3jdrkc",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "259"
},
"Labels": null
}
]
[ ~ ]# docker service inspect mytest
[
{
"ID": "0nh2vprk2w0mmkj4cve7v1qsg",
"Version": {
"Index": 518
},
"CreatedAt": "2016-09-16T10:24:43.236431157Z",
"UpdatedAt": "2016-09-16T10:24:43.238035043Z",
"Spec": {
"Name": "mytest",
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause"
},
"Networks": [
{
"Target": "5glt9tb0yaoqz8l89mp3jdrkc"
}
],
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080
}
],
"VirtualIPs": [
{
"NetworkID": "ba71d2djs4vvsdvsrcfltbktr",
"Addr": "10.255.0.4/16"
},
{
"NetworkID": "5glt9tb0yaoqz8l89mp3jdrkc",
"Addr": "10.0.1.2/24"
}
]
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
docker.service debug output:
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.387872821+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -L DOCKER-INGRESS]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.390079278+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C DOCKER-INGRESS -j RETURN]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.391680179+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER-INGRESS]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.394019719+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-INGRESS]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.396305291+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -m addrtype --src-type LOCAL -o docker_gwbridge -j MASQUERADE]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.398715424+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -L DOCKER-INGRESS]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.401384962+02:00" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I DOCKER-INGRESS -p tcp --dport 8080 -j DNAT --to-destination 172.19.0.2:8080]"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.404212138+02:00" level=debug msg="Creating service for vip 10.255.0.4 fwMark 259 ingressPorts []*libnetwork.PortConfig{&libnetwork.PortConfig{Name: \"\",\nProtocol: 0,\nTargetPort: 0x50,\nPublishedPort: 0x1f90,\n}}"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19+02:00" level=info msg="Firewalld running: false"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19+02:00" level=error msg="setting up rule failed, [-t nat -A POSTROUTING -m ipvs --ipvs -d 10.255.0.0/16 -j SNAT --to-source 10.255.0.2]: (iptables failed: iptables --wait -t nat -A POSTROUTING -m ipvs --ipvs -d 10.255.0.0/16 -j SNAT --to-source 10.255.0.2: iptables: No chain/target/match by that name.\n (exit status 1))"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.435268378+02:00" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/be3e41082632: reexec failed: exit status 5"
Sep 16 12:26:19 primapp03 docker[25944]: time="2016-09-16T12:26:19.435677746+02:00" level=error msg="Failed to create real server 10.255.0.5 for vip 10.255.0.4 fwmark 259 in sb ingress-sbox: no such process"
I'm having the same issue as @virtuman and @ummecasino. When I create a service with docker service create --publish ..., firewalld shows:
ERROR: COMMAND_FAILED: '/sbin/iptables -w2 -t nat -C POSTROUTING -m ipvs --ipvs -d 10.255.0.0/16 -j SNAT --to-source 10.255.0.8' failed: iptables: No chain/target/match by that name.
And, I can't connect to the service outside of localhost. How do I get this to work, besides shutting off firewalld?
See https://github.com/docker/docker/issues/16816#issuecomment-197397543
For me, I do the same thing except instead of nat, I specify filter.
Restarting docker worked for me. The issue occurred when I turned off firewalld; restarting docker updates the iptables rules.
I had a comparable issue with iptables. I'm running a container pair with Nginx and Django. I noticed my website was extremely slow to respond (20 seconds), so I wanted to restart the containers with docker-compose:
[root@cvast cvast]# docker-compose down
[...]
[root@cvast cvast]# docker-compose up -d
Creating network "cvast_default" with the default driver
ERROR: Failed to Setup IP tables: Unable to enable SKIP DNAT rule: (iptables failed: iptables --wait -t nat -I DOCKER -i br-bff977f9efd3 -j RETURN: iptables: No chain/target/match by that name.
(exit status 1))
I only tried this and it fixed it right away:
systemctl restart docker.service
Some debug info:
[vincent@cvast ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[vincent@cvast ~]$ uname -a
Linux cvast 3.10.0-327.36.1.el7.x86_64 #1 SMP Wed Aug 17 03:02:37 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
[vincent@cvast ~]$ docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
This is unfamiliar terrain for me, so I hope someone can explain what happened and/or whether it was something I did. My firewall is managed by the university I work for; could it be something they changed? Was the slowness of my website related? It is as fast as normal now after I restarted docker.service.
Thank you very much!
@veuncent docker creates the docker chain in the iptables rules on startup; if some other system (such as firewalld) removes those rules after docker is started, this error can occur. Make sure the docker daemon is started _after_ firewalld.
Also ping @aboch, as I think there were some changes recently for how this error is handled.
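On systemd distributions the packaged docker unit usually orders itself after firewalld already; a quick way to verify that on a given host (just a check, assuming the stock unit files):

systemctl show -p After docker.service | tr ' ' '\n' | grep firewalld
systemctl is-active firewalld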
Tested out this fix today: https://github.com/docker/libnetwork/pull/1658
We were hitting "unable to remove jump to DOCKER-ISOLATION rule in FORWARD chain: (iptables failed: iptables --wait -D FORWARD -j DOCKER-ISOLATION: iptables: No chain/target/match by that name." intermittently on CentOS 7.3 when running "docker-compose up" multiple times. #1658 fixed the issue for us.
Fixed?
Hi
I used the docker-compose command to start Elasticsearch, Logstash and Kibana; they ran normally for several hours, then the ELK stack could not work properly. So I tried to restart the Elasticsearch, Logstash or Kibana containers but met a similar problem.
Steps to reproduce the issue:
Describe the results you received:
Error response from daemon: Cannot restart container
Describe the results you expected:
Docker can run normally without this problem and no need restart.
Additional information you deem important (e.g. issue happens only occasionally):
The problem happened after several hours normal running.
Output of docker version:
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:07:28 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:07:28 2017
OS/Arch: linux/amd64
Experimental: false
Output of docker info:
Containers: 8
Running: 0
Paused: 0
Stopped: 8
Images: 5
Server Version: 17.03.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 77
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.866 GiB
Additional environment details (AWS, VirtualBox, physical, etc.):
uname -a
Linux scav-dev.fordme.com 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
Problem also happens on openSUSE Tumbleweed.
But,
ip link delete docker0
systemctl restart docker
Solves it too.
https://github.com/docker/libnetwork/pull/1658 was merged into docker 17.03.2 and is included in docker 17.04.0 and up, and should resolve issues in this area. Unfortunately, this issue has become a collection of issues that may, or may not be related, so the fix may not resolve all occurrences of this message (e.g., firewalld
ripping out IPTables rules while docker is running can still be an issue)
firewalld removed the DOCKER chain's rules; running systemctl restart docker solves it.
Hello everyone!
I have faced this issue and found out that my firewall script removes the DOCKER chain when it runs, which is why I get this error... so restarting the docker service fixes the problem, because docker recreates the chains used by its service.
To fix:
service docker stop
service docker start
But it would be nice if, when running any container-create command, docker checked whether its chain exists and recreated it if not.
Would it be possible to update it?
Sorry it's not possible for me to contribute a pull request.
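Until docker does that itself, a small guard can approximate it (a hypothetical sketch, not part of docker): run it before starting containers, and it restarts the daemon, which recreates the chains, whenever the nat-table DOCKER chain has vanished.

#!/bin/sh
# restart docker (recreating its iptables chains) if the DOCKER nat chain is gone
if ! iptables -t nat -nL DOCKER >/dev/null 2>&1; then
    echo "DOCKER nat chain missing; restarting docker to recreate it"
    service docker restart
fi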
Hi All,
I faced the same problem; this fixed it for me.
Enter the commands below; they will clear all chains.
iptables -t filter -F
iptables -t filter -X
Then restart the Docker service using the command below:
systemctl restart docker
I hope it will work.
@Rajesh-jai
I would suggest you neither flush nor delete your firewall rules... unless you know what you are doing.
Restarting docker will recreate the rules that allow access to your containers, without the need to flush and delete all iptables rules.
Careful with that. You might leave your server too open.
Cheers!
On CentOS 7.1 with docker 1.10.3-46, I restarted the docker service and that solved the problem.
I can consistently replicate the problem using the following steps:
On CentOS Linux release 7.3.1611 (Core)
I get the following error:
ERROR: for webfront Cannot restart container 4cf3aa80c0ca093f311b064c4318477e0d64654e0e3b2921f2e130b3004fe125: driver failed programming external connectivity on endpoint webfront (db42a8b5113b0ed0386a7232004144ba3ee0464eeeee205e04eeac9c19ddad04): iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 8093 -j DNAT --to-destination 172.21.0.7:80 ! -i br-1b5d4184a095: iptables: No chain/target/match by that name.
(exit status 1)
One fix is to disable the firewall integration (?) described here: https://github.com/moby/moby/issues/1871#issuecomment-238761325
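The modern form of that flag is the daemon configuration file (a sketch; be aware that this stops docker from managing any forwarding/NAT rules, so published ports must then be handled manually):

# /etc/docker/daemon.json
{
  "iptables": false
}
# then: systemctl restart docker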
Handy scripts to have around:
docker_rm_all () {
    # force-remove every container; the first awk token is the "CONTAINER" header,
    # used here as a last-chance pause before anything is removed
    for c in `docker ps -a | awk '{ print $1 }'`; do
        if [[ "$c" == "CONTAINER" ]]; then
            echo "Removing all in 2 seconds. Last chance to cancel.";
            sleep 2;
        else
            docker rm -f $c;
        fi
    done
}
docker_kill_all () {
    # kill every running container; same header trick as above
    for c in `docker ps | awk '{ print $1 }'`; do
        if [[ "$c" == "CONTAINER" ]]; then
            echo "Killing all in 2 seconds. Last chance to cancel.";
            sleep 2;
        else
            docker kill $c;
        fi
    done
}
docker_bash () {
    # open an interactive shell in a running container
    docker exec -ti $1 bash;
}
docker_service_restart () {
    # restart the docker service with the given proxy URL exported for the daemon
    if [[ "$1" == "" ]]; then
        echo "please pass a proxy URL before restart"
        return 1  # 'exit' would kill the shell these functions are sourced into
    fi
    sudo https_proxy="$1" \
        http_proxy="$1" \
        HTTP_PROXY="$1" \
        HTTPS_PROXY="$1" \
        service docker restart
}
set_proxy () {
    # export both upper- and lower-case proxy variables for $1 (host:port)
    export HTTP_PROXY=http://$1
    export HTTPS_PROXY=https://$1
    export http_proxy=http://$1
    export https_proxy=https://$1
}
unset_proxy () {
    unset HTTP_PROXY
    unset HTTPS_PROXY
    unset http_proxy
    unset https_proxy
}
Just add it to your bashrc.
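For example, after sourcing it (container name hypothetical):

source ~/.bashrc
docker_kill_all            # kill every running container
docker_bash my_container   # open a shell inside a container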
# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
# docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:25 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:25 2017
OS/Arch: linux/amd64
Experimental: false
journalctl:
Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered blocking state
Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered forwarding state
Jan 23 16:27:34 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth50b629e: link becomes ready
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered blocking state
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered forwarding state
Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered disabled state
Jan 23 16:27:34 localhost.localdomain firewalld[638]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -m ipvs --ipvs -d 10.255.0.0/16 -j SNAT --to-source 10.255.0.2' failed: iptables: No chain/target/match by that name.
Jan 23 16:27:34 localhost.localdomain kernel: IPVS: __ip_vs_del_service: enter
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered disabled state
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered disabled state
Hi guys,
I'm having an error with iptables.
Error response from daemon: Cannot start container 5f358335562f6e0234ec7fea50f9c5cb6a0b44ec16a6c2f09825fe8ce560a135: iptables failed: iptables -t nat -A DOCKER -p tcp -d 0/0 --dport 80 -j DNAT --to-destination 172.17.0.7:80 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1)
cat /etc/centos-release
CentOS release 6.9 (Final)
iptables --version
iptables v1.4.7
docker info
Containers: 3
Images: 37
Storage Driver: devicemapper
Pool Name: docker-253:0-400615-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.886 GB
Data Space Total: 107.4 GB
Data Space Available: 41.53 GB
Metadata Space Used: 2.626 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.117-RHEL6 (2016-12-13)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 2.6.32-431.29.2.el6.x86_64
Operating System: <unknown>
CPUs: 4
Total Memory: 7.684 GiB
Name: acd-web01
ID: VN4G:PLDV:YQ34:B22N:MRET:AUNA:5IGA:DZ66:R6TW:T24B:XWNI:RB7K
@vagnerfonseca CentOS 6 and kernel 2.6.x haven't been supported for a long time (the last version of docker supporting them was Docker 1.7, which was released three years ago and reached end of life a long time ago).
If you want to run Docker, make sure to update to a currently supported release of CentOS 7
In my case (Manjaro Linux) this was caused by iptables simply not running at all. I had to add the docker daemon option --iptables=false to disable any interaction with it.
I ran into this when my default firewalld zone was somehow changed from 'home' to 'public'. I resolved it by changing the default back to home, restarting firewalld, then flushing iptables:
firewall-cmd --set-default-zone=home
firewall-cmd --reload
systemctl restart firewalld
iptables -F
Adding my +1.
Running Arch Linux.
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 3
Server Version: 18.05.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.52-1-lts
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 11.72GiB
Name: mephisto
ID: BRRC:XMKV:WWAM:77LE:35HV:JGCX:P3MS:QZQX:3GOC:REIC:53Y4:ZEHL
Docker Root Dir: /home/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
I have iptables installed, but not firewalld.
Jul 31 13:24:56 mephisto docker[18190]: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint proxy.service (c78e90b3b41c831de60a048d0dcfd73de325e91b2f3c048b27c848ced4972b43): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.17.0.2:8080 ! -i docker0: iptables: No chain/target/match by that name.
Only workaround so far is to use --net=host, which is not necessarily desirable.
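For completeness, that workaround looks like this (image name hypothetical); it bypasses the docker0 bridge, and therefore the DOCKER chains entirely, at the cost of sharing the host's network namespace:

docker run -d --net=host my-image   # no -p needed; the container binds host ports directly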
In my case (Manjaro Linux) this was caused by iptables simply not running at all. I had to add the docker daemon option --iptables=false to disable any interaction with it.
iptables was causing me grief (on Manjaro), so ultimately I stopped it and, following your example, set iptables: false. This worked for me. (Had it failed, I would next have tried net=host, or resorted to putting Docker into a virtual machine.)
I met this warning:
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -j DOCKER' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 0/0 --dport 6379 -j DNAT --to-destination 172.17.0.2:6379 ! -i docker0' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 6379 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 172.17.0.2 -d 172.17.0.2 --dport 6379 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 0/0 --dport 27017 -j DNAT --to-destination 172.17.0.3:27017 ! -i docker0' failed: iptables: No chain/target/match by that name.
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.3 --dport 27017 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 172.17.0.3 -d 172.17.0.3 --dport 27017 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Try creating the chain in iptables by running
iptables -N DOCKER
and if that doesn't work, try upgrading docker and iptables
I have solved the issue by typing:
service iptables restart
service docker restart
Hope it helps.
Hi there.
I'm running a VM.
INFO:
Static hostname: n/a
Transient hostname: aIP-OF-MY-MACHINE
Icon name: computer-vm
Chassis: vm
Machine ID: d4047bd0916d41d38b6b97ff7b5f2b3d
Boot ID: 61456d6912e24569985f0e9343bd8179
Virtualization: qemu
Operating System: openSUSE Tumbleweed
CPE OS Name: cpe:/o:opensuse:tumbleweed:20200817
Kernel: Linux 5.8.0-1-default
Architecture: x86-64
Docker Version:
Client:
Version: 19.03.12
API version: 1.40
Go version: go1.13.15
Git commit: 48a66213fe17
Built: Mon Aug 3 00:00:00 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 48a66213fe17
Built: Mon Aug 3 00:00:00 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.1.5_catatonit
GitCommit:
So, I've been working for almost a week to solve this issue!
My MAIN issue is that I have detected some random disconnects on my VPS; the disconnects affect all ports, losing all access!
I did some research and found in the /var/log/firewalld logs the issues that I mention below.
OUTPUT:
...
2020-09-15 01:21:23 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2020-09-15 01:21:23 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2020-09-15 01:21:26 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
...
I have already executed these commands:
iptables -t filter -F
iptables -t filter -X
Then restarted the Docker service using the commands below:
ip link delete docker0
systemctl restart docker
I have tried these commands, and uninstalled docker to remove its configs... without much success...
It is sad that this is happening! I have work to do in a production environment.