Describe the bug
On CentOS 7.6, k3s fails to install Traefik; DNS does not work, and the 10.43.0.0 service network is not pingable.
[root@k3s ~]# kubectl logs helm-install-traefik-p59kr --namespace kube-system
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ tiller --listen=127.0.0.1:44134 --storage=secret
+ helm init --client-only
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
[main] 2019/03/19 15:35:04 Starting Tiller v2.12.3 (tls=false)
[main] 2019/03/19 15:35:04 GRPC listening on 127.0.0.1:44134
[main] 2019/03/19 15:35:04 Probes listening on :44135
[main] 2019/03/19 15:35:04 Storage driver is Secret
[main] 2019/03/19 15:35:04 Max history per release is 0
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 10.43.0.10:53: read udp 10.42.0.62:48428->10.43.0.10:53: i/o timeout
[root@k3s ~]# kubectl get pods --all-namespaces=true -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-7748f7f6df-pznr2 1/1 Running 0 106s 10.42.0.63 k3s <none> <none>
kube-system helm-install-traefik-p59kr 0/1 CrashLoopBackOff 2 107s 10.42.0.62 k3s <none> <none>
[root@k3s ~]# kubectl run -it --rm --restart=Never busybox --image=busybox sh
If you don't see a command prompt, try pressing enter.
/ # ping 10.43.0.10
PING 10.43.0.10 (10.43.0.10): 56 data bytes
^C
--- 10.43.0.10 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ #
I see nothing in the server debug log saying there is a problem.
To Reproduce
Run k3s on CentOS 7.6.
Expected behavior
Traefik comes up; DNS works.
I faced the same issue but was able to resolve it with the approach below.
Disable SELinux and firewalld, then switch to iptables-services:
systemctl stop firewalld
systemctl disable firewalld
systemctl unmask firewalld
yum install iptables-services
systemctl enable iptables
systemctl start iptables
Then try k3s again; everything works.
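The steps above can be collected into one sketch (assumptions: CentOS 7, run as root; `DRY_RUN=1`, the default here, only prints the commands instead of executing them):

```shell
# Replace firewalld with plain iptables-services so k3s can manage
# its own rules. Set DRY_RUN=0 to actually execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run systemctl stop firewalld
run systemctl disable firewalld
run systemctl unmask firewalld
run yum install -y iptables-services
run systemctl enable iptables
run systemctl start iptables
```

After switching, restart k3s so it can recreate its NAT chains from a clean state.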
Didn't help. I happen to have a NanoPi NEO2 running Debian 9.6 that I can compare against, and found that the CNI NAT rules are not being created for some reason.
i.e., on the working Debian box:
root@nanopineo2:~# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT 2 packets, 571 bytes)
pkts bytes target prot opt in out source destination
90 5470 CNI-HOSTPORT-DNAT all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
1528K 236M KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain INPUT (policy ACCEPT 2 packets, 571 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 8 packets, 480 bytes)
pkts bytes target prot opt in out source destination
720K 43M CNI-HOSTPORT-DNAT all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
1193K 73M KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain POSTROUTING (policy ACCEPT 8 packets, 480 bytes)
pkts bytes target prot opt in out source destination
1309K 80M CNI-HOSTPORT-MASQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* CNI portfwd requiring masquerade */
1252K 76M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
578K 35M RETURN all -- * * 10.42.0.0/16 10.42.0.0/16
204 12967 MASQUERADE all -- * * 10.42.0.0/16 !224.0.0.0/4
0 0 RETURN all -- * * !10.42.0.0/16 10.42.0.0/24
0 0 MASQUERADE all -- * * !10.42.0.0/16 10.42.0.0/16
Chain CNI-DN-6e28509cce5633e1d3cd8 (1 references)
pkts bytes target prot opt in out source destination
0 0 CNI-HOSTPORT-SETMARK tcp -- * * 10.42.0.4 0.0.0.0/0 tcp dpt:80
57639 3458K CNI-HOSTPORT-SETMARK tcp -- * * 127.0.0.1 0.0.0.0/0 tcp dpt:80
57639 3458K DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:10.42.0.4:80
0 0 CNI-HOSTPORT-SETMARK tcp -- * * 10.42.0.4 0.0.0.0/0 tcp dpt:443
0 0 CNI-HOSTPORT-SETMARK tcp -- * * 127.0.0.1 0.0.0.0/0 tcp dpt:443
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 to:10.42.0.4:443
Chain CNI-HOSTPORT-DNAT (2 references)
pkts bytes target prot opt in out source destination
57639 3458K CNI-DN-6e28509cce5633e1d3cd8 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* dnat name: "cbr0" id: "af756018d260655be752b788f6c26b3b374ab94f18692f9893551cbd06359e24" */ multiport dports 80,443
Chain CNI-HOSTPORT-MASQ (1 references)
pkts bytes target prot opt in out source destination
57639 3458K MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 mark match 0x2000/0x2000
Chain CNI-HOSTPORT-SETMARK (4 references)
pkts bytes target prot opt in out source destination
57639 3458K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* CNI portfwd masquerade mark */ MARK or 0x2000
Chain KUBE-FW-IKNZCF5XJQBTG3KZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:https loadbalancer IP */
0 0 KUBE-SVC-IKNZCF5XJQBTG3KZ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:https loadbalancer IP */
0 0 KUBE-MARK-DROP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:https loadbalancer IP */
On the CentOS box, we get this:
[root@k3s ~]# iptables -t nat -L -v -n
Chain PREROUTING (policy ACCEPT 14 packets, 942 bytes)
pkts bytes target prot opt in out source destination
578 45217 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain INPUT (policy ACCEPT 7 packets, 350 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 177 packets, 10617 bytes)
pkts bytes target prot opt in out source destination
3274 199K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain POSTROUTING (policy ACCEPT 177 packets, 10617 bytes)
pkts bytes target prot opt in out source destination
3611 231K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
116 10426 RETURN all -- * * 10.42.0.0/16 10.42.0.0/16
248 22744 MASQUERADE all -- * * 10.42.0.0/16 !224.0.0.0/4
0 0 RETURN all -- * * !10.42.0.0/16 10.42.0.0/24
0 0 MASQUERADE all -- * * !10.42.0.0/16 10.42.0.0/16
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (8 references)
So that's why it's not working. Now to find out why the rules are not getting created.
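A quick way to check for the missing chains (a diagnostic sketch, assuming iptables is available; on a healthy node the CNI-HOSTPORT-DNAT and CNI-HOSTPORT-MASQ chains should show up):

```shell
# List any CNI-* chain definitions in the NAT table.
chains=$(iptables -t nat -S 2>/dev/null | grep '^-N CNI-' || true)
if [ -n "$chains" ]; then
  echo "CNI NAT chains present:"
  printf '%s\n' "$chains"
else
  echo "CNI NAT chains missing"
fi
```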
Actually, I have the same problem, and using iptables instead of firewalld does not help at all.
SELinux also runs in permissive mode, so no enforcing rules are enabled.
It looks like after disabling firewalld, I needed to clear the old rules via iptables -F and iptables -t nat -F.
Well, my problem is simple: the network I am on does not allow querying the DNS server at 1.1.1.1.
[root@k3s ~]# ping 1.1.1.1 -c 1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=2.75 ms
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.752/2.752/2.752/0.000 ms
[root@k3s ~]# host www.nersc.gov 1.1.1.1
;; connection timed out; no servers could be reached
[root@k3s ~]# host www.nersc.gov 8.8.8.8
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases:
www.nersc.gov is an alias for www5.nersc.gov.
www5.nersc.gov has address 128.55.209.20
www5.nersc.gov has IPv6 address 2620:0:28b0:d1::14
[root@k3s ~]#
Because that IP address is hard-coded right now, it makes k3s difficult to use here. I might have to add a NAT rule on the gateway node to redirect that IP address.
doing this:
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to 192.168.85.254:53
on the gateway NAT node fixed it for now.
So, check that 1.1.1.1 works as a DNS server from your node.
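A quick probe from the node (a sketch; it assumes the `host` utility from bind-utils is installed, and uses kubernetes.io as an arbitrary test name):

```shell
# check_dns: succeeds if the given resolver answers a lookup
# within 2 seconds.
check_dns() {
  host -W 2 kubernetes.io "$1" >/dev/null 2>&1
}

if check_dns 1.1.1.1; then
  echo "1.1.1.1 answers DNS queries"
else
  echo "1.1.1.1 is blocked; redirect port 53 on the gateway or change the upstream resolver"
fi
```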
[root@k3s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-7748f7f6df-rdrjx 1/1 Running 0 2m40s 10.42.0.92 k3s <none> <none>
kube-system helm-install-traefik-prrsg 0/1 Completed 0 2m40s 10.42.0.91 k3s <none> <none>
kube-system svclb-traefik-586fdcf757-7gcdk 2/2 Running 0 2m27s 10.42.0.94 k3s <none> <none>
kube-system traefik-7b6bd6cbf6-2jhmk 1/1 Running 0 2m27s 10.42.0.93 k3s <none> <none>
[root@k3s ~]#
btw, do 'ln -s k3s kubectl' and you now have kubectl as a command (same thing with crictl)
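A sketch of that symlink setup (assumption: the k3s binary lives in /usr/local/bin; it dispatches on the name it is invoked as):

```shell
BIN=${BIN:-/usr/local/bin}   # assumption: where the k3s binary lives

# Expose kubectl and crictl as commands by symlinking them to k3s.
link_tools() {
  for tool in kubectl crictl; do
    ln -sf "$BIN/k3s" "$BIN/$tool"   # k3s dispatches on argv[0]
  done
}

# Only link if k3s is actually installed there.
if [ -x "$BIN/k3s" ]; then link_tools; fi
```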
The DNS patch fixes this problem.
Closing.
Hello, I also have this problem on Debian 10. How did you solve it?