K3s: Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort

Created on 14 Feb 2020 · 19 Comments · Source: k3s-io/k3s

Version:
k3s version v1.0.0 (18bd921c)

Describe the bug
Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort.

To Reproduce
Install k3s v1.0.0.

Expected behavior
The Traefik ingress controller will use ports 80, 443, and 8080 on the host

Actual behavior
Traefik ingress controller listens on a random NodePort (e.g. 30579).

kubectl get svc --namespace=kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                     AGE
kube-dns         ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP                      97m
metrics-server   ClusterIP      10.43.162.222   <none>          443/TCP                                     97m
traefik          LoadBalancer   10.43.20.185    172.16.24.138   80:30579/TCP,443:30051/TCP,8080:32535/TCP   96m

Labels: Unscheduled, kind/question


All 19 comments

traefik LoadBalancer 10.43.20.185 172.16.24.138 80:30579/TCP,443:30051/TCP,8080:32535/TCP 96m

The traefik service is listening on ports 80, 443 and 8080. You should be able to access 172.16.24.138:80, 172.16.24.138:443 and 172.16.24.138:8080.
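One quick way to confirm whether the service ports actually answer is to probe them directly (a diagnostic sketch; 172.16.24.138 is the LoadBalancer external IP from the `kubectl get svc` output above, and the commands must be run from a host that can reach it):

```shell
# Probe the traefik service ports on the LoadBalancer IP.
# A completed TCP handshake in the verbose output means the
# kube-proxy rules are forwarding; a timeout means they are not.
curl -sv --connect-timeout 5 http://172.16.24.138:80/ -o /dev/null
curl -skv --connect-timeout 5 https://172.16.24.138:443/ -o /dev/null
```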

@dabio no, I tried... only port 30579 works for HTTP...
172.16.24.138 is the host IP.

Are your iptables rules broken or something? What OS is this on?

Ubuntu 18.04, on a brand-new ECS instance in Alibaba Cloud (similar to an AWS EC2 instance).

Is this issue reproducible? What about on another platform? This should be working.

@davidnuzik can you tell me why this should be working?
I found that k3s is not listening on ports 80 or 443, so how can this work?

Based on our suite of tests against Ubuntu 18.04 and CentOS 7 this should work. I would review firewall rules, etc. You mentioned cloud instances (Alibaba ECS) -- has the security group been set up correctly?

I installed k3s again with the no-traefik option, and installed the nginx-ingress Helm chart with NodePorts 30080 and 30443.

nginx-ingress-controller        LoadBalancer   10.43.101.91    172.16.55.78   80:30080/TCP,443:30443/TCP   9h

I found that k3s-serve is listening on port 30080 but not 80.

root@testing-k3s-master:~# lsof -i :30080
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
k3s-serve 2975 root  255u  IPv6  35407      0t0  TCP *:30080 (LISTEN)

But the ingress can still be reached on port 80, so how does k3s achieve this? Via iptables?

Confirming this too; the external interface is listening on the random port number, not the service port number.

$ kubectl get svc --namespace=kube-system
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
kube-dns             ClusterIP      10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP       19h
metrics-server       ClusterIP      10.43.56.99    <none>        443/TCP                      19h
traefik-prometheus   ClusterIP      10.43.218.74   <none>        9100/TCP                     19h
traefik              LoadBalancer   10.43.2.147    10.127.0.1    80:30046/TCP,443:30259/TCP   19h

On the master:

# ss -tnlp sport = 443
State           Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port           Process           
root@elloe01:~# ss -tnlp sport = 30259
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process           
LISTEN           0                4096                                   *:30259                                *:*              users:(("k3s-server",pid=16866,fd=284))

On one of the workers:

# ss -tnlp sport = 443
State           Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port           Process           
root@innovation00:~# ss -tnlp sport = 30259
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process           
LISTEN           0                4096                                   *:30259                                *:*              users:(("k3s-agent",pid=27726,fd=179))

Yes, this is how kubernetes (specifically kube-proxy) works. The service is exposed on a random node port, and kube-proxy uses iptables rules to masquerade and redirect traffic from the loadbalancer address and port to the appropriate node port.

brandond@seago:~$ kubectl get svc --namespace=traefik
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                     AGE
traefik   LoadBalancer   10.43.25.120   10.0.3.80     80:31417/TCP,443:31119/TCP,9000:31462/TCP   57d
brandond@seago:~$ sudo iptables -vnL -t nat | grep traefik/traefik:websecure
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  all  --  *      *       10.0.3.80            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-MARK-DROP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure loadbalancer IP */
    0     0 KUBE-MARK-MASQ  tcp  --  *      *       127.0.0.0/8          0.0.0.0/0            /* traefik/traefik:websecure */ tcp dpt:31119
    0     0 KUBE-XLB-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure */ tcp dpt:31119
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.42.0.0/16         10.43.25.120         /* traefik/traefik:websecure cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            10.43.25.120         /* traefik/traefik:websecure cluster IP */ tcp dpt:443
    0     0 KUBE-FW-LODJXQNF3DWSNB7B  tcp  --  *      *       0.0.0.0/0            10.0.3.80            /* traefik/traefik:websecure loadbalancer IP */ tcp dpt:443
    0     0 KUBE-MARK-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* masquerade LOCAL traffic for traefik/traefik:websecure LB IP */ ADDRTYPE match src-type LOCAL
    0     0 KUBE-SVC-LODJXQNF3DWSNB7B  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* route LOCAL traffic for traefik/traefik:websecure LB IP to service chain */ ADDRTYPE match src-type LOCAL
    0     0 KUBE-MARK-DROP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* traefik/traefik:websecure has no local endpoints */

If this isn't working for you, then you've probably got something wrong with your iptables configuration - such as running on a distro that uses iptables-nft and not installing the iptables-legacy tools.
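One way to check which iptables backend is actually active is the version string (a diagnostic sketch for Debian/Ubuntu; the alternatives paths below are the usual ones there but may differ on other distros):

```shell
# The version string reports the active backend:
# "iptables v1.8.x (legacy)" or "iptables v1.8.x (nf_tables)".
iptables --version

# On Debian/Ubuntu, switch the alternatives to the legacy backend.
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```

After switching, restart k3s so kube-proxy re-creates its chains with the legacy tool.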

In our case we're based on Ubuntu 20.04 and using the 'legacy' iptables, and the various KUBE-* chains and rules are in place. Despite all that, it's unclear why exposed services cannot be reached from the public IP addresses of the workers.

I suspect the original reporter on this issue has the same problem as we're seeing but came to the same conclusion as we did when the expected behaviour wasn't observed.

Like the original reporter, we can only connect to the exposed services from outside the cluster via the random port numbers, not the well-known service ports.

I think I've figured out our issue.

Our master(s) are deployed on the office LAN. Workers are in remote data centres. Because the cluster needs to be on its own subnet to avoid PNAT/routing issues, we've created a WireGuard VPN that the cluster uses on 10.127.0.0/16, with the master on 10.127.0.1.

On the master we can connect to traefik over HTTP (tested using telnet), but from the workers that fails (strange, since the workers can reach the master via the 10.127.0.0/16 subnet).

root@innovation00:~# nmap 10.127.0.1
Starting Nmap 7.80 ( https://nmap.org ) at 2020-03-27 07:44 UTC
Nmap scan report for elloe01.k3s (10.127.0.1)
Host is up (0.027s latency).
Not shown: 997 closed ports
PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  filtered http
443/tcp filtered https

However, it is clear our issue is to do with our 'IoT' edge network requirements rather than a problem with traefik.

I'm a new learner and I don't quite understand. I previously had nginx listening on 80 and 443, and now I'm trying to install k3s. Does "The Traefik ingress controller will use ports 80, 443, and 8080 on the host" mean they are not compatible?
It's really puzzling. Neither of them gave an error. curl 127.0.0.1 simply hung, even though ss -tlnp showed nginx listening normally on 0.0.0.0:80. curl <myip> still returned a response even after systemctl stop k3s. The symptoms persisted until a reboot; killing and restarting nginx or k3s didn't help.

If you run traefik as root, it can bind to 80 and 443.
If you want to run the traefik container as a non-root user and still bind to 80, that's not easy to do; see this issue.
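Two common workarounds for binding low ports without root exist (a sketch; whether either applies depends on how traefik is packaged in your deployment, and the binary path below is purely an example):

```shell
# Option 1: lower the kernel's unprivileged-port floor (kernel >= 4.11),
# so any user in this network namespace may bind ports >= 80.
sysctl -w net.ipv4.ip_unprivileged_port_start=80

# Option 2: grant only the NET_BIND_SERVICE capability to the binary.
# /usr/local/bin/traefik is a hypothetical path, not where k3s installs it.
setcap 'cap_net_bind_service=+ep' /usr/local/bin/traefik
```

In Kubernetes, the equivalent of option 2 is adding NET_BIND_SERVICE to the container's securityContext capabilities.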

Have the same issue on a fresh install (kubeadm way) on Ubuntu 20.04.1 LTS.

traefik is using the normal ports 80, 443, 8080, forwarded to random ports:
(screenshot)

Hosts are (very well configured, no mistake):
(screenshot)

Accessing the normal ports from inside doesn't work:
(screenshot)

Those normal ports are closed from outside:
(screenshot)

iptables modules and configuration are loaded correctly (no mistake):
(screenshot)

Even though everything is set correctly, I can't access the traefik ports on my cluster. Hope the community can give us a clue on this.
I think it has nothing to do with k8s; it has something to do with iptables.

Try setting hostPort for the web and websecure ports. That will create DNAT rules in iptables.

@fox-md you can't simply set up hostPort... I'm using a service with NodePort; hostPort needs to be set on the pods, not on the service... and it's not working in either case (I've tried).

Thank you.

Hi @magixus,
My understanding is that putting hostPort into the picture creates DNAT rules that help requests to ports 80 and 443 reach the ingress pod.

[root@kubeworker01 ~]# iptables-save | grep "CNI-DN" | grep "to-destination"
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.36.0.4:8000
-A CNI-DN-051e5bdafb630d2c22b59 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.36.0.4:8443

Without hostPort, I do not quite understand what would make iptables create NAT rules to route requests to the ingress-controller service.
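For reference, hostPort belongs on the container in the pod spec, not on the Service. A minimal sketch (the pod name and image here are examples for illustration, not what k3s deploys):

```shell
# Apply a throwaway pod whose container requests hostPorts 80/443;
# the CNI plugin then installs CNI-DN DNAT rules like the ones quoted above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 80
    - containerPort: 443
      hostPort: 443
EOF
```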

To be honest I didn't understand your reply, but I can tell you that I tried setting hostPort as well, along with my configurations, and it didn't work; no DNAT rule was created, unfortunately.
Anyway, I settled on a startup script, as follows:

#!/bin/bash

#sleep 2m
TRAEFIK_IP=$(kubectl get pods -n kube-system -o wide | grep traefik | awk '{print $6}')


# check IP PREROUTING 
PREROUTING_IP=$(iptables -t nat -vnL PREROUTING --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$PREROUTING_IP" == 4 ] ; then
    # update IP DNAT prerouting rules
    iptables -R PREROUTING 3 -t nat -i ens160 -p tcp --dport 80 -j DNAT --to $TRAEFIK_IP:80
    iptables -R PREROUTING 4 -t nat -i ens160 -p tcp --dport 443 -j DNAT --to $TRAEFIK_IP:443
elif [ "$PREROUTING_IP" == 2 ]; then 
    # create DNAT prerouting rules if they don't exist
    iptables -A PREROUTING -t nat -i ens160 -p tcp --dport 80 -j DNAT --to $TRAEFIK_IP:80
    iptables -A PREROUTING -t nat -i ens160 -p tcp --dport 443 -j DNAT --to $TRAEFIK_IP:443
fi

# check IP FORWARD 
FORWARD_IP=$(iptables -vnL FORWARD --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$FORWARD_IP" == 12 ] ; then
    # update IP DNAT FORWARD rules
    iptables -R FORWARD 11 -p tcp -d $TRAEFIK_IP --dport 80 -j ACCEPT
    iptables -R FORWARD 12 -p tcp -d $TRAEFIK_IP --dport 443 -j ACCEPT
elif [ "$FORWARD_IP" == 10 ]; then 
    # create DNAT FORWARD rules if they don't exist
    iptables -A FORWARD -p tcp -d $TRAEFIK_IP --dport 80 -j ACCEPT
    iptables -A FORWARD -p tcp -d $TRAEFIK_IP --dport 443 -j ACCEPT
fi

What the script does is basically check the existing PREROUTING and FORWARD rules and update or create them accordingly.
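A somewhat more robust variant of the same idea (a sketch, not tested on this setup) avoids counting rule lines by using `iptables -C`, which exits non-zero when a rule is absent, so the rule is only appended when missing:

```shell
#!/bin/bash
# Hypothetical helper: ensure a DNAT rule to the traefik pod IP exists.
# ens160 and the pod-lookup command mirror the script above; adjust both
# for your environment.
TRAEFIK_IP=$(kubectl get pods -n kube-system -o wide | grep traefik | awk '{print $6}')

ensure_dnat() {
    local port=$1
    # -C checks for an exact matching rule; append only if the check fails.
    iptables -t nat -C PREROUTING -i ens160 -p tcp --dport "$port" \
        -j DNAT --to "$TRAEFIK_IP:$port" 2>/dev/null ||
    iptables -t nat -A PREROUTING -i ens160 -p tcp --dport "$port" \
        -j DNAT --to "$TRAEFIK_IP:$port"
}

ensure_dnat 80
ensure_dnat 443
```

Note this only appends; stale rules pointing at an old pod IP would still need to be cleaned up separately.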
