Version:
v0.9.1
Describe the bug
Cannot access a nginx service from its NodePort
To Reproduce
k3scluster@node:$ cat > manifest <<EOF
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   name: nginx-deployment
> spec:
>   selector:
>     matchLabels:
>       app: nginx-app
>   replicas: 1
>   template:
>     metadata:
>       labels:
>         app: nginx-app
>     spec:
>       containers:
>       - name: nginx
>         image: nginx:1.13.12
>         ports:
>         - containerPort: 80
> ---
> apiVersion: v1
> kind: Service
> metadata:
>   labels:
>     app: nginx-app
>   name: nginx-svc
>   namespace: default
> spec:
>   type: NodePort
>   ports:
>   - port: 80
>   selector:
>     app: nginx-app
> EOF
k3scluster@node:$ kubectl apply -f manifest
deployment.apps/nginx-deployment created
service/nginx-svc created
k3scluster@node:$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        4m41s
nginx-svc    NodePort    10.43.112.32   <none>        80:32136/TCP   59s
k3scluster@node:$ kubectl describe svc nginx-svc
Name: nginx-svc
Namespace: default
Labels: app=nginx-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx-app"},"name":"nginx-svc","namespace":"default"},"s...
Selector: app=nginx-app
Type: NodePort
IP: 10.43.112.32
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32136/TCP
Endpoints: 10.42.0.6:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
k3scluster@node:$ curl localhost:32136
This just hangs forever and doesn't return anything.
Expected behavior
The default Nginx index.html should be returned by the curl command
Actual behavior
curl times out
Additional context
Running on RPi 3B with Raspbian Buster
@cjdcordeiro With a Service of type NodePort, a port in the range 30000-32767 is allocated on the node, which can be accessed from outside the cluster using the node's public IP and the NodePort.
In your example:
curl http://<public ip>:32136
It can also be accessed locally using the ClusterIP and port:
curl http://10.43.112.32
Hope this helps. Please verify.
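As a quick sanity check, the allocated NodePort can be read back from the Service and curled against a node address (the address below is a placeholder; substitute your node's actual IP):
$ NODE_PORT=$(kubectl get svc nginx-svc -o jsonpath='{.spec.ports[0].nodePort}')
$ curl http://<node ip>:${NODE_PORT}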
@ShylajaDevadiga Yes, I know; that's the command I've posted in the ticket description above, and it doesn't work.
My best guess is that this has something to do with iptables-nft, but I'm not a network expert, so I'd rather have confirmation and/or a workaround from the developers.
iptables needs to be in legacy mode.
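For reference, the usual way to switch on Debian/Raspbian Buster (a sketch; paths assume the stock Debian packaging) is:
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo reboot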
Thanks for your hint, but I've already switched to legacy iptables:
$ iptables -V
iptables v1.8.2 (legacy)
Still no luck. I'm able to browse http://
Just tried to reproduce/verify. Used the manifest exactly as pasted above. No issues; it works without any problems. curl works from outside the cluster with
Setup/versions:
Ubuntu 19.10 on RPi 4
iptables v1.8.3 (legacy)
k3s version v1.0.1 (e94a3c60)
I have the same problem.
NAMESPACE     NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes           ClusterIP      10.43.0.1     <none>        443/TCP                      10m
kube-system   kube-dns             ClusterIP      10.43.0.10    <none>        53/UDP,53/TCP,9153/TCP       10m
kube-system   metrics-server       ClusterIP      10.43.216.4   <none>        443/TCP                      10m
kube-system   traefik-prometheus   ClusterIP      10.43.77.39   <none>        9100/TCP                     9m45s
kube-system   traefik              LoadBalancer   10.43.13.81   10.0.1.114    80:30869/TCP,443:32370/TCP   9m44s
default       nginx                NodePort       10.43.21.26   <none>        80:30400/TCP                 4m54s
Only curl on the nginx host over ssh works.
I scanned the two nodes with nmap, and for both of them I see: 30400/tcp filtered gs-realtime
With PuTTY and a tunnel configured to the nginx host machine, it works.
I've worked 14 hours on this without success ...
And I don't understand how the client knows to reach the specific host IP that's running the nginx pod ...
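For context: with a NodePort service, kube-proxy programs DNAT rules on every node, so a connection to any node's IP on port 30400 is forwarded to the nginx pod no matter which node runs it. A sketch of how to inspect those rules, assuming kube-proxy's default iptables mode:
$ sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 30400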
Raspbian GNU/Linux 10 (buster)
iptables v1.8.2 (legacy)
k3s version v1.17.2+k3s1 (cdab19b0)
Help would be appreciated.
(Sorry for my bad English.)
It was very strange; I managed to fix it by:
Then it worked.
Yes, after I restarted the whole cluster it works.
This doc can help to understand how NodePort and other service types work: https://kubernetes.io/docs/tutorials/services/source-ip/
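That tutorial also shows, for example, how to restrict a NodePort so that only the node actually running the pod answers (the service name below assumes the nginx service from the listing above):
$ kubectl patch svc nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'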
Thanks!
@erikwilson any improvements to be made here? Does our check script check for iptables in legacy mode?
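For reference, k3s ships a built-in config checker that can be run on the host; whether it currently flags an nft-backed iptables is exactly the open question here:
$ sudo k3s check-config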
Hi,
~I also stumbled across this problem, but my setup is a bit different. I'm using K3d and created a NodePort service. Looking into the iptables rules, I can see that the port is blocked in the KUBE-EXTERNAL-SERVICES chain, with a notice that the service has no endpoints. In my example it's the mosquitto service:~
~Any suggestions on whether this is a bug in K3s or a setup problem?~
/ # iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
ACCEPT all -- 10.42.0.0/16 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 10.42.0.0/16
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 /* mqtt/mosquitto-service has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31883 reject-with icmp-port-unreachable
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 10.43.135.148 /* mqtt/mosquitto-service has no endpoints */ tcp dpt:8883 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 172.20.0.2 /* mqtt/mosquitto-service has no endpoints */ tcp dpt:8883 reject-with icmp-port-unreachable
Edit: Actually, it seems that my pods were crashing, which caused Kubernetes to drop the endpoints. So K3s correctly added those iptables rules.
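A quick way to verify that is to check the Endpoints object behind the service (names taken from the rule comments above):
$ kubectl get endpoints mosquitto-service -n mqtt
$ kubectl get pods -n mqtt
If the pods aren't Ready, the Endpoints list stays empty and kube-proxy installs exactly those "has no endpoints" REJECT rules.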