Ingress-nginx: nginx-ingress-controller pod in CrashLoopBackOff state when `externalIPs` and `hostNetwork=true` are set

Created on 10 Feb 2018 · 9 comments · Source: kubernetes/ingress-nginx

BUG REPORT:

When externalIPs and hostNetwork=true are set via helm, I see the nginx-ingress-controller pod in the CrashLoopBackOff state with the following error, even though there is actually no collision on HTTP port 80 on any node in the cluster:

Port 80 is already in use. Please check the flag --http-port

This issue is seen with both service types LoadBalancer and ClusterIP.

In both cases below, controller.service.externalIPs={10.10.97.200} and controller.hostNetwork=true are set.
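
For reference, the same settings can also be passed as a values file instead of --set flags. This is a sketch of the equivalent configuration for the stable/nginx-ingress chart (the file name is arbitrary):

$ cat > nginx-ingress-values.yaml <<'EOF'
# Equivalent to the --set flags used in the commands below.
rbac:
  create: true
controller:
  hostNetwork: true
  service:
    externalIPs:
      - 10.10.97.200
EOF
$ helm install --name my-nginx-ingress stable/nginx-ingress -f nginx-ingress-values.yaml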

Service type LoadBalancer:

$ helm install --name my-nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.hostNetwork=true --set controller.service.externalIPs={10.10.97.200}

$ kubectl get pods | grep my-nginx-ingress-controller
my-nginx-ingress-controller-5b6c48c7dc-z5cvv        0/1       CrashLoopBackOff   2          41s

$ kubectl logs my-nginx-ingress-controller-5b6c48c7dc-z5cvv
I0210 01:36:39.123289       5 flags.go:159] Watching for ingress class: nginx
F0210 01:36:39.123645       5 main.go:59] Port 80 is already in use. Please check the flag --http-port
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.10.2
  Build:      git-fd7253a
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

$ kubectl describe pod my-nginx-ingress-controller-5b6c48c7dc-z5cvv
  Warning  BackOff                1m (x6 over 2m)  kubelet, node1-we2d86faeb2  Back-off restarting failed container
  Warning  FailedSync             1m (x6 over 2m)  kubelet, node1-we2d86faeb2  Error syncing pod

But I see no process listening on HTTP port 80 on any node in the cluster:

$ sudo netstat -pan | grep :80
$

$ sudo lsof -i :80
$
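
(A caveat with these checks: grep :80 also matches ports such as 8080, and they have to be run on the node where the controller pod was actually scheduled. A tighter check, assuming iproute2's ss is available:)

$ sudo ss -ltnp 'sport = :80'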

The same issue occurs with service type ClusterIP:

$ helm install --name my-nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.hostNetwork=true --set controller.service.type="ClusterIP" --set controller.service.externalIPs={10.10.97.200}

$ kubectl get pods | grep my-nginx-ingress-controller
my-nginx-ingress-controller-5b6c48c7dc-6kv4p        0/1       CrashLoopBackOff   1          9s

$ kubectl logs my-nginx-ingress-controller-5b6c48c7dc-6kv4p
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.10.2
  Build:      git-fd7253a
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
I0210 01:43:46.083208       6 flags.go:159] Watching for ingress class: nginx
F0210 01:43:46.083536       6 main.go:59] Port 80 is already in use. Please check the flag --http-port

$ kubectl describe pod my-nginx-ingress-controller-5b6c48c7dc-6kv4p
  Warning  BackOff                59s (x6 over 1m)  kubelet, node1-we2d86faeb2  Back-off restarting failed container
  Warning  FailedSync             59s (x6 over 1m)  kubelet, node1-we2d86faeb2  Error syncing pod

But I see no process listening on HTTP port 80 on any node in the cluster:

$ sudo netstat -pan | grep :80
$

$ sudo lsof -i :80
$

NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

Kubernetes version:
1.8.4

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
    Ubuntu Xenial VM.

  • OS:

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

  • Kernel:
$ uname -a
Linux node1-m51b5b468be 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

What happened:
See issue above.

What you expected to happen:
The nginx ingress controller should work as expected when controller.service.externalIPs={10.10.97.200} and controller.hostNetwork=true are set.

How to reproduce it:
See steps above.

All 9 comments

@vhosakot you cannot use hostNetwork=true if another process is already listening on port 80.

Closing. Please reopen if you have more questions.

@aledbf Thanks for the info. Yes, I see that kube-proxy also uses port 80 on one of the worker nodes, and this causes the port conflict on port 80 for the nginx ingress controller.

$ kubectl get pods -o wide | grep my-nginx-ingress-controller
my-nginx-ingress-controller-5fb9b7c986-lfdln      0/1     CrashLoopBackOff   4      3m     10.10.97.46    node1-we2d86faeb2

On worker node node1-we2d86faeb2, I see kube-proxy listening on port 80, which causes the port conflict for nginx:

admin@node1-we2d86faeb2:~$ sudo lsof -i :80
COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
kube-prox 2215 root   12u  IPv4 3190900      0t0  TCP 10.10.97.200:http (LISTEN)

admin@node1-we2d86faeb2:~$ sudo netstat -pan | grep :80
tcp        0      0 10.10.97.200:80         0.0.0.0:*               LISTEN      2215/kube-proxy

So, does this mean the nginx ingress controller does not work when externalIPs and hostNetwork=true are set, because kube-proxy causes a port conflict on port 80? If yes, I will submit a PR to add this info to the helm documentation at https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress in the kubernetes/charts repo. Let me know.

So, does this mean the nginx ingress controller does not work when externalIPs and hostNetwork=true are set, because kube-proxy causes a port conflict on port 80?

Yes, but this is not a kube-proxy issue. You cannot run two or more pods listening on the same port. Keep in mind that externalIPs only exist as iptables rules created by kube-proxy.
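
(To see this on the affected node: kube-proxy programs NAT rules for the external IP, and in iptables mode it also appears to hold a listening socket on externalIP:port to reserve the port, which is the kube-proxy listener shown in the lsof output above. One way to inspect the rules; the exact chain names vary by kube-proxy version:)

$ sudo iptables-save -t nat | grep 10.10.97.200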

@aledbf Thanks. Agreed, multiple pods using the same port when hostNetwork=true is set cause a port conflict. Using a free port other than 80 for nginx's --http-port is one solution. Is there a way to set nginx's --http-port to a value other than 80 using helm?

Per the helm documentation in https://github.com/kubernetes/charts/blob/master/stable/nginx-ingress/README.md, if I set controller.service.targetPorts.http to a value other than 80, I see that helm sets it as the service's targetPort in https://github.com/kubernetes/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml#L38, but it does not change --http-port of nginx itself. How can I change nginx's --http-port using helm?

@vhosakot I think the helm chart does not provide a setting for custom ports.

@aledbf I see, that is what I thought too. Okay, I'll add a note to the helm docs that when kube-proxy (iptables) is used, setting externalIPs together with hostNetwork=true is an invalid configuration, since kube-proxy causes a port conflict on port 80 for nginx and helm does not allow the user to change nginx's --http-port. Thanks for the info!
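
(A note for later readers: newer versions of the chart added a controller.extraArgs value that passes arbitrary flags to the controller binary. Assuming a chart version that supports it, a sketch like the following should move the HTTP port; it is not verified against the 0.10.x-era chart used here:)

$ helm install --name my-nginx-ingress stable/nginx-ingress \
    --set rbac.create=true \
    --set controller.hostNetwork=true \
    --set controller.service.externalIPs={10.10.97.200} \
    --set controller.extraArgs.http-port=8080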

PR submitted to the kubernetes/charts repo about this info in the docs:

https://github.com/kubernetes/charts/pull/3694

@vhosakot you cannot use hostNetwork=true if another process is already listening on port 80.

Hi,

I have a Virtualmin setup with all the good stuff: Apache, PHP, MySQL, and others. The basic ports are already occupied by those services, and I want to deploy a Rancher v2.4.8 system (which I already did, on port 9080), but I also want to use a Load Balancer (from now on: LB) Ingress with NGINX. I'm not sure what the hostNetwork flag does, but it seems to me that this should not be a problem, because I should be able to configure my LB on whatever port I want without errors. For my Rancher deployment I ran: "docker run -d --restart=always -p 9080:80 -p 9443:443 [...]"

Should I change the port 80 (from 9080:80) to some other port?
Why doesn't it complain about the 443 port?
Should I get rid of the Virtualmin installation and set it up as a container in Kubernetes/Rancher? This is tricky because I like the system's features, such as Linux package upgrades and file editors. Also, since containers use volumes, isn't it dangerous to bind "/" to "/"?
