Ingress-nginx: Ingress - Unable to connect to LB on port 80

Created on 4 Apr 2017 · 23 comments · Source: kubernetes/ingress-nginx

_From @microadam on April 6, 2016 13:9_

Following the examples mentioned here:

https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx

I get everything created and running; however, when I try to connect to my host on its external IP address and port 80, I just get connection refused.

Even trying to curl http://127.0.0.1 on the machine itself returns connection refused, so it's as if the load-balancing pod is not actually listening on the host's port 80.

Anyone have any suggestions as to how to go about debugging?

Thanks a lot

_Copied from original issue: kubernetes/contrib#717_


Most helpful comment

Yes, that is what I did. I used a static nodePort in the ingress service so that the external firewall in front of the k8s cluster could simply be configured to load-balance to the same destination port on each node.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
    - port: 443
      name: https
      nodePort: 32101
  selector:
    k8s-app: nginx-ingress-lb

All 23 comments

_From @aledbf on April 6, 2016 22:47_

@microadam how are you running the controller? Can you post your yaml file?

_From @MariusVolkhart on April 6, 2016 23:3_

@aledbf I'm seeing the same symptoms. I'm using the yaml from https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default/rc-default.yaml verbatim.

The echo service is working if I hit it straight on the NodePort. If it helps, when I do kubectl get ing I see that no address is assigned. http://kubernetes.io/docs/user-guide/ingress/#simple-fanout seems to suggest that I should see an address.

_From @aledbf on April 7, 2016 0:1_

@MariusVolkhart can you test whether you see the same behavior using aledbf/nginx-third-party:0.11? (That image contains https://github.com/kubernetes/contrib/pull/707.)

_From @MariusVolkhart on April 7, 2016 4:45_

After some digging, this doesn't appear to be container related.

Logs for the LB pod show main.go:102] failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory, and a search led me to look into the admission-controllers flag. This led me to https://github.com/kubernetes/kubernetes/pull/22957 which suggests this is a problem with the default vSphere setup (that's what I've got).

Will try adding the flag and see if that fixes things.
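
For others hitting the missing service-account token: on hand-rolled clusters of that era, the usual cause was the ServiceAccount admission plugin not being enabled on the apiserver. A minimal sketch of the relevant piece of a static-pod manifest, assuming a hyperkube-style apiserver (the image tag and the other flags here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: gcr.io/google_containers/hyperkube:v1.2.1
      command:
        - /hyperkube
        - apiserver
        - --etcd-servers=http://127.0.0.1:2379
        - --service-cluster-ip-range=10.0.0.0/16
        # ServiceAccount must be in this list for pods to get the
        # /var/run/secrets/kubernetes.io/serviceaccount/token mount:
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota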

_From @microadam on April 7, 2016 6:53_

I am using the same Yaml file as Marius (rc-default.yaml from the nginx echoheaders example folder).

I am running a different setup to Marius, though, and I do have addresses assigned to the ingress, so I think these may be two different issues.

Not sure if this could be related to https://github.com/kubernetes/kubernetes/issues/23920? It seems like the same underlying issue.

Also tried using aledbf/nginx-third-party:0.11 and have the same issue

_From @aledbf on April 10, 2016 22:31_

@microadam please use the latest version gcr.io/google_containers/nginx-ingress-controller:0.5

_From @microadam on April 11, 2016 14:50_

@aledbf That's the image I have been using and having the issues with.

_From @aledbf on April 11, 2016 15:29_

@microadam which kubernetes version are you using, and where (GCE, AWS, etc.)?

Even trying to curl http://127.0.0.1

That will not work; the examples do not use hostNetwork: true. You need to check the announced IP in the Ingress rule(s) (kubectl get ing).
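
For reference, a minimal sketch of what enabling host networking on the controller pod would look like; the pod name and image tag below are illustrative, not taken from the examples:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ingress-controller
spec:
  # hostNetwork: true puts the pod in the node's network namespace,
  # so nginx binds ports 80/443 on the host itself; only then would
  # curl http://127.0.0.1 on the node connect.
  hostNetwork: true
  containers:
    - name: nginx-ingress-controller
      image: gcr.io/google_containers/nginx-ingress-controller:0.5
      ports:
        - containerPort: 80
        - containerPort: 443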

_From @microadam on April 11, 2016 20:11_

@aledbf I am on k8s 1.2.1 currently. Running on a Linode VM. The setup I am using is done via ansible using this repo:

https://github.com/microadam/ansible-kubernetes-tinc

The IP address that I get from kubectl get ing is the public IP address of the machine that is running the ingress controller. Curling that from anywhere has the same issue...

Please let me know if there is anything else I can provide to help debug.

Assuming I shouldn't expect to see anything listening on port 80 on the host machine when running netstat? (Still trying to understand how the various bits of k8s work, especially the networking.)

Thanks

_From @PaoloneM on November 3, 2016 22:24_

Same issue with k8s 1.4.3 on a CoreOS cluster and the gcr.io/google_containers/nginx-ingress-controller:0.8.3 ingress controller image.
I can reach the nginx pod from inside the cluster by curling its IP, but can't reach it from inside or outside the cluster using the node IP.

_From @jleavers on November 22, 2016 9:16_

I have also seen this issue testing with k8s 1.4.4 + gcr.io/google_containers/nginx-ingress-controller:0.8.3, running on an Ubuntu cluster.

The nginx pod IP responds and forwards to echoheaders as expected:

# kubectl describe po -l name=nginx-ingress-lb | grep IP
IP:     10.44.0.1

# curl 10.44.0.1/foo -H 'Host: foo.bar.com'
CLIENT VALUES:
client_address=10.44.0.1
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/foo

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.47.0
x-forwarded-for=10.32.0.1
x-forwarded-host=foo.bar.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=10.32.0.1
BODY:
-no body in request-

The node IP does not respond, even from another host on the same network or from the node itself:

# kubectl describe po -l name=nginx-ingress-lb | grep Node
Node:       k8s-node-3/10.1.2.3

# kubectl describe ing
Name:           echomap
Namespace:      default
Address:        10.1.2.3
Default backend:    default-http-backend:80 (<none>)
Rules:
  Host      Path    Backends
  ----      ----    --------
  foo.bar.com
            /foo    echoheaders-x:80 (<none>)
  bar.baz.com
            /bar    echoheaders-y:80 (<none>)
            /foo    echoheaders-x:80 (<none>)
Annotations:
No events.

# curl 10.1.2.3/foo -H 'Host: foo.bar.com'
curl: (7) Failed to connect to 10.1.2.3 port 80: Connection refused

_From @jleavers on November 22, 2016 19:26_

I suspect this is a known issue, documented here: https://github.com/kubernetes/kubernetes/issues/35875

I'm having the same problem. Do you know about any workarounds for this? Putting a service in front?

Yes, that is what I did. I used a static nodePort in the ingress service so that the external firewall in front of the k8s cluster could simply be configured to load-balance to the same destination port on each node.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
    - port: 443
      name: https
      nodePort: 32101
  selector:
    k8s-app: nginx-ingress-lb
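
(A note on the design choice above: pinning the nodePort means every node forwards the same port to the controller, so the upstream firewall needs a single static rule per port instead of tracking a randomly assigned NodePort after each redeploy.)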

Dear experts, this issue has been open for a year. Do we have a fix or an alternative now?
Thanks for sharing.

nodePort: 32101

@jleavers where did you get the 32101 from? I am trying to find a workaround until k8s 1.7 can be used. I managed to expose the ingress through a static IP following https://github.com/kubernetes/ingress/tree/master/examples/static-ip/nginx, but my service file would look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
  labels:
    app: nginx-ingress-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    k8s-app: nginx-ingress-controller

This also requires starting the controller only once the service has an IP, and passing --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb as an extra arg. Right now it only works for port 80 for me.

Edit: got it to work after specifying separate TLS secrets for each rule in the ingress. It appears the default TLS secret is not applied otherwise.
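
For anyone wiring this up, a minimal sketch of where the --publish-service argument goes in the controller deployment; the deployment name, image tag, and default-backend name are assumptions, not taken from the comment above:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      containers:
        - name: nginx-ingress-controller
          image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            # Publish the service's external IP on Ingress objects
            # instead of the node IP:
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
          env:
            # Required for the $(POD_NAMESPACE) substitution above.
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace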

@ensonic I just decided to use ports from 32101 upwards for static services, so an additional ingress might have a service on 32102, etc. (sketched after this comment). The logic was that the default range is 30000-32767, so it was hopefully unlikely there would be a conflict.

This particular setup uses a wildcard certificate, so each time a service is added we add an entry to the hosts section as well as a rule.
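
As a sketch of that numbering scheme (all names below are hypothetical), a second ingress controller's service would simply pin the next port:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-2
spec:
  type: LoadBalancer
  ports:
    - port: 443
      name: https
      nodePort: 32102   # next static port after 32101
  selector:
    k8s-app: nginx-ingress-lb-2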

I am now on kubernetes 1.7.8 (GKE) and this is still broken. Sadly, the workaround with the service no longer works either (https://github.com/kubernetes/ingress-nginx/issues/348). What do I have to do to make sure kubernetes exposes the ports?

Or, if you believe it is actually fixed, how can I verify that it is? I am getting a
'....com port 443: Connection timed out' when curling my site (with host headers set). I don't see anything in the nginx logs when trying to connect (nginx-ingress-controller started with --v=5).

I think we still need the --publish-service. The reason why that failed all of a sudden was https://github.com/kubernetes/kubernetes/issues/39420

I too am running the echo example and running into the same problem. Here's what I'm seeing (with hostnames changed). If I use the nginx-ingress-controller pod IP (10.36.0.1), I can use the example's rules to get to the echo container, as expected:

% curl 10.36.0.1/foo -H 'Host: ourhost.foo.com'
CLIENT VALUES:
client_address=10.36.0.1
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://ourhost.foo.com:8080/foo

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=close
host=ourhost.foo.com
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=ourhost.foo.com
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/foo
x-real-ip=10.32.0.1
x-scheme=http
BODY:
-no body in request-

But if I try the hostname, which is of course what we need, it doesn't work:

% curl ourhost.foo.com/foo -H 'Host: ourhost.foo.com'
curl: (7) Failed connect to ourhost.foo.com:80; Connection refused
% curl ourhost.foo.com/foo
curl: (7) Failed connect to ourhost.foo.com:80; Connection refused

So, it appears that ingress/nginx is working inside the cluster but is not available outside the cluster. How can I fix this?

Just checking in to note that I'm also seeing this. My situation is a little different: I had a working 1.7.2 bare-metal (kubelet) cluster, but wanted to update to 1.8.4 and have run into this problem now. Port 80 just isn't being used. It was driving me mad because the services had pods, but the ingresses just didn't seem to be able to find them. Then I tracked the problem down through RBAC issues with various components to kube-lego not being able to validate domains, which led me to realize that port 80 wasn't open for some reason.

I did find a way around it, though I haven't stopped to think through the repercussions just yet: I set hostNetwork: true on the pod in the ingress-nginx deployment, and am actually using LoadBalancer for the service. The service now looks like this:

Name:           ingress-nginx
Namespace:      ingress-nginx
Labels:         <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"ingress-nginx","namespace":"ingress-nginx"},"spec":{"ports":[{"name":"http","p...
Selector:       app=ingress-nginx
Type:           LoadBalancer
IP:         10.3.146.152
Port:           http    80/TCP
NodePort:       http    31293/TCP
Endpoints:      <server external IP>:80
Port:           https   443/TCP
NodePort:       https   30589/TCP
Endpoints:      <server external ip>:443
Session Affinity:   None
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason  Message
  --------- --------    -----   ----            -------------   --------    ------  -------
  4m        4m      1   service-controller          Normal      Type    NodePort -> LoadBalancer

Could anyone enlighten me as to whether what I just did is wrong, and if so, why? I think this would prevent you from using more than one ingress controller, but other than that, the ingress controller is theoretically supposed to sit between you and the big bad world, right? So I don't think I've made it any more insecure than normal.

[UPDATE] Everything's actually gone back to working for me: configuration is being loaded for TCP/UDP services (I used the deployment guide basically verbatim), and I don't see any new ports that I didn't ask to be opened.

Hi, I am also using the guide https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md, but NOT using type: LoadBalancer; instead I'm using type: NodePort, which doesn't work for me in Azure. The Service seems to be created with endpoints:

Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app=ingress-nginx
Annotations:              <none>
Selector:                 app=ingress-nginx
Type:                     NodePort
IP:                       10.3.209.239
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32287/TCP
Endpoints:                10.0.16.4:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30375/TCP
Endpoints:                10.0.16.4:443
Session Affinity:         None
External Traffic Policy:  Local
Events:                   <none>

I then created an LB within Azure, created a TCP probe for port 32287, and selected my worker nodes as the backend pool, but the probe won't pass. I also checked that all the necessary ports etc. are open within the ASG of the worker nodes.

One interesting observation: I SSHed into the master, then did curl http://node-ip:32287, where node-ip is the IP of the node where the nginx-controller pod is running (I only have one such pod), and it timed out. So I'm wondering if 80 is actually exposed on 32287?

The nginx controller pod seems to be up and running, and so does the default backend.

Creating a service of type: LoadBalancer works, but we want to assign multiple public IPs to the load balancer, and hence type: LoadBalancer won't do, as it only allows us to specify one IP via loadBalancerIP.
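
One thing worth noting about the setup above: the service shows External Traffic Policy: Local, which means only nodes actually running a controller pod answer on the NodePort, so probes against the other workers will fail by design (though that would not explain a timeout on the one node that does host the pod). A sketch of the NodePort service reconstructed from the describe output, with the policy made explicit:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  # With Local, nodes that do not host a controller pod drop NodePort
  # traffic instead of forwarding it, so an external LB health probe
  # only passes on nodes where the pod runs.
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 32287
    - name: https
      port: 443
      targetPort: https
      nodePort: 30375
  selector:
    app: ingress-nginx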

