Origin: cluster up: router won't start on Ubuntu 16.10 (Yakkety)

Created on 2 Nov 2016 · 5 comments · Source: openshift/origin

When trying to start a local OpenShift cluster with oc cluster up on Ubuntu 16.10, the router fails to start because it cannot bind to ports 80 and 443. No process is using those ports, so it is probably a permissions issue.

Version
oc v1.3.1
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.0.37:8443
openshift v1.3.1
kubernetes v1.3.0+52492b4
Steps To Reproduce
  1. oc cluster up
  2. oc login -u system:admin
  3. oc project default
  4. oc get pods

    NAME                      READY     STATUS    RESTARTS   AGE
    docker-registry-1-704ur   1/1       Running   0          2m
    router-1-deploy           1/1       Running   0          2m
    router-1-sj4yo            0/1       Running   0          2m
    
  5. oc logs router-1-sj4yo

    I1102 12:26:52.247484       1 router.go:161] Router is including routes in all namespaces
    E1102 12:26:52.593041       1 ratelimiter.go:52] error reloading router: exit status 1
    [ALERT] 306/122652 (26) : Starting frontend public: cannot bind socket [0.0.0.0:80]
    [ALERT] 306/122652 (26) : Starting frontend public_ssl: cannot bind socket [0.0.0.0:443]
    
Current Result

The OpenShift cluster itself comes up as expected; only the router fails to start.

Expected Result

Router should start and accept requests on ports 80 and 443.

Additional Information

I was able to reproduce this in a clean VM.

No process is listening on port 80 or 443.

$ netstat -tulp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:hostmon         0.0.0.0:*               LISTEN      -                   
tcp        0      0 localhost:domain        0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN      -                   
tcp6       0      0 [::]:hostmon            [::]:*                  LISTEN      -                   
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN      -                   
udp        0      0 localhost:domain        0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:bootpc          0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:hostmon         0.0.0.0:*                           -                   
udp6       0      0 [::]:hostmon            [::]:*

What additional information can I provide?

component/networking kind/bug lifecycle/rotten priority/P2

All 5 comments

I cannot reproduce this with the previous Ubuntu version, 16.04 (Xenial).

You should make sure that you are using a Docker storage driver which supports capabilities, e.g. overlay2 or devicemapper. On Ubuntu, aufs is the default, and it does not support capabilities.

The router Pod must have this capability available, otherwise it cannot bind to privileged ports:

With aufs: :-1:

sh-4.2$ getcap /usr/sbin/haproxy
Failed to get capabilities of file `/usr/sbin/haproxy' (Operation not supported)

With overlay2: :+1:

sh-4.2$ getcap /usr/sbin/haproxy
/usr/sbin/haproxy = cap_net_bind_service+ep
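For anyone hitting this: a minimal sketch of checking and switching the driver. Assumptions not stated in the thread: a Docker version that reads /etc/docker/daemon.json (1.12+) and a 4.x kernel, which overlay2 requires.

```shell
# Check the active storage driver -- on Ubuntu it will show aufs by default:
#   docker info | grep 'Storage Driver'

# Switch to overlay2 via the standard daemon config file:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```

Note that switching the storage driver hides existing images and containers from the daemon, so you will need to re-pull or rebuild afterwards.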

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
