OpenShift Origin: getsockopt: connection refused

Created on 14 Nov 2017  ·  9 Comments  ·  Source: openshift/origin


I installed OpenShift Origin on CentOS 7. Per the instructions at https://docs.openshift.org/latest/getting_started/administrators.html#downloading-the-binary, I downloaded and untarred openshift-origin-server-v3.6.1-008f2d5-linux-64bit.tar.gz, then ran:

sudo ./openshift start

I got errors. The startup log immediately filled with repeated etcd connection errors such as:

2017-11-13 23:03:52.316475 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }

The full startup log, which repeats these lines verbatim, is reproduced under Current Result below.
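In the full log these refusals stop once the embedded etcd comes up (it is published at 23:03:54, after which the master proceeds), so on this run they are transient startup noise; the run actually dies later on a kubelet cgroup error. If the refusals had persisted, a quick way to check whether etcd is listening (a sketch; it assumes ss and curl are installed and that openshift start generated the standard master.etcd-client.* certificates under openshift.local.config/master/):

# Is anything listening on the etcd client port?
ss -tlnp | grep 4001

# Query etcd's health endpoint over TLS with the generated client certificates.
curl --cacert openshift.local.config/master/ca.crt \
     --cert openshift.local.config/master/master.etcd-client.crt \
     --key openshift.local.config/master/master.etcd-client.key \
     https://10.104.6.127:4001/health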

Version

./openshift version
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

docker version
Client:
Version: 17.11.0-ce-rc3
API version: 1.34
Go version: go1.8.3
Git commit: 5b4af4f
Built: Wed Nov 8 03:04:32 2017
OS/Arch: linux/amd64

Server:
Version: 17.11.0-ce-rc3
API version: 1.34 (minimum version 1.12)
Go version: go1.8.3
Git commit: 5b4af4f
Built: Wed Nov 8 03:07:05 2017
OS/Arch: linux/amd64
Experimental: false

oc version
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Steps To Reproduce
  1. Download openshift-origin-server-v3.6.1-008f2d5-linux-64bit.tar.gz from https://github.com/openshift/origin/releases/download/v3.6.1/openshift-origin-server-v3.6.1-008f2d5-linux-64bit.tar.gz

  2. Untar the tarball and run sudo ./openshift start (both steps are combined into the script below).
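
The same steps as a script (a sketch; the tarball is assumed to unpack into a directory named after the archive):

# Step 1: download the v3.6.1 all-in-one server release.
wget https://github.com/openshift/origin/releases/download/v3.6.1/openshift-origin-server-v3.6.1-008f2d5-linux-64bit.tar.gz

# Step 2: unpack and start the all-in-one master and node as root.
tar -xzf openshift-origin-server-v3.6.1-008f2d5-linux-64bit.tar.gz
cd openshift-origin-server-v3.6.1-008f2d5-linux-64bit   # directory name is an assumption
sudo ./openshift start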

Current Result

W1113 23:03:52.193610 20449 start_master.go:297] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W1113 23:03:52.193687 20449 start_master.go:297] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
W1113 23:03:52.193700 20449 start_master.go:297] Warning: auditConfig.auditFilePath: Required value: audit can not be logged to a separate file, master start will continue.
I1113 23:03:52.217812 20449 plugins.go:101] No cloud provider specified.
E1113 23:03:52.262645 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
2017-11-13 23:03:52.316475 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:52.316562 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:52.316615 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:52.316666 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:52.343227 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:53.293852 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:53.316992 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
E1113 23:03:53.317079 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
2017-11-13 23:03:53.375081 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:53.375155 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
2017-11-13 23:03:53.404292 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
I1113 23:03:54.231744 20449 start_master.go:529] Starting master on 0.0.0.0:8443 (v3.6.1+008f2d5)
I1113 23:03:54.231778 20449 start_master.go:530] Public master address is https://10.104.6.127:8443
I1113 23:03:54.231813 20449 start_master.go:534] Using images from "openshift/origin-:v3.6.1"
2017-11-13 23:03:54.232457 I | embed: peerTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-11-13 23:03:54.233550 I | embed: listening for peers on https://0.0.0.0:7001
2017-11-13 23:03:54.233625 I | embed: listening for client requests on 0.0.0.0:4001
2017-11-13 23:03:54.236497 I | etcdserver: name = openshift.local
2017-11-13 23:03:54.236511 I | etcdserver: data dir = openshift.local.etcd
2017-11-13 23:03:54.236519 I | etcdserver: member dir = openshift.local.etcd/member
2017-11-13 23:03:54.236527 I | etcdserver: heartbeat = 100ms
2017-11-13 23:03:54.236533 I | etcdserver: election = 1000ms
2017-11-13 23:03:54.236540 I | etcdserver: snapshot count = 100000
2017-11-13 23:03:54.236555 I | etcdserver: advertise client URLs = https://10.104.6.127:4001
2017-11-13 23:03:54.265181 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.104.6.127:4001: getsockopt: connection refused"; Reconnecting to {10.104.6.127:4001 }
E1113 23:03:54.326868 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
2017-11-13 23:03:54.363817 I | etcdserver: restarting member a7340362e2996c30 in cluster cf86d7c1b2833ba9 at commit index 931
2017-11-13 23:03:54.363929 I | raft: a7340362e2996c30 became follower at term 9
2017-11-13 23:03:54.363953 I | raft: newRaft a7340362e2996c30 [peers: [], term: 9, commit: 931, applied: 0, lastindex: 931, lastterm: 9]
2017-11-13 23:03:54.411321 W | auth: simple token is not cryptographically signed
2017-11-13 23:03:54.460208 I | etcdserver: starting server... [version: 3.2.1, cluster version: to_be_decided]
2017-11-13 23:03:54.460275 I | embed: ClientTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-11-13 23:03:54.462946 I | etcdserver/membership: added member a7340362e2996c30 [https://10.104.6.127:7001] to cluster cf86d7c1b2833ba9
2017-11-13 23:03:54.463078 N | etcdserver/membership: set the initial cluster version to 3.2
2017-11-13 23:03:54.463129 I | etcdserver/api: enabled capabilities for version 3.2
2017-11-13 23:03:54.764982 I | raft: a7340362e2996c30 is starting a new election at term 9
2017-11-13 23:03:54.765114 I | raft: a7340362e2996c30 became candidate at term 10
2017-11-13 23:03:54.765164 I | raft: a7340362e2996c30 received MsgVoteResp from a7340362e2996c30 at term 10
2017-11-13 23:03:54.765207 I | raft: a7340362e2996c30 became leader at term 10
2017-11-13 23:03:54.765237 I | raft: raft.node: a7340362e2996c30 elected leader a7340362e2996c30 at term 10
2017-11-13 23:03:54.767049 I | etcdserver: published {Name:openshift.local ClientURLs:[https://10.104.6.127:4001]} to cluster cf86d7c1b2833ba9
I1113 23:03:54.767110 20449 run.go:85] Started etcd at 10.104.6.127:4001
2017-11-13 23:03:54.768732 I | embed: ready to serve client requests
2017-11-13 23:03:54.769652 I | embed: serving client requests on [::]:4001
2017-11-13 23:03:55.208908 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.211975 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.213349 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.213415 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.294722 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.294815 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.294881 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
2017-11-13 23:03:55.358526 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
I1113 23:03:55.359063 20449 run_components.go:91] Using default project node label selector:
I1113 23:03:55.366279 20449 clusterquotamapping.go:160] Starting ClusterQuotaMappingController controller
I1113 23:03:55.366753 20449 master.go:182] Starting OAuth2 API at /oauth
I1113 23:03:55.366774 20449 master.go:190] Starting Web Console /console/
E1113 23:03:55.403546 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
E1113 23:03:55.434765 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.PolicyBinding: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/policybindings?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:55.434957 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicyBinding: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicybindings?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:55.435033 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicy: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicies?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:55.435103 20449 reflector.go:201] github.com/openshift/origin/pkg/quota/generated/informers/internalversion/factory.go:45: Failed to list *quota.ClusterResourceQuota: Get https://10.104.6.127:8443/apis/quota.openshift.io/v1/clusterresourcequotas?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:55.543309 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.Policy: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/policies?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
W1113 23:03:56.176027 20449 genericapiserver.go:295] Skipping API autoscaling/v2alpha1 because it has no resources.
W1113 23:03:56.193262 20449 genericapiserver.go:295] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1113 23:03:56.649148 20449 openshift_apiserver.go:237] Starting Origin API at /apis/route.openshift.io/v1
I1113 23:03:56.651189 20449 openshift_apiserver.go:237] Starting Origin API at /apis/user.openshift.io/v1
I1113 23:03:56.653092 20449 openshift_apiserver.go:237] Starting Origin API at /apis/apps.openshift.io/v1
I1113 23:03:56.653971 20449 openshift_apiserver.go:237] Starting Origin API at /apis/project.openshift.io/v1
I1113 23:03:56.657034 20449 openshift_apiserver.go:237] Starting Origin API at /apis/build.openshift.io/v1
I1113 23:03:56.659593 20449 openshift_apiserver.go:237] Starting Origin API at /apis/network.openshift.io/v1
I1113 23:03:56.662442 20449 openshift_apiserver.go:237] Starting Origin API at /apis/image.openshift.io/v1
I1113 23:03:56.992130 20449 openshift_apiserver.go:237] Starting Origin API at /apis/authorization.openshift.io/v1
I1113 23:03:56.993317 20449 openshift_apiserver.go:237] Starting Origin API at /apis/template.openshift.io/v1
I1113 23:03:56.995555 20449 openshift_apiserver.go:237] Starting Origin API at /apis/oauth.openshift.io/v1
I1113 23:03:56.996826 20449 openshift_apiserver.go:237] Starting Origin API at /apis/security.openshift.io/v1
I1113 23:03:56.999200 20449 openshift_apiserver.go:237] Starting Origin API at /apis/quota.openshift.io/v1
E1113 23:03:57.281083 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicy: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicies?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:57.403549 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicyBinding: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicybindings?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:57.403642 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.PolicyBinding: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/policybindings?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:57.403699 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
E1113 23:03:57.839782 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.Policy: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/policies?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:57.839870 20449 reflector.go:201] github.com/openshift/origin/pkg/quota/generated/informers/internalversion/factory.go:45: Failed to list *quota.ClusterResourceQuota: Get https://10.104.6.127:8443/apis/quota.openshift.io/v1/clusterresourcequotas?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:57.856155 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
I1113 23:03:57.888585 20449 openshift_apiserver.go:243] Started Origin API at /oapi/v1
[restful] 2017/11/13 23:03:58 log.go:30: [restful/swagger] listing is available at https://10.104.6.127:8443/swaggerapi
[restful] 2017/11/13 23:03:58 log.go:30: [restful/swagger] https://10.104.6.127:8443/swaggerui/ is mapped to folder /swagger-ui/
E1113 23:03:58.392774 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicy: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicies?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
E1113 23:03:58.392843 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
E1113 23:03:58.454346 20449 reflector.go:201] github.com/openshift/origin/pkg/authorization/generated/informers/internalversion/factory.go:45: Failed to list *authorization.ClusterPolicyBinding: Get https://10.104.6.127:8443/apis/authorization.openshift.io/v1/clusterpolicybindings?resourceVersion=0: dial tcp 10.104.6.127:8443: getsockopt: connection refused
I1113 23:03:58.737898 20449 serve.go:86] Serving securely on 0.0.0.0:8443
W1113 23:03:58.852712 20449 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.104.6.127]
W1113 23:03:58.979612 20449 run_components.go:60] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
I1113 23:03:58.980549 20449 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1113 23:03:58.980568 20449 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1113 23:03:59.109940 20449 run_components.go:86] DNS listening at 0.0.0.0:8053
E1113 23:03:59.299474 20449 controllermanager.go:337] Server isn't healthy yet. Waiting a little while.
I1113 23:04:00.068404 20449 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
I1113 23:04:00.073519 20449 docker.go:384] Start docker client with request timeout=2m0s
W1113 23:04:00.119293 20449 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I1113 23:04:00.446202 20449 start_master.go:715] Started serviceaccount-token controller
I1113 23:04:00.592728 20449 node_config.go:367] DNS Bind to 10.104.6.127:53
I1113 23:04:00.592756 20449 start_node.go:345] Starting node sf-docker01.corp.wagerworks.com (v3.6.1+008f2d5)
I1113 23:04:00.594508 20449 start_node.go:354] Connecting to API server https://10.104.6.127:8443
I1113 23:04:00.594549 20449 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
I1113 23:04:00.594563 20449 docker.go:384] Start docker client with request timeout=2m0s
I1113 23:04:00.637320 20449 node.go:134] Connecting to Docker at unix:///var/run/docker.sock
I1113 23:04:00.757089 20449 feature_gate.go:144] feature gates: map[]
I1113 23:04:00.757550 20449 manager.go:143] cAdvisor running in container: "/"
I1113 23:04:00.864903 20449 node.go:348] Using iptables Proxier.
I1113 23:04:00.867872 20449 start_master.go:783] Started "openshift.io/image-trigger"
I1113 23:04:00.868004 20449 image_trigger_controller.go:214] Starting trigger controller
W1113 23:04:00.871559 20449 node.go:488] Failed to retrieve node info: nodes "sf-docker01.corp.wagerworks.com" not found
W1113 23:04:00.871656 20449 proxier.go:309] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W1113 23:04:00.871667 20449 proxier.go:314] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1113 23:04:00.871684 20449 node.go:380] Tearing down userspace rules.
I1113 23:04:00.906280 20449 start_master.go:783] Started "disruption"
I1113 23:04:00.907265 20449 disruption.go:269] Starting disruption controller
W1113 23:04:00.931865 20449 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I1113 23:04:01.026731 20449 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:xfs blockSize:0} /dev/mapper/centos-root:{mountpoint:/var/lib/docker/devicemapper major:253 minor:0 fsType:xfs blockSize:0}]
I1113 23:04:01.029504 20449 manager.go:198] Machine: {NumCores:1 CpuFrequency:2533423 MemoryCapacity:16658931712 MachineID:197dcaf983ac43d49bc33a715706d364 SystemUUID:423C41D5-41C2-B4C5-FB04-E3E643ABDDC6 BootID:5833a53a-a76b-4cae-9033-f444ec05deef Filesystems:[{Device:/dev/mapper/centos-root DeviceMajor:253 DeviceMinor:0 Capacity:47724642304 Type:vfs Inodes:29083472 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:520794112 Type:vfs Inodes:512000 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:47747956736 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:5368709120 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:107374182400 Scheduler:none} 2:0:{Name:fd0 Major:2 Minor:0 Size:4096 Scheduler:deadline} 8:0:{Name:sda Major:8 Minor:0 Size:53687091200 Scheduler:deadline}] NetworkDevices:[{Name:ens160 MacAddress:00:50:56:bc:74:5e Speed:10000 Mtu:1500} {Name:virbr0 MacAddress:52:54:00:4c:ea:cb Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:4c:ea:cb Speed:0 Mtu:1500} {Name:virbr1 MacAddress:52:54:00:fa:ae:1e Speed:0 Mtu:1500} {Name:virbr1-nic MacAddress:52:54:00:fa:ae:1e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:17179402240 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:12582912 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
W1113 23:04:01.033371 20449 shared_informer.go:298] resyncPeriod 120000000000 is smaller than resyncCheckPeriod 600000000000 and the informer has already started. Changing it to 600000000000
I1113 23:04:01.060034 20449 manager.go:204] Version: {KernelVersion:3.10.0-693.5.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.11.0-ce-rc3 DockerAPIVersion:1.34 CadvisorVersion: CadvisorRevision:}
I1113 23:04:01.061109 20449 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I1113 23:04:01.099524 20449 start_master.go:783] Started "openshift.io/service-serving-cert"
I1113 23:04:01.101041 20449 start_master.go:783] Started "openshift.io/origin-to-rbac"
I1113 23:04:01.101217 20449 generic.go:33] Starting OriginRoleBindingToRBACRoleBindingController controller
I1113 23:04:01.101535 20449 generic.go:33] Starting OriginClusterRoleToRBACClusterRoleController controller
I1113 23:04:01.101577 20449 generic.go:33] Starting OriginClusterRoleBindingToRBACClusterRoleBindingController controller
I1113 23:04:01.101613 20449 generic.go:33] Starting OriginRoleToRBACRoleController controller
W1113 23:04:01.162800 20449 container_manager_linux.go:217] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I1113 23:04:01.163107 20449 container_manager_linux.go:244] container manager verified user specified cgroup-root exists: /
I1113 23:04:01.163136 20449 container_manager_linux.go:249] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[]}
I1113 23:04:01.163380 20449 kubelet.go:265] Watching apiserver
I1113 23:04:01.234245 20449 start_master.go:783] Started "horizontalpodautoscaling"
I1113 23:04:01.234420 20449 horizontal.go:140] Starting HPA Controller
E1113 23:04:01.283892 20449 util.go:45] Metric for serviceaccount_controller already registered
I1113 23:04:01.284008 20449 start_master.go:783] Started "serviceaccount"
I1113 23:04:01.284192 20449 serviceaccounts_controller.go:122] Starting ServiceAccount controller
W1113 23:04:01.367638 20449 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I1113 23:04:01.367683 20449 kubelet.go:494] Hairpin mode set to "hairpin-veth"
I1113 23:04:01.447781 20449 start_master.go:783] Started "openshift.io/build-config-change"
W1113 23:04:01.447819 20449 start_master.go:765] "ttl" is skipped
W1113 23:04:01.463162 20449 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I1113 23:04:01.508582 20449 start_master.go:783] Started "replicationcontroller"
W1113 23:04:01.508622 20449 start_master.go:765] "bootstrapsigner" is skipped
I1113 23:04:01.508744 20449 replication_controller.go:150] Starting RC Manager
I1113 23:04:01.625230 20449 start_master.go:783] Started "openshift.io/serviceaccount-pull-secrets"
I1113 23:04:02.039395 20449 docker_service.go:184] Docker cri networking managed by kubernetes.io/no-op
F1113 23:04:02.137046 20449 node.go:281] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
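
The run aborts at the last line above: the kubelet is configured for the systemd cgroup driver while Docker is running with cgroupfs. One way to confirm which driver the Docker daemon reports (a sketch; docker info --format is available in the 17.11.0-ce-rc3 build shown under Version):

# Print the Docker daemon's cgroup driver. "cgroupfs" here, combined with a
# kubelet expecting "systemd", reproduces the fatal node.go:281 error above.
docker info --format '{{.CgroupDriver}}'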

Expected Result

The all-in-one master and node start without errors.

Labels: component/install, kind/question, lifecycle/rotten, priority/P3

All 9 comments

@garyyang6 Maybe you need to use another install method to bring the cluster up.

https://github.com/openshift/openshift-ansible is an alternative.
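
If "cluster up" refers to the oc client's built-in bootstrap, a minimal sketch (the insecure-registry prerequisite comes from the oc cluster up documentation, not from this thread):

# Bootstrap an all-in-one cluster in containers. Requires the Docker daemon
# to trust 172.30.0.0/16 as an insecure registry (per the oc cluster up docs).
oc cluster up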

@xiaods I am using the OpenShift all-in-one package. Are you suggesting that there are bugs in the package and that the installation instructions are incorrect? If that is the case, can you please file a bug? Thanks.

Seeing the same. Not a great out-of-the-box experience; Minikube just worked.

The 3.7 distribution works per the instructions.

@garyyang6
The last line of your log looks similar to another issue I hit when installing OpenShift Origin; see the link below.

https://groups.google.com/forum/#!topic/openshift/SQ3Mjhb8fIo

Hope it helps.
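
The linked thread points at the same cgroup-driver mismatch that ends the log above (the fatal node.go:281 line). A minimal sketch of aligning Docker with the kubelet's systemd driver, assuming the daemon reads /etc/docker/daemon.json:

# Switch the Docker daemon to the systemd cgroup driver to match the kubelet.
# Merge this with any existing daemon.json rather than overwriting it.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# Verify before retrying openshift start; it should now print "systemd".
docker info --format '{{.CgroupDriver}}'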

@xiaods I'm getting a similar error trying to install OpenShift Origin v3.7 with openshift-ansible. It fails on the [openshift_master : restart master api] task with the error:

Unable to restart service origin-master-api: Job for origin-master-api.service failed because the control process exited with error code. See "systemctl status origin-master-api.service" and "journalctl -xe" for details.

sudo systemctl status origin-master-api.service -l
● origin-master-api.service - Atomic OpenShift Master API
   Loaded: loaded (/usr/lib/systemd/system/origin-master-api.service; enabled; vendor preset: disabled)
   Active: activating (start) since Thu 2018-03-15 13:40:07 CET; 14s ago
     Docs: https://github.com/openshift/origin
 Main PID: 80662 (openshift)
   CGroup: /system.slice/origin-master-api.service
           └─80662 /usr/bin/openshift start master api --config=/etc/origin/master/master-config.yaml --loglevel=2 --listen=https://0.0.0.0:8443 --master=https://t-tmp-pacman-master01.yousee.idk:8443

Mar 15 13:40:10 t-tmp-pacman-master01.yousee.idk openshift[80662]: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:2379: getsockopt: connection refused"; Reconnecting to {t-tmp-pacman-master01.yousee.idk:2379 <nil>}
Mar 15 13:40:10 t-

This is happening on a VMware ESXi server running CentOS 7.4. I have tested the same on Vagrant VMs and on Amazon AWS and it works there, so I'm stumped as to why it's only happening on the VMware VMs. Is it something network-related?
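
A few checks that might narrow this down on the VMware host (a sketch; the service name and config path assume a standard RPM-based openshift-ansible install):

# Is etcd running and listening on the client port the master dials (2379)?
sudo systemctl status etcd
ss -tlnp | grep 2379

# Which etcd endpoints is the master configured to use?
grep -A5 etcdClientInfo /etc/origin/master/master-config.yaml

# Probe the port directly. A TLS/certificate error still means something is
# listening; "connection refused" means nothing is bound to 2379.
curl -k https://127.0.0.1:2379/health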

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
