Describe your issue here
When trying to bring up a cluster on Amazon EC2, it continually fails the health check. If you perform the check manually by taking the node's IP address and building the URL yourself, everything is fine (it returns ok).
It seems to be a timing issue, but I have tried different zones (eu-central, eu-west), renamed the machines in the cluster each time (there is a separate issue with keypairs already existing if you reuse the same name), and deleted the security group. I've also changed the machine config to something more powerful in case a t2.micro is too small for etcd or the controller.
Ironically, the first time I used Rancher to do this it worked perfectly. The only thing that changed since then is that I put an nginx reverse proxy in front of the Docker image to use our own SSL certificate and domain name. I will revert this as well, but I can no longer tear down and recreate.
Update: this problem is related to the reverse proxy. If you deactivate nginx and fall back to the Docker container directly (with warnings about the SSL certificate), it provisions OK.
| Useful | Info |
| :-- | :-- |
|Versions|Rancher v2.0.0-beta2 UI: v2.0.34 |
|Access|local admin|
|Route|global-admin.clusters.index|
Do I understand correctly that it all works, except when you put nginx in front? Can you cross reference the nginx config with https://gist.github.com/superseb/6a7bfa883a1b93e4dfa5e357863f5bf8?
Yep, all works except with the nginx config. The config you reference no longer works with certain nginx installations, so I had to modify it (there have been other comments to this effect on the web). When running on Ubuntu 16.04 with nginx/1.10.3 (Ubuntu), you have to wrap the config above in an http { } block (for example). So the complete config is:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream target {
        server localhost:8080;
    }

    server {
        listen 443 ssl spdy;
        server_name rancher-staging.xxxx.fr;

        ssl_certificate /home/ubuntu/tls.crt;
        ssl_certificate_key /home/ubuntu/tls.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://target;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            # Allows the exec shell window to stay open for up to 15 minutes;
            # without this the default is 1 minute and it closes automatically.
            proxy_read_timeout 900s;
        }
    }

    server {
        listen 80;
        server_name rancher-staging.xxxx.fr;
        return 301 https://$server_name$request_uri;
    }
}
Did you test if the cluster is reachable through the proxy? Do you get any logging from nginx? You are missing the map containing $http_upgrade in your posted config:
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}
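For reference, this map is normally paired with a variable Connection header in the proxied location block, replacing the hard-coded "Upgrade" value, so that ordinary HTTP requests don't carry an Upgrade connection header upstream. A minimal sketch:

```nginx
# Inside the location / block, use the mapped variable instead of the
# hard-coded "Upgrade" value:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
# $connection_upgrade resolves to "Upgrade" when the client requested a
# WebSocket upgrade, and to "close" otherwise (per the map above).
proxy_set_header Connection $connection_upgrade;
```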
I can follow up on the logging; it was only this morning, out of frustration, that I removed it in order to see if it was the issue. I'll test this out on a fresh install.
Here's the nginx location block I've been using to put my Rancher behind my own cert. It seems to be working well enough.
Note the proxy_pass to https and the proxy_ssl_session_reuse off;
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://ranchboss;
    proxy_ssl_session_reuse off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 900s;
}
EDIT: I should mention that using kubectl with this config requires me to remove the certificate-authority-data: entry from the Rancher-provided kubeconfig, or I see unknown CA errors.
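One way to strip that entry is a one-line sed over the kubeconfig. This is a sketch: the path and the file contents below are illustrative stand-ins, not the real Rancher-provided file.

```shell
# Illustrative: strip the embedded CA from a Rancher-provided kubeconfig so
# kubectl validates against the proxy's own certificate chain instead.
# The path and contents here are stand-ins for the real downloaded file.
KUBECONFIG_FILE=./rancher-kubeconfig.yaml

cat > "$KUBECONFIG_FILE" <<'EOF'
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://rancher.example.com/k8s/clusters/c-abcde
  name: mycluster
EOF

# Delete the certificate-authority-data line (a .bak backup is kept).
sed -i.bak '/certificate-authority-data:/d' "$KUBECONFIG_FILE"
```

After this, kubectl falls back to the system trust store when verifying the proxy's certificate.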
OK, still not working: the nginx config is now:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream target {
        server localhost:8080;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name rancher.xxxx.fr;

        ssl_certificate ./tls.crt;
        ssl_certificate_key ./tls.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://target;
            proxy_ssl_session_reuse off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            # Allows the exec shell window to stay open for up to 15 minutes;
            # without this the default is 1 minute and it closes automatically.
            proxy_read_timeout 900s;
        }
    }

    server {
        listen 80;
        server_name rancher.xxx.fr;
        return 301 https://$server_name$request_uri;
    }
}
Dockerized Rancher is started with:
sudo docker run -d --restart=unless-stopped -p 8080:80 -p 8443:443 rancher/server:preview
The Rancher server expects connections via https (AFAIK), but you're calling the upstream Rancher server over http:
proxy_pass http://target;
While I'm calling it over https
proxy_pass https://ranchboss;
Changed to https:
proxy_pass https://target;
And therefore changed the upstream target as well:
server localhost:8443;
So that the port matches the protocol. Same result
I am running the nginx proxy on a separate host, and my upstream is this
upstream ranchboss {
    server 10.1.2.133:443;
}
and I've started the rancher server as
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:v2.0.0-beta3
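Putting the pieces from this thread together, a consolidated sketch of the working setup might look like the following. The hostnames, certificate paths, and upstream address are illustrative, not taken from anyone's actual deployment:

```nginx
# Consolidated sketch (inside the http context). Names and paths are examples.
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

upstream rancher {
    # Rancher server published on 443 (docker run ... -p 443:443)
    server 10.1.2.133:443;
}

server {
    listen 443 ssl http2;
    server_name rancher.example.com;

    ssl_certificate     /etc/nginx/tls.crt;
    ssl_certificate_key /etc/nginx/tls.key;

    location / {
        # Talk https to the upstream so the protocol matches the published port
        proxy_pass https://rancher;
        proxy_ssl_session_reuse off;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 900s;    # keep exec shell sessions open
    }
}
```

The key differences from the failing configs earlier in the thread are the https upstream and the mapped Connection header.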
I reread your OP and should mention that I'm running a custom cluster, with a few on-site ESXi hosts serving the cluster nodes and Rancher (rather than EC2), but the nginx frontend is running in another DC, 50-100 ms away. The Rancher server and cluster nodes are on the same physical network.
Well - hmm. I know I had a beta3 cluster running yesterday, but after two attempts I am unable to get one running today (same config, fresh hosts), with similar dialer errors.
I am able to use beta2 without a problem. I was able to upgrade to beta3 once I had a fully active beta2 cluster. I did see some dialer errors during the upgrade, which seemed to hang on those errors. I restarted rancher-server and the upgrade completed successfully.
I have tried beta2 and beta3 with the EC2 node drivers and the problem is present.
When I make a direct request with GET https://NODE1:6443/healthz, it responds: ok
I am also getting the same issue with the GA release of v2.0.0
Any resolution to this? I had a previous setup in rc1 which did work.
Same here. After reusing a node.
Clean script:
#!/bin/sh
docker rm -f $(docker ps -qa)
docker volume rm $(docker volume ls -q)
cleanupdirs="/var/lib/etcd* /etc/kubernete* /etc/cni* /opt/cni* /var/lib/cni* /var/run/calico* /var/lib/kubelet*"
for dir in $cleanupdirs; do
    echo "Removing $dir"
    rm -Rf $dir
done
Versions
[root@AL-LINUX00994 ~]# docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar 7 17:06:16 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar 7 17:06:16 2018
 OS/Arch:         linux/amd64
 Experimental:    false
[root@AL-LINUX00994 ~]# uname -a
Linux AL-LINUX00994 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@AL-LINUX00994 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Server
docker run -d --name=rancher --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.0.0
Rancher log
2018-05-07T18:14:07.847073000Z 2018/05/07 18:14:07 [INFO] Listening on /tmp/log.sock
2018-05-07T18:14:07.847481000Z 2018/05/07 18:14:07 [INFO] [certificates] Generating CA kubernetes certificates
2018-05-07T18:14:08.398869000Z 2018/05/07 18:14:08 [INFO] [certificates] Generating Kubernetes API server certificates
2018-05-07T18:14:09.024915000Z 2018/05/07 18:14:09 [INFO] [certificates] Generating Kube Controller certificates
2018-05-07T18:14:09.624832000Z 2018/05/07 18:14:09 [INFO] [certificates] Generating Kube Scheduler certificates
2018-05-07T18:14:09.754943000Z 2018/05/07 18:14:09 [INFO] [certificates] Generating Kube Proxy certificates
2018-05-07T18:14:09.994844000Z 2018/05/07 18:14:09 [INFO] [certificates] Generating Node certificate
2018-05-07T18:14:10.306797000Z 2018/05/07 18:14:10 [INFO] [certificates] Generating admin certificates and kubeconfig
2018-05-07T18:14:10.552081000Z 2018/05/07 18:14:10 [INFO] [certificates] Generating etcd-127.0.0.1 certificate and key
2018-05-07T18:14:11.335214000Z 2018/05/07 18:14:11 [INFO] Running kube-apiserver --cloud-provider= --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --insecure-port=0 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --advertise-address=10.43.0.1 --secure-port=6443 --storage-backend=etcd3 --service-cluster-ip-range=10.43.0.0/16 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --admission-control=ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --insecure-bind-address=127.0.0.1 --allow-privileged=true --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-servers=https://127.0.0.1:2379 --etcd-prefix=/registry -v=1 --logtostderr=false --alsologtostderr=false
2018-05-07T18:14:11.337299000Z 2018/05/07 18:14:11 [INFO] Activating driver aks
2018-05-07T18:14:11.337640000Z 2018/05/07 18:14:11 [INFO] Running etcd --peer-client-cert-auth --client-cert-auth --initial-cluster-token=etcd-cluster-1 --name=etcd-master --advertise-client-urls=https://127.0.0.1:2379,https://127.0.0.1:4001 --initial-advertise-peer-urls=https://127.0.0.1:2380 --initial-cluster=etcd-master=https://127.0.0.1:2380 --initial-cluster-state=new --cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --data-dir=/var/lib/rancher/etcd --listen-client-urls=https://0.0.0.0:2379 --key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --listen-peer-urls=https://0.0.0.0:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem
2018-05-07T18:14:11.339020000Z 2018/05/07 18:14:11 [INFO] Activating driver aks done
2018-05-07T18:14:11.339412000Z 2018/05/07 18:14:11 [INFO] Activating driver eks
2018-05-07T18:14:11.339825000Z 2018/05/07 18:14:11 [INFO] Activating driver eks done
2018-05-07T18:14:11.340283000Z 2018/05/07 18:14:11 [INFO] Activating driver import
2018-05-07T18:14:11.340710000Z 2018/05/07 18:14:11 [INFO] Activating driver import done
2018-05-07T18:14:11.341087000Z 2018/05/07 18:14:11 [INFO] Activating driver rke
2018-05-07T18:14:11.341434000Z 2018/05/07 18:14:11 [INFO] Activating driver rke done
2018-05-07T18:14:11.341844000Z 2018/05/07 18:14:11 [INFO] Activating driver gke
2018-05-07T18:14:11.342262000Z 2018/05/07 18:14:11 [INFO] Activating driver gke done
2018-05-07T18:14:11.346004000Z 2018/05/07 18:14:11 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:11.346434000Z 2018/05/07 18:14:11 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:12.335703000Z 2018/05/07 18:14:12 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:13.340408000Z 2018/05/07 18:14:13 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:13.340843000Z 2018/05/07 18:14:13 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:14.336340000Z 2018/05/07 18:14:14 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:15.340939000Z 2018/05/07 18:14:15 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:15.341320000Z 2018/05/07 18:14:15 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:16.337071000Z 2018/05/07 18:14:16 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-07T18:14:17.360690000Z 2018/05/07 18:14:17 [INFO] Running kube-controller-manager --cluster-cidr=10.42.0.0/16 --address=0.0.0.0 --leader-elect=true --node-monitor-grace-period=40s --v=2 --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --service-account-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-untagged-cloud=true --configure-cloud-routes=false --allocate-node-cidrs=true --cloud-provider= --use-service-account-credentials=true -v=1 --logtostderr=false --alsologtostderr=false
2018-05-07T18:14:17.372387000Z 2018/05/07 18:14:17 [INFO] Creating CRD authconfigs.management.cattle.io
2018-05-07T18:14:17.375823000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusteralerts.management.cattle.io
2018-05-07T18:14:17.379875000Z 2018/05/07 18:14:17 [INFO] Creating CRD projectalerts.management.cattle.io
2018-05-07T18:14:17.382194000Z 2018/05/07 18:14:17 [INFO] Creating CRD notifiers.management.cattle.io
2018-05-07T18:14:17.384329000Z 2018/05/07 18:14:17 [INFO] Creating CRD catalogs.management.cattle.io
2018-05-07T18:14:17.387001000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusterevents.management.cattle.io
2018-05-07T18:14:17.388314000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusterloggings.management.cattle.io
2018-05-07T18:14:17.392656000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusterregistrationtokens.management.cattle.io
2018-05-07T18:14:17.396352000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
2018-05-07T18:14:17.399430000Z 2018/05/07 18:14:17 [INFO] Creating CRD clusters.management.cattle.io
2018-05-07T18:14:17.539078000Z 2018/05/07 18:14:17 [INFO] Creating CRD clustercomposeconfigs.management.cattle.io
2018-05-07T18:14:17.741318000Z 2018/05/07 18:14:17 [INFO] Creating CRD globalcomposeconfigs.management.cattle.io
2018-05-07T18:14:17.939395000Z 2018/05/07 18:14:17 [INFO] Creating CRD dynamicschemas.management.cattle.io
2018-05-07T18:14:18.138490000Z 2018/05/07 18:14:18 [INFO] Creating CRD globalrolebindings.management.cattle.io
2018-05-07T18:14:18.339857000Z 2018/05/07 18:14:18 [INFO] Creating CRD globalroles.management.cattle.io
2018-05-07T18:14:18.340335000Z 2018/05/07 18:14:18 [INFO] Running kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --v=2 --address=0.0.0.0 -v=1 --logtostderr=false --alsologtostderr=false
2018-05-07T18:14:18.391204000Z 2018/05/07 18:14:18 [INFO] Creating CRD apps.project.cattle.io
2018-05-07T18:14:18.538912000Z 2018/05/07 18:14:18 [INFO] Creating CRD groupmembers.management.cattle.io
2018-05-07T18:14:18.739595000Z 2018/05/07 18:14:18 [INFO] Creating CRD apprevisions.project.cattle.io
2018-05-07T18:14:18.939687000Z 2018/05/07 18:14:18 [INFO] Creating CRD groups.management.cattle.io
2018-05-07T18:14:19.139582000Z 2018/05/07 18:14:19 [INFO] Creating CRD namespacecomposeconfigs.project.cattle.io
2018-05-07T18:14:19.339819000Z 2018/05/07 18:14:19 [INFO] Creating CRD listenconfigs.management.cattle.io
2018-05-07T18:14:19.739641000Z 2018/05/07 18:14:19 [INFO] Creating CRD nodes.management.cattle.io
2018-05-07T18:14:20.138755000Z 2018/05/07 18:14:20 [INFO] Creating CRD nodepools.management.cattle.io
2018-05-07T18:14:20.339292000Z 2018/05/07 18:14:20 [INFO] Creating CRD nodedrivers.management.cattle.io
2018-05-07T18:14:20.538987000Z 2018/05/07 18:14:20 [INFO] Creating CRD nodetemplates.management.cattle.io
2018-05-07T18:14:20.739149000Z 2018/05/07 18:14:20 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2018-05-07T18:14:20.938887000Z 2018/05/07 18:14:20 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2018-05-07T18:14:21.139271000Z 2018/05/07 18:14:21 [INFO] Creating CRD preferences.management.cattle.io
2018-05-07T18:14:21.338934000Z 2018/05/07 18:14:21 [INFO] Creating CRD projectloggings.management.cattle.io
2018-05-07T18:14:21.539674000Z 2018/05/07 18:14:21 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2018-05-07T18:14:21.739531000Z 2018/05/07 18:14:21 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2018-05-07T18:14:21.939318000Z 2018/05/07 18:14:21 [INFO] Creating CRD projects.management.cattle.io
2018-05-07T18:14:22.139997000Z 2018/05/07 18:14:22 [INFO] Creating CRD roletemplates.management.cattle.io
2018-05-07T18:14:22.339140000Z 2018/05/07 18:14:22 [INFO] Creating CRD settings.management.cattle.io
2018-05-07T18:14:22.539620000Z 2018/05/07 18:14:22 [INFO] Creating CRD templates.management.cattle.io
2018-05-07T18:14:22.739368000Z 2018/05/07 18:14:22 [INFO] Creating CRD templateversions.management.cattle.io
2018-05-07T18:14:22.939191000Z 2018/05/07 18:14:22 [INFO] Creating CRD templatecontents.management.cattle.io
2018-05-07T18:14:23.139292000Z 2018/05/07 18:14:23 [INFO] Creating CRD clusterpipelines.management.cattle.io
2018-05-07T18:14:23.338974000Z 2018/05/07 18:14:23 [INFO] Creating CRD pipelines.management.cattle.io
2018-05-07T18:14:23.538870000Z 2018/05/07 18:14:23 [INFO] Creating CRD pipelineexecutions.management.cattle.io
2018-05-07T18:14:23.739398000Z 2018/05/07 18:14:23 [INFO] Creating CRD pipelineexecutionlogs.management.cattle.io
2018-05-07T18:14:23.939338000Z 2018/05/07 18:14:23 [INFO] Creating CRD sourcecodecredentials.management.cattle.io
2018-05-07T18:14:24.139939000Z 2018/05/07 18:14:24 [INFO] Creating CRD sourcecoderepositories.management.cattle.io
2018-05-07T18:14:24.339410000Z 2018/05/07 18:14:24 [INFO] Creating CRD tokens.management.cattle.io
2018-05-07T18:14:24.539848000Z 2018/05/07 18:14:24 [INFO] Creating CRD users.management.cattle.io
2018-05-07T18:14:25.079001000Z 2018/05/07 18:14:25 [INFO] Starting API controllers
2018-05-07T18:14:25.079432000Z 2018/05/07 18:14:25 [INFO] Syncing SecretController Controller
2018-05-07T18:14:25.083529000Z 2018/05/07 18:14:25 [INFO] Syncing ListenConfigController Controller
2018-05-07T18:14:25.083949000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterPipelineController Controller
2018-05-07T18:14:25.084372000Z 2018/05/07 18:14:25 [INFO] Syncing GroupController Controller
2018-05-07T18:14:25.084783000Z 2018/05/07 18:14:25 [INFO] Syncing AuthConfigController Controller
2018-05-07T18:14:25.085185000Z 2018/05/07 18:14:25 [INFO] Syncing SourceCodeCredentialController Controller
2018-05-07T18:14:25.085568000Z 2018/05/07 18:14:25 [INFO] Syncing DynamicSchemaController Controller
2018-05-07T18:14:25.086022000Z 2018/05/07 18:14:25 [INFO] Syncing SourceCodeRepositoryController Controller
2018-05-07T18:14:25.086422000Z 2018/05/07 18:14:25 [INFO] Syncing NodeDriverController Controller
2018-05-07T18:14:25.086798000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectController Controller
2018-05-07T18:14:25.087214000Z 2018/05/07 18:14:25 [INFO] Syncing GroupMemberController Controller
2018-05-07T18:14:25.087573000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectRoleTemplateBindingController Controller
2018-05-07T18:14:25.087975000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRegistrationTokenController Controller
2018-05-07T18:14:25.088421000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterController Controller
2018-05-07T18:14:25.088847000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleBindingController Controller
2018-05-07T18:14:25.090931000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller
2018-05-07T18:14:25.091455000Z 2018/05/07 18:14:25 [INFO] Syncing RoleBindingController Controller
2018-05-07T18:14:25.091913000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleTemplateBindingController Controller
2018-05-07T18:14:25.092419000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller
2018-05-07T18:14:25.092810000Z 2018/05/07 18:14:25 [INFO] Syncing UserController Controller
2018-05-07T18:14:25.093345000Z 2018/05/07 18:14:25 [INFO] Syncing SettingController Controller
2018-05-07T18:14:25.093819000Z 2018/05/07 18:14:25 [INFO] Syncing TokenController Controller
2018-05-07T18:14:25.094293000Z 2018/05/07 18:14:25 [INFO] Syncing RoleController Controller
2018-05-07T18:14:25.100604000Z 2018/05/07 18:14:25 [INFO] Syncing PipelineController Controller
2018-05-07T18:14:25.100975000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleController Controller
2018-05-07T18:14:25.205526000Z 2018/05/07 18:14:25 [INFO] Syncing SecretController Controller Done
2018-05-07T18:14:25.205888000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRegistrationTokenController Controller Done
2018-05-07T18:14:25.301503000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleController Controller Done
2018-05-07T18:14:25.301882000Z 2018/05/07 18:14:25 [INFO] Syncing ListenConfigController Controller Done
2018-05-07T18:14:25.302379000Z 2018/05/07 18:14:25 [INFO] Syncing GroupController Controller Done
2018-05-07T18:14:25.302744000Z 2018/05/07 18:14:25 [INFO] Syncing RoleBindingController Controller Done
2018-05-07T18:14:25.303076000Z 2018/05/07 18:14:25 [INFO] Syncing UserController Controller Done
2018-05-07T18:14:25.303444000Z 2018/05/07 18:14:25 [INFO] Syncing SettingController Controller Done
2018-05-07T18:14:25.303785000Z 2018/05/07 18:14:25 [INFO] Syncing TokenController Controller Done
2018-05-07T18:14:25.304189000Z 2018/05/07 18:14:25 [INFO] Syncing RoleController Controller Done
2018-05-07T18:14:25.379558000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterPipelineController Controller Done
2018-05-07T18:14:25.380027000Z 2018/05/07 18:14:25 [INFO] Syncing AuthConfigController Controller Done
2018-05-07T18:14:25.380470000Z 2018/05/07 18:14:25 [INFO] Syncing SourceCodeCredentialController Controller Done
2018-05-07T18:14:25.380878000Z 2018/05/07 18:14:25 [INFO] Syncing GroupMemberController Controller Done
2018-05-07T18:14:25.381314000Z 2018/05/07 18:14:25 [INFO] Syncing DynamicSchemaController Controller Done
2018-05-07T18:14:25.381729000Z 2018/05/07 18:14:25 [INFO] Syncing NodeDriverController Controller Done
2018-05-07T18:14:25.382144000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectController Controller Done
2018-05-07T18:14:25.382621000Z 2018/05/07 18:14:25 [INFO] Syncing SourceCodeRepositoryController Controller Done
2018-05-07T18:14:25.383022000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectRoleTemplateBindingController Controller Done
2018-05-07T18:14:25.383422000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterController Controller Done
2018-05-07T18:14:25.383831000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller Done
2018-05-07T18:14:25.384331000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleBindingController Controller Done
2018-05-07T18:14:25.384765000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller Done
2018-05-07T18:14:25.385228000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleTemplateBindingController Controller Done
2018-05-07T18:14:25.385655000Z 2018/05/07 18:14:25 [INFO] Syncing PipelineController Controller Done
2018-05-07T18:14:25.402022000Z 2018/05/07 18:14:25 [INFO] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"cattle-controllers", UID:"7965ca32-5222-11e8-b4d1-0242ac110002", APIVersion:"v1", ResourceVersion:"256", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 3fdb0b0fffd7 became leader
2018-05-07T18:14:25.402541000Z 2018/05/07 18:14:25 [INFO] Starting catalog controller
2018-05-07T18:14:25.406851000Z 2018/05/07 18:14:25 [INFO] Starting management controllers
2018-05-07T18:14:25.412751000Z 2018/05/07 18:14:25 [INFO] Syncing SecretController Controller
2018-05-07T18:14:25.413223000Z 2018/05/07 18:14:25 [INFO] Syncing RoleController Controller
2018-05-07T18:14:25.413589000Z 2018/05/07 18:14:25 [INFO] Syncing PodSecurityPolicyTemplateProjectBindingController Controller
2018-05-07T18:14:25.413993000Z 2018/05/07 18:14:25 [INFO] Syncing NamespaceController Controller
2018-05-07T18:14:25.414430000Z 2018/05/07 18:14:25 [INFO] Syncing RoleBindingController Controller
2018-05-07T18:14:25.414850000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleController Controller
2018-05-07T18:14:25.415263000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectController Controller
2018-05-07T18:14:25.415677000Z 2018/05/07 18:14:25 [INFO] Syncing RoleTemplateController Controller
2018-05-07T18:14:25.416060000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterController Controller
2018-05-07T18:14:25.416402000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalRoleController Controller
2018-05-07T18:14:25.416853000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectRoleTemplateBindingController Controller
2018-05-07T18:14:25.417307000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleTemplateBindingController Controller
2018-05-07T18:14:25.417716000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalRoleBindingController Controller
2018-05-07T18:14:25.418130000Z 2018/05/07 18:14:25 [INFO] Syncing TokenController Controller
2018-05-07T18:14:25.418567000Z 2018/05/07 18:14:25 [INFO] Syncing UserController Controller
2018-05-07T18:14:25.418896000Z 2018/05/07 18:14:25 [INFO] Syncing CatalogController Controller
2018-05-07T18:14:25.419329000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller
2018-05-07T18:14:25.419736000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterEventController Controller
2018-05-07T18:14:25.420160000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectAlertController Controller
2018-05-07T18:14:25.420602000Z 2018/05/07 18:14:25 [INFO] Syncing PodSecurityPolicyTemplateController Controller
2018-05-07T18:14:25.420995000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalComposeConfigController Controller
2018-05-07T18:14:25.421469000Z 2018/05/07 18:14:25 [INFO] Syncing DynamicSchemaController Controller
2018-05-07T18:14:25.421856000Z 2018/05/07 18:14:25 [INFO] Syncing NodeDriverController Controller
2018-05-07T18:14:25.422368000Z 2018/05/07 18:14:25 [INFO] Syncing NodePoolController Controller
2018-05-07T18:14:25.422796000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleBindingController Controller
2018-05-07T18:14:25.549807000Z 2018/05/07 18:14:25 [INFO] Syncing SecretController Controller Done
2018-05-07T18:14:25.550291000Z 2018/05/07 18:14:25 [INFO] Syncing RoleController Controller Done
2018-05-07T18:14:25.550695000Z 2018/05/07 18:14:25 [INFO] Syncing NamespaceController Controller Done
2018-05-07T18:14:25.551026000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleController Controller Done
2018-05-07T18:14:25.551351000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectController Controller Done
2018-05-07T18:14:25.551660000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterController Controller Done
2018-05-07T18:14:25.552165000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectRoleTemplateBindingController Controller Done
2018-05-07T18:14:25.552473000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleTemplateBindingController Controller Done
2018-05-07T18:14:25.552803000Z 2018/05/07 18:14:25 [INFO] Syncing TokenController Controller Done
2018-05-07T18:14:25.553132000Z 2018/05/07 18:14:25 [INFO] Syncing UserController Controller Done
2018-05-07T18:14:25.622686000Z 2018/05/07 18:14:25 [INFO] Syncing RoleBindingController Controller Done
2018-05-07T18:14:25.623049000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalRoleController Controller Done
2018-05-07T18:14:25.623374000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalRoleBindingController Controller Done
2018-05-07T18:14:25.623686000Z 2018/05/07 18:14:25 [INFO] Syncing NodeController Controller Done
2018-05-07T18:14:25.624010000Z 2018/05/07 18:14:25 [INFO] Syncing DynamicSchemaController Controller Done
2018-05-07T18:14:25.624349000Z 2018/05/07 18:14:25 [INFO] Syncing NodeDriverController Controller Done
2018-05-07T18:14:25.642613000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterRoleBindingController Controller Done
2018-05-07T18:14:25.707320000Z 2018/05/07 18:14:25 [INFO] Syncing PodSecurityPolicyTemplateProjectBindingController Controller Done
2018-05-07T18:14:25.707714000Z 2018/05/07 18:14:25 [INFO] Syncing RoleTemplateController Controller Done
2018-05-07T18:14:25.708379000Z 2018/05/07 18:14:25 [INFO] Syncing CatalogController Controller Done
2018-05-07T18:14:25.708770000Z 2018/05/07 18:14:25 [INFO] Syncing ProjectAlertController Controller Done
2018-05-07T18:14:25.709132000Z 2018/05/07 18:14:25 [INFO] Syncing ClusterEventController Controller Done
2018-05-07T18:14:25.709452000Z 2018/05/07 18:14:25 [INFO] Syncing PodSecurityPolicyTemplateController Controller Done
2018-05-07T18:14:25.709813000Z 2018/05/07 18:14:25 [INFO] Syncing GlobalComposeConfigController Controller Done
2018-05-07T18:14:25.710738000Z 2018/05/07 18:14:25 [INFO] Syncing NodePoolController Controller Done
2018-05-07T18:14:25.879599000Z 2018/05/07 18:14:25 [INFO] Reconciling GlobalRoles
2018-05-07T18:14:25.881065000Z 2018/05/07 18:14:25 [INFO] Creating catalogs-use
2018-05-07T18:14:25.883704000Z 2018/05/07 18:14:25 [INFO] Creating roles-manage
2018-05-07T18:14:25.887980000Z 2018/05/07 18:14:25 [INFO] Creating settings-manage
2018-05-07T18:14:25.892619000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-catalogs-use for corresponding GlobalRole
2018-05-07T18:14:25.893066000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-roles-manage for corresponding GlobalRole
2018-05-07T18:14:25.893595000Z 2018/05/07 18:14:25 [INFO] Creating authn-manage
2018-05-07T18:14:25.899061000Z 2018/05/07 18:14:25 [INFO] Creating podsecuritypolicytemplates-manage
2018-05-07T18:14:25.899877000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-settings-manage for corresponding GlobalRole
2018-05-07T18:14:25.906580000Z 2018/05/07 18:14:25 [INFO] Creating admin
2018-05-07T18:14:25.907609000Z 2018/05/07 18:14:25 [INFO] Creating user
2018-05-07T18:14:25.908011000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-podsecuritypolicytemplates-manage for corresponding GlobalRole
2018-05-07T18:14:25.909455000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-authn-manage for corresponding GlobalRole
2018-05-07T18:14:25.912965000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-admin for corresponding GlobalRole
2018-05-07T18:14:25.917568000Z 2018/05/07 18:14:25 [INFO] Creating clusters-create
2018-05-07T18:14:25.927360000Z 2018/05/07 18:14:25 [INFO] Creating nodedrivers-manage
2018-05-07T18:14:25.927940000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-user for corresponding GlobalRole
2018-05-07T18:14:25.928492000Z 2018/05/07 18:14:25 [INFO] Creating catalogs-manage
2018-05-07T18:14:25.929886000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-clusters-create for corresponding GlobalRole
2018-05-07T18:14:25.935812000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-nodedrivers-manage for corresponding GlobalRole
2018-05-07T18:14:25.936300000Z 2018/05/07 18:14:25 [INFO] Creating users-manage
2018-05-07T18:14:25.943715000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-catalogs-manage for corresponding GlobalRole
2018-05-07T18:14:25.944183000Z 2018/05/07 18:14:25 [INFO] Creating user-base
2018-05-07T18:14:25.948336000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-users-manage for corresponding GlobalRole
2018-05-07T18:14:25.948773000Z 2018/05/07 18:14:25 [INFO] Reconciling RoleTemplates
2018-05-07T18:14:25.953378000Z 2018/05/07 18:14:25 [INFO] Creating ingress-view
2018-05-07T18:14:25.953788000Z 2018/05/07 18:14:25 [INFO] Creating new ClusterRole cattle-globalrole-user-base for corresponding GlobalRole
2018-05-07T18:14:25.957828000Z 2018/05/07 18:14:25 [INFO] Creating configmaps-view
2018-05-07T18:14:25.961912000Z 2018/05/07 18:14:25 [INFO] Creating serviceaccounts-manage
2018-05-07T18:14:25.964850000Z 2018/05/07 18:14:25 [INFO] Creating cluster-member
2018-05-07T18:14:25.968869000Z 2018/05/07 18:14:25 [INFO] Creating clusterroletemplatebindings-view
2018-05-07T18:14:25.972409000Z 2018/05/07 18:14:25 [INFO] Creating project-owner
2018-05-07T18:14:25.975897000Z 2018/05/07 18:14:25 [INFO] Creating create-ns
2018-05-07T18:14:25.978217000Z 2018/05/07 18:14:25 [INFO] Creating workloads-view
2018-05-07T18:14:25.980995000Z 2018/05/07 18:14:25 [INFO] Creating projectroletemplatebindings-manage
2018-05-07T18:14:25.984208000Z 2018/05/07 18:14:25 [INFO] Creating storage-manage
2018-05-07T18:14:25.987287000Z 2018/05/07 18:14:25 [INFO] Creating projectroletemplatebindings-view
2018-05-07T18:14:25.990411000Z 2018/05/07 18:14:25 [INFO] Creating cluster-admin
2018-05-07T18:14:25.992938000Z 2018/05/07 18:14:25 [INFO] Creating project-member
2018-05-07T18:14:25.996369000Z 2018/05/07 18:14:25 [INFO] Creating read-only
2018-05-07T18:14:26.000553000Z 2018/05/07 18:14:25 [INFO] Creating secrets-view
2018-05-07T18:14:26.003352000Z 2018/05/07 18:14:26 [INFO] Creating configmaps-manage
2018-05-07T18:14:26.005876000Z 2018/05/07 18:14:26 [INFO] Creating persistentvolumeclaims-manage
2018-05-07T18:14:26.009168000Z 2018/05/07 18:14:26 [INFO] Creating secrets-manage
2018-05-07T18:14:26.011349000Z 2018/05/07 18:14:26 [INFO] Creating persistentvolumeclaims-view
2018-05-07T18:14:26.014627000Z 2018/05/07 18:14:26 [INFO] Creating serviceaccounts-view
2018-05-07T18:14:26.016787000Z 2018/05/07 18:14:26 [INFO] Creating admin
2018-05-07T18:14:26.019324000Z 2018/05/07 18:14:26 [INFO] Creating edit
2018-05-07T18:14:26.021818000Z 2018/05/07 18:14:26 [INFO] Creating view
2018-05-07T18:14:26.027836000Z 2018/05/07 18:14:26 [INFO] Creating nodes-view
2018-05-07T18:14:26.030328000Z 2018/05/07 18:14:26 [INFO] Creating workloads-manage
2018-05-07T18:14:26.033195000Z 2018/05/07 18:14:26 [INFO] Creating nodes-manage
2018-05-07T18:14:26.036497000Z 2018/05/07 18:14:26 [INFO] Creating clusterroletemplatebindings-manage
2018-05-07T18:14:26.038507000Z 2018/05/07 18:14:26 [INFO] Creating services-view
2018-05-07T18:14:26.041594000Z 2018/05/07 18:14:26 [INFO] Creating cluster-owner
2018-05-07T18:14:26.044247000Z 2018/05/07 18:14:26 [INFO] Creating projects-create
2018-05-07T18:14:26.048165000Z 2018/05/07 18:14:26 [INFO] Creating projects-view
2018-05-07T18:14:26.050044000Z 2018/05/07 18:14:26 [INFO] Creating ingress-manage
2018-05-07T18:14:26.053529000Z 2018/05/07 18:14:26 [INFO] Creating services-manage
2018-05-07T18:14:26.172816000Z 2018/05/07 18:14:26 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding globalrolebinding-xkmc5
2018-05-07T18:14:26.187906000Z 2018/05/07 18:14:26 [INFO] Creating node driver amazonec2
2018-05-07T18:14:26.190763000Z 2018/05/07 18:14:26 [INFO] Creating node driver azure
2018-05-07T18:14:26.193791000Z 2018/05/07 18:14:26 [INFO] Creating node driver digitalocean
2018-05-07T18:14:26.201338000Z 2018/05/07 18:14:26 [INFO] Creating node driver exoscale
2018-05-07T18:14:26.206085000Z 2018/05/07 18:14:26 [INFO] Creating node driver openstack
2018-05-07T18:14:26.215517000Z 2018/05/07 18:14:26 [INFO] Creating node driver otc
2018-05-07T18:14:26.223840000Z 2018/05/07 18:14:26 [INFO] Creating node driver packet
2018-05-07T18:14:26.225215000Z 2018/05/07 18:14:26 [INFO] Creating node driver rackspace
2018-05-07T18:14:26.235761000Z 2018/05/07 18:14:26 [INFO] Creating node driver softlayer
2018-05-07T18:14:26.236268000Z 2018/05/07 18:14:26 [INFO] Creating node driver vmwarevsphere
2018-05-07T18:14:26.281769000Z 2018/05/07 18:14:26 [INFO] uploading azureConfig to node schema
2018-05-07T18:14:26.325989000Z 2018/05/07 18:14:26 [INFO] uploading azureConfig to node schema
2018-05-07T18:14:26.327613000Z 2018/05/07 18:14:26 [INFO] Listening on :443
2018-05-07T18:14:26.328043000Z 2018/05/07 18:14:26 [INFO] Listening on :80
2018-05-07T18:14:26.333331000Z 2018/05/07 18:14:26 [INFO] uploading digitaloceanConfig to node schema
2018-05-07T18:14:26.338131000Z 2018/05/07 18:14:26 [INFO] uploading digitaloceanConfig to node schema
2018-05-07T18:14:26.340609000Z 2018/05/07 18:14:26 [INFO] uploading amazonec2Config to node schema
2018-05-07T18:14:26.344688000Z 2018/05/07 18:14:26 [INFO] uploading amazonec2Config to node schema
2018-05-07T18:14:26.351869000Z 2018/05/07 18:14:26 [INFO] uploading vmwarevsphereConfig to node schema
2018-05-07T18:14:26.354015000Z 2018/05/07 18:14:26 [INFO] uploading vmwarevsphereConfig to node schema
Agent logs:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f380a0f21089 rancher/hyperkube:v1.10.1-rancher2 "/opt/rke/entrypoi..." 55 seconds ago Up 54 seconds kube-proxy
cf76c651d30d rancher/hyperkube:v1.10.1-rancher2 "/opt/rke/entrypoi..." 2 minutes ago Up 44 seconds kubelet
3d4de68042a3 rancher/hyperkube:v1.10.1-rancher2 "/opt/rke/entrypoi..." 2 minutes ago Up 2 minutes kube-scheduler
98c9fecacae8 rancher/hyperkube:v1.10.1-rancher2 "/opt/rke/entrypoi..." 3 minutes ago Up 3 minutes kube-controller-manager
69ab9449b4a7 rancher/hyperkube:v1.10.1-rancher2 "/opt/rke/entrypoi..." 3 minutes ago Up 3 minutes kube-apiserver
948e219a5fa4 rancher/rke-tools:v0.1.4 "/bin/bash" 3 minutes ago Created service-sidekick
4013315db596 rancher/coreos-etcd:v3.1.12 "/usr/local/bin/et..." 3 minutes ago Up 3 minutes etcd
53146b7ce4d0 rancher/rke-tools:v0.1.4 "/bin/bash" 4 minutes ago Exited (0) 4 minutes ago cert-fetcher
1e98d36284cd rancher/rancher-agent:v2.0.0 "run.sh -- share-r..." 5 minutes ago Exited (137) About a minute ago share-mnt
ebbde085788c rancher/rancher-agent:v2.0.0 "run.sh --server h..." 5 minutes ago Up 5 minutes eloquent_agnesi
Log: f380a0f21089
Log: cf76c651d30d
Log: 3d4de68042a3
Log: 98c9fecacae8
Log: 69ab9449b4a7
Log: 948e219a5fa4
Log: 4013315db596
Log: 53146b7ce4d0
Log: 1e98d36284cd
2018-05-07T18:31:33.822600000Z Found container ID: 38fd38d8f308dd7d1ccc3af3e7223b1c530c332f5a0f7c258b6fc79d6c666c6b
2018-05-07T18:31:33.823000000Z Checking root: /host/run/runc
2018-05-07T18:31:33.824477000Z Checking file: 1e98d36284cdc22032bbe1b1d0c9bf0beb6bbad93b19af406f60e0b65c885b14
2018-05-07T18:31:33.825120000Z Checking file: 38fd38d8f308dd7d1ccc3af3e7223b1c530c332f5a0f7c258b6fc79d6c666c6b
2018-05-07T18:31:33.825855000Z Found state.json: 38fd38d8f308dd7d1ccc3af3e7223b1c530c332f5a0f7c258b6fc79d6c666c6b
2018-05-07T18:31:33.829394000Z time="2018-05-07T18:31:33Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/26151/ns/mnt -F -- /var/lib/docker/devicemapper/mnt/f9d644b00f77c41c1eefa127d0143dd646a94d9fcd5aea8200ff6998c1148f74/rootfs/usr/bin/share-mnt --stage2 /var/lib/kubelet /var/lib/rancher -- norun]"
2018-05-07T18:33:48.170430000Z kubelet
2018-05-07T18:35:34.058287000Z Found container ID: d3eab9ef61b7304712112c575c4ec3560a7a8f01fdc1214671d72c19441ffd30
2018-05-07T18:35:34.058645000Z Checking root: /host/run/runc
2018-05-07T18:35:34.062697000Z Checking file: 1e98d36284cdc22032bbe1b1d0c9bf0beb6bbad93b19af406f60e0b65c885b14
2018-05-07T18:35:34.063097000Z Checking file: 3d4de68042a34f6dfc12e1972f8d354eeef007b2f901c55fe33a2542daba9869
2018-05-07T18:35:34.063447000Z Checking file: 4013315db596357727f90f707fed5376b5c11d3e0f87641da6377ae3c44ffb3f
2018-05-07T18:35:34.063826000Z Checking file: 69ab9449b4a7bf6d091dfc094518636eac385b61013f8d7601cc08bd04782e04
2018-05-07T18:35:34.064193000Z Checking file: 98c9fecacae8e11775e659625c8bb72d4f2efe25e47ba866ceac403475fe7b32
2018-05-07T18:35:34.064576000Z Checking file: cf76c651d30ddd41c2d2de2e0f7c203f980ac1cc076482688cf2a5118303e310
2018-05-07T18:35:34.065040000Z Checking file: d3eab9ef61b7304712112c575c4ec3560a7a8f01fdc1214671d72c19441ffd30
2018-05-07T18:35:34.065448000Z Found state.json: d3eab9ef61b7304712112c575c4ec3560a7a8f01fdc1214671d72c19441ffd30
2018-05-07T18:35:34.065940000Z time="2018-05-07T18:35:34Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/30818/ns/mnt -F -- /var/lib/docker/devicemapper/mnt/854cf0ab09034c2a3acb51b99729d16e206a68bcec20b1c29e162c61d5cf1eea/rootfs/usr/bin/share-mnt --stage2 /var/lib/kubelet /var/lib/rancher -- norun]"
2018-05-07T18:35:34.570341000Z kubelet
Log: ebbde085788c
2018-05-07T18:31:12.431215000Z -----BEGIN CERTIFICATE-----
2018-05-07T18:31:12.431631000Z MIIC7jCCAdagAwIBAgIBADANBgkqhkiG9w0BAQsFADAoMRIwEAYDVQQKEwl0aGUt
2018-05-07T18:31:12.432003000Z cmFuY2gxEjAQBgNVBAMTCWNhdHRsZS1jYTAeFw0xODA1MDcxODE0MjVaFw0yODA1
2018-05-07T18:31:12.432352000Z MDQxODE0MjVaMCgxEjAQBgNVBAoTCXRoZS1yYW5jaDESMBAGA1UEAxMJY2F0dGxl
2018-05-07T18:31:12.432729000Z LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtMFL30WLGM8f1Mp3
2018-05-07T18:31:12.433081000Z FCBtRpc5i84jsi8PhVy0m18Z79ysNvWZBoewmWTJAtpSBaLypnIkWIXyKys+ifzH
2018-05-07T18:31:12.433448000Z 4aejnhS373Gvhzjw/B0NCtqDJtTNWCN6aGf/JxOt2xDl6aJfBq8t/3CiaL9Chbl7
2018-05-07T18:31:12.433894000Z Yxes43gZBLGU7a0XCuLU8sCh1Yy5wMt2CrPQMMGQOLapUfJexwzOEbju7G1QWyxx
2018-05-07T18:31:12.434281000Z W8bYFGqgxvgYmEC7ln9hndDGSQwkd4NkUhYGI0N2+sTdLDQLJT2V4yo1tBVXzHqC
2018-05-07T18:31:12.434642000Z WqtAz7aGCQjpTlHAcUDzEF10ndw1TCFNlW+rHf2ttkgAnBBCBJtxz+52hTu7s1nc
2018-05-07T18:31:12.435035000Z XgTI/wIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAN
2018-05-07T18:31:12.435413000Z BgkqhkiG9w0BAQsFAAOCAQEAakCNBkPiT43flHEQ91OeSTUObNNS0vUdGYMt3zrM
2018-05-07T18:31:12.435788000Z hjgaWkt2qjeQABHUwb3husdzVuXmvy5rllEY5gTEbkQHaytwEoZGG4XySVV2t695
2018-05-07T18:31:12.436230000Z CylEt5Aq1WEWvsD71Db9A7hAibx2UEvqF5Kdu9POHfdhVfdehdPA10N/XNMB3kGW
2018-05-07T18:31:12.436599000Z N7dHTFmWPd77Gm3ZwnLRudaaFmsRix3UdbEa4dTN+CEDvKdfBRCqsMLxzECGmNfg
2018-05-07T18:31:12.436985000Z FukCmc+fcPEwmKWuzfPFYFWNHjWeHilFgge1H/1jjjW5sFeinhXKwM+0n+O38Xul
2018-05-07T18:31:12.437358000Z REBqCjl/FdXT18Nzf9BEpQI4P/mTZNg87yBwyptvoKNd+A==
2018-05-07T18:31:12.437723000Z -----END CERTIFICATE-----
Hi,
I had the same issue when trying to provision a cluster on vSphere; it seems to be working in v2.0.1-rc1.
Same issue here on AWS. I just followed the v2 quick start guide.
Same on vSphere here; the health check returns "ok" when performed manually:
2018/05/15 19:14:12 [INFO] Provisioning cluster [c-rq6xb]
2018/05/15 19:14:12 [INFO] Creating cluster [c-rq6xb]
2018/05/15 19:14:13 [ERROR] Cluster c-rq6xb previously failed to create
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: Building Kubernetes cluster
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [dialer] Setup tunnel for host [10.1.1.187]
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [dialer] Setup tunnel for host [10.1.1.55]
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [state] Found local kube config file, trying to get state from cluster
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [reconcile] Local config is not vaild, rebuilding admin config
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [reconcile] Rebuilding and updating local kube config
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-545066037/kube_config_cluster.yml]
2018/05/15 19:14:13 [INFO] cluster [c-rq6xb] provisioning: [state] Fetching cluster state from Kubernetes
2018-05-15 19:14:13.517887 I | mvcc: store.index: compact 14950
2018-05-15 19:14:13.665494 I | mvcc: finished scheduled compaction at 14950 (took 114.088174ms)
2018/05/15 19:14:43 [INFO] cluster [c-rq6xb] provisioning: Timed out waiting for kubernetes cluster to get state
2018/05/15 19:14:43 [INFO] cluster [c-rq6xb] provisioning: [network] Deploying port listener containers
2018/05/15 19:14:44 [INFO] cluster [c-rq6xb] provisioning: [network] Port listener containers deployed successfully
2018/05/15 19:14:44 [INFO] cluster [c-rq6xb] provisioning: [network] Running control plane -> etcd port checks
2018/05/15 19:14:45 [INFO] cluster [c-rq6xb] provisioning: [network] Successfully started [rke-port-checker] container on host [10.1.1.187]
2018/05/15 19:14:45 [INFO] cluster [c-rq6xb] provisioning: [network] Running control plane -> worker port checks
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [network] Successfully started [rke-port-checker] container on host [10.1.1.187]
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [network] Running workers -> control plane port checks
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [network] Removing port listener containers
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [remove/rke-etcd-port-listener] Successfully removed container on host [10.1.1.55]
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [remove/rke-cp-port-listener] Successfully removed container on host [10.1.1.187]
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [network] Port listener containers removed successfully
2018/05/15 19:14:46 [INFO] cluster [c-rq6xb] provisioning: [certificates] Attempting to recover certificates from backup on [etcd] hosts
2018/05/15 19:14:47 [INFO] cluster [c-rq6xb] provisioning: [certificates] Successfully started [cert-fetcher] container on host [10.1.1.55]
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: [certificates] Certificate backup found on [etcd] hosts
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: [reconcile] Rebuilding and updating local kube config
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-545066037/kube_config_cluster.yml]
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: [reconcile] Reconciling cluster state
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: [reconcile] This is newly generated cluster
2018/05/15 19:14:52 [INFO] cluster [c-rq6xb] provisioning: [certificates] Deploying kubernetes certificates to Cluster nodes
2018/05/15 19:14:58 [INFO] cluster [c-rq6xb] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-545066037/kube_config_cluster.yml]
2018/05/15 19:14:58 [INFO] cluster [c-rq6xb] provisioning: [certificates] Successfully deployed kubernetes certificates to Cluster nodes
2018/05/15 19:14:58 [INFO] cluster [c-rq6xb] provisioning: Pre-pulling kubernetes images
2018/05/15 19:14:58 [INFO] cluster [c-rq6xb] provisioning: Kubernetes images pulled successfully
2018/05/15 19:14:58 [INFO] cluster [c-rq6xb] provisioning: [etcd] Building up etcd plane..
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [etcd] Successfully started [rke-log-linker] container on host [10.1.1.55]
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.1.1.55]
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [etcd] Successfully started etcd plane..
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [controlplane] Building up Controller Plane..
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [sidekick] Sidekick container already created on host [10.1.1.187]
2018/05/15 19:14:59 [INFO] cluster [c-rq6xb] provisioning: [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.1.1.187]
2018/05/15 19:15:49 [ERROR] cluster [c-rq6xb] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [10.1.1.187]: Get https://localhost:6443/healthz: can not build dialer to c-rq6xb:m-6h82z
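The "can not build dialer" error above is typical when the Rancher agent's WebSocket tunnel back to the server cannot be established through a reverse proxy, even though a plain HTTPS request to the same endpoint succeeds. A minimal sketch of the relevant nginx directives, in the spirit of the gist referenced earlier (the server name and certificate paths here are placeholders, not taken from this thread):

```
# must live inside the http { } context
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name rancher.example.com;          # placeholder

    ssl_certificate     /etc/nginx/tls.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls.key;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support: without these three directives the
        # agent tunnel fails while manual health checks still pass
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 900s;
    }
}
```

This would explain the symptom reported at the top of the thread: provisioning works when nginx is bypassed, and fails when it is in front.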
Just my 2 cents: as soon as I leave out my custom CA certificates, it works again.
And now it does not work on version v2.0.1-rc1 either.
Command used to bring it up: docker run -d -p 80:80 -p 443:443 rancher/rancher:v2.0.1-rc1
docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-58.git87f2fab.el7.centos.x86_64
Go version: go1.9.4
Git commit: 87f2fab/1.13.1
Built: Fri May 11 14:30:13 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-58.git87f2fab.el7.centos.x86_64
Go version: go1.9.4
Git commit: 87f2fab/1.13.1
Built: Fri May 11 14:30:13 2018
OS/Arch: linux/amd64
Experimental: false
CentOS 7: Linux AL-LINUX01088 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
rancher/rancher log:
2018/05/16 18:17:09 [INFO] Listening on /tmp/log.sock
2018/05/16 18:17:09 [INFO] [certificates] Generating CA kubernetes certificates
2018/05/16 18:17:09 [INFO] [certificates] Generating Kubernetes API server certificates
2018/05/16 18:17:09 [INFO] [certificates] Generating Kube Controller certificates
2018/05/16 18:17:10 [INFO] [certificates] Generating Kube Scheduler certificates
2018/05/16 18:17:11 [INFO] [certificates] Generating Kube Proxy certificates
2018/05/16 18:17:11 [INFO] [certificates] Generating Node certificate
2018/05/16 18:17:12 [INFO] [certificates] Generating admin certificates and kubeconfig
2018/05/16 18:17:13 [INFO] [certificates] Generating etcd-127.0.0.1 certificate and key
2018/05/16 18:17:13 [INFO] Activating driver rke
2018/05/16 18:17:13 [INFO] Activating driver rke done
I0516 18:17:13.674065 1 http.go:105] HTTP2 has been explicitly disabled
2018/05/16 18:17:13 [INFO] Running etcd --peer-client-cert-auth --client-cert-auth --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --listen-client-urls=https://0.0.0.0:2379 --listen-peer-urls=https://0.0.0.0:2380 --initial-cluster=etcd-master=https://127.0.0.1:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --initial-cluster-token=etcd-cluster-1 --initial-advertise-peer-urls=https://127.0.0.1:2380 --data-dir=/var/lib/rancher/etcd --name=etcd-master --advertise-client-urls=https://127.0.0.1:2379,https://127.0.0.1:4001 --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem
2018-05-16 18:17:13.676004 I | etcdmain: etcd Version: 3.2.13
2018-05-16 18:17:13.676028 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2018-05-16 18:17:13.676034 I | etcdmain: Go Version: go1.9.2
2018-05-16 18:17:13.676041 I | etcdmain: Go OS/Arch: linux/amd64
2018-05-16 18:17:13.676047 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-05-16 18:17:13.676148 I | embed: peerTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2018-05-16 18:17:13.677063 I | embed: listening for peers on https://0.0.0.0:2380
2018-05-16 18:17:13.677179 I | embed: listening for client requests on 0.0.0.0:2379
2018/05/16 18:17:13 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:13 [INFO] Running kube-apiserver --insecure-bind-address=127.0.0.1 --bind-address=127.0.0.1 --allow-privileged=true --storage-backend=etcd3 --service-cluster-ip-range=10.43.0.0/16 --authorization-mode=Node,RBAC --insecure-port=0 --secure-port=6443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --admission-control=ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --cloud-provider= --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --apiserver-count=1 --advertise-address=10.43.0.1 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-servers=https://127.0.0.1:2379 --etcd-prefix=/registry -v=1 --logtostderr=false --alsologtostderr=false
2018/05/16 18:17:13 [INFO] Activating driver gke
2018/05/16 18:17:13 [INFO] Activating driver gke done
2018/05/16 18:17:13 [INFO] Activating driver aks
2018/05/16 18:17:13 [INFO] Activating driver aks done
2018/05/16 18:17:13 [INFO] Activating driver eks
2018/05/16 18:17:13 [INFO] Activating driver eks done
2018/05/16 18:17:13 [INFO] Activating driver import
2018/05/16 18:17:13 [INFO] Activating driver import done
2018/05/16 18:17:13 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-16 18:17:13.683906 I | etcdserver: name = etcd-master
2018-05-16 18:17:13.683932 I | etcdserver: data dir = /var/lib/rancher/etcd
2018-05-16 18:17:13.683941 I | etcdserver: member dir = /var/lib/rancher/etcd/member
2018-05-16 18:17:13.683946 I | etcdserver: heartbeat = 100ms
2018-05-16 18:17:13.683951 I | etcdserver: election = 1000ms
2018-05-16 18:17:13.683956 I | etcdserver: snapshot count = 100000
2018-05-16 18:17:13.683970 I | etcdserver: advertise client URLs = https://127.0.0.1:2379,https://127.0.0.1:4001
2018-05-16 18:17:13.683978 I | etcdserver: initial advertise peer URLs = https://127.0.0.1:2380
2018-05-16 18:17:13.683990 I | etcdserver: initial cluster = etcd-master=https://127.0.0.1:2380
2018/05/16 18:17:13 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018-05-16 18:17:13.702267 I | etcdserver: starting member e92d66acd89ecf29 in cluster 7581d6eb2d25405b
2018-05-16 18:17:13.702322 I | raft: e92d66acd89ecf29 became follower at term 0
2018-05-16 18:17:13.702358 I | raft: newRaft e92d66acd89ecf29 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-05-16 18:17:13.702372 I | raft: e92d66acd89ecf29 became follower at term 1
2018-05-16 18:17:13.718199 W | auth: simple token is not cryptographically signed
2018-05-16 18:17:13.739442 I | etcdserver: starting server... [version: 3.2.13, cluster version: to_be_decided]
2018-05-16 18:17:13.740243 I | embed: ClientTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2018-05-16 18:17:13.740631 I | etcdserver/membership: added member e92d66acd89ecf29 [https://127.0.0.1:2380] to cluster 7581d6eb2d25405b
2018-05-16 18:17:14.602970 I | raft: e92d66acd89ecf29 is starting a new election at term 1
2018-05-16 18:17:14.603022 I | raft: e92d66acd89ecf29 became candidate at term 2
2018-05-16 18:17:14.603043 I | raft: e92d66acd89ecf29 received MsgVoteResp from e92d66acd89ecf29 at term 2
2018-05-16 18:17:14.603061 I | raft: e92d66acd89ecf29 became leader at term 2
2018-05-16 18:17:14.603071 I | raft: raft.node: e92d66acd89ecf29 elected leader e92d66acd89ecf29 at term 2
2018-05-16 18:17:14.603525 I | etcdserver: published {Name:etcd-master ClientURLs:[https://127.0.0.1:2379 https://127.0.0.1:4001]} to cluster 7581d6eb2d25405b
2018-05-16 18:17:14.603614 I | etcdserver: setting up the initial cluster version to 3.2
2018-05-16 18:17:14.603662 I | embed: ready to serve client requests
2018-05-16 18:17:14.603921 I | embed: serving client requests on [::]:2379
2018-05-16 18:17:14.606552 N | etcdserver/membership: set the initial cluster version to 3.2
2018-05-16 18:17:14.606616 I | etcdserver/api: enabled capabilities for version 3.2
[restful] 2018/05/16 18:17:15 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
[restful] 2018/05/16 18:17:15 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
2018/05/16 18:17:15 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:15 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:15 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
[restful] 2018/05/16 18:17:15 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
[restful] 2018/05/16 18:17:15 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
2018/05/16 18:17:17 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:17 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:17 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: getsockopt: connection refused
2018/05/16 18:17:19 [INFO] Running kube-scheduler --v=2 --address=0.0.0.0 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true -v=1 --logtostderr=false --alsologtostderr=false
2018/05/16 18:17:19 [INFO] Running kube-controller-manager --leader-elect=true --node-monitor-grace-period=40s --v=2 --cloud-provider= --allow-untagged-cloud=true --enable-hostpath-provisioner=false --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --address=0.0.0.0 --configure-cloud-routes=false --cluster-cidr=10.42.0.0/16 --pod-eviction-timeout=5m0s --allocate-node-cidrs=true --service-cluster-ip-range=10.43.0.0/16 --service-account-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --use-service-account-credentials=true -v=1 --logtostderr=false --alsologtostderr=false
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances
E0516 18:17:19.725373 1 server.go:173] unable to register configz: register config "componentconfig" twice
2018/05/16 18:17:19 [INFO] Creating CRD authconfigs.management.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD apps.project.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD clusteralerts.management.cattle.io
E0516 18:17:19.755305 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0516 18:17:19.755501 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:103: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0516 18:17:19.757786 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0516 18:17:19.757978 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0516 18:17:19.759457 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get endpoints in the namespace "kube-system"
E0516 18:17:19.760110 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0516 18:17:19.760285 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0516 18:17:19.761188 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0516 18:17:19.761287 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
2018/05/16 18:17:19 [INFO] Creating CRD projectalerts.management.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD apprevisions.project.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD notifiers.management.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD namespacecomposeconfigs.project.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD catalogs.management.cattle.io
2018/05/16 18:17:19 [INFO] Creating CRD clusterevents.management.cattle.io
2018/05/16 18:17:19 [INFO] Waiting for CRD namespacecomposeconfigs.project.cattle.io to become available
2018/05/16 18:17:19 [INFO] Creating CRD clusterloggings.management.cattle.io
2018/05/16 18:17:20 [INFO] Creating CRD clusterregistrationtokens.management.cattle.io
2018/05/16 18:17:20 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
2018/05/16 18:17:20 [INFO] Creating CRD clusters.management.cattle.io
2018/05/16 18:17:20 [INFO] Done waiting for CRD namespacecomposeconfigs.project.cattle.io to become available
2018/05/16 18:17:20 [INFO] Waiting for CRD apprevisions.project.cattle.io to become available
E0516 18:17:20.756605 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0516 18:17:20.759000 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:103: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0516 18:17:20.769213 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0516 18:17:20.770972 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0516 18:17:20.772949 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0516 18:17:20.774642 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0516 18:17:20.780486 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0516 18:17:20.782060 1 reflector.go:205] github.com/rancher/rancher/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
2018/05/16 18:17:20 [INFO] Creating CRD clustercomposeconfigs.management.cattle.io
2018/05/16 18:17:21 [INFO] Creating CRD globalcomposeconfigs.management.cattle.io
2018/05/16 18:17:21 [INFO] Creating CRD dynamicschemas.management.cattle.io
2018/05/16 18:17:21 [INFO] Done waiting for CRD apprevisions.project.cattle.io to become available
2018/05/16 18:17:21 [INFO] Creating CRD globalrolebindings.management.cattle.io
E0516 18:17:21.876444 1 controllermanager.go:384] Server isn't healthy yet. Waiting a little while.
2018/05/16 18:17:21 [INFO] Creating CRD globalroles.management.cattle.io
2018/05/16 18:17:22 [INFO] Creating CRD groupmembers.management.cattle.io
2018/05/16 18:17:22 [INFO] Creating CRD groups.management.cattle.io
2018/05/16 18:17:22 [INFO] Creating CRD listenconfigs.management.cattle.io
2018/05/16 18:17:22 [INFO] Creating CRD nodes.management.cattle.io
2018/05/16 18:17:22 [INFO] Creating CRD nodepools.management.cattle.io
E0516 18:17:23.032171 1 core.go:70] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
2018/05/16 18:17:23 [INFO] Creating CRD nodedrivers.management.cattle.io
E0516 18:17:23.167644 1 certificates.go:48] Failed to start certificate controller: error reading CA cert file "/etc/kubernetes/ca/ca.pem": open /etc/kubernetes/ca/ca.pem: no such file or directory
2018/05/16 18:17:23 [INFO] Creating CRD nodetemplates.management.cattle.io
2018/05/16 18:17:23 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2018/05/16 18:17:23 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2018/05/16 18:17:23 [INFO] Creating CRD preferences.management.cattle.io
2018/05/16 18:17:24 [INFO] Creating CRD projectloggings.management.cattle.io
2018/05/16 18:17:24 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2018/05/16 18:17:24 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2018/05/16 18:17:24 [INFO] Creating CRD projects.management.cattle.io
2018/05/16 18:17:24 [INFO] Creating CRD roletemplates.management.cattle.io
2018/05/16 18:17:25 [INFO] Creating CRD settings.management.cattle.io
2018/05/16 18:17:25 [INFO] Creating CRD templates.management.cattle.io
2018/05/16 18:17:25 [INFO] Creating CRD templateversions.management.cattle.io
2018/05/16 18:17:25 [INFO] Creating CRD templatecontents.management.cattle.io
2018/05/16 18:17:25 [INFO] Creating CRD clusterpipelines.management.cattle.io
2018/05/16 18:17:26 [INFO] Creating CRD pipelines.management.cattle.io
2018/05/16 18:17:26 [INFO] Creating CRD pipelineexecutions.management.cattle.io
2018/05/16 18:17:26 [INFO] Creating CRD pipelineexecutionlogs.management.cattle.io
2018/05/16 18:17:26 [INFO] Creating CRD sourcecodecredentials.management.cattle.io
2018/05/16 18:17:26 [INFO] Creating CRD sourcecoderepositories.management.cattle.io
2018/05/16 18:17:27 [INFO] Creating CRD tokens.management.cattle.io
2018/05/16 18:17:27 [INFO] Creating CRD users.management.cattle.io
2018/05/16 18:17:27 [INFO] Starting API controllers
2018/05/16 18:17:27 [INFO] Syncing SecretController Controller
2018/05/16 18:17:27 [INFO] Syncing RoleController Controller
2018/05/16 18:17:27 [INFO] Syncing RoleBindingController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterRoleController Controller
2018/05/16 18:17:27 [INFO] Syncing ListenConfigController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterRegistrationTokenController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterController Controller
2018/05/16 18:17:27 [INFO] Syncing NodeController Controller
2018/05/16 18:17:27 [INFO] Syncing NodeController Controller
2018/05/16 18:17:27 [INFO] Syncing UserController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterRoleTemplateBindingController Controller
2018/05/16 18:17:27 [INFO] Syncing ProjectRoleTemplateBindingController Controller
2018/05/16 18:17:27 [INFO] Syncing TokenController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterPipelineController Controller
2018/05/16 18:17:27 [INFO] Syncing SourceCodeCredentialController Controller
2018/05/16 18:17:27 [INFO] Syncing SourceCodeRepositoryController Controller
2018/05/16 18:17:27 [INFO] Syncing PipelineController Controller
2018/05/16 18:17:27 [INFO] Syncing ProjectController Controller
2018/05/16 18:17:27 [INFO] Syncing GroupMemberController Controller
2018/05/16 18:17:27 [INFO] Syncing GroupController Controller
2018/05/16 18:17:27 [INFO] Syncing AuthConfigController Controller
2018/05/16 18:17:27 [INFO] Syncing DynamicSchemaController Controller
2018/05/16 18:17:27 [INFO] Syncing NodeDriverController Controller
2018/05/16 18:17:27 [INFO] Syncing SettingController Controller
2018/05/16 18:17:27 [INFO] Syncing ClusterRoleBindingController Controller
2018/05/16 18:17:27 [INFO] Syncing SecretController Controller Done
2018/05/16 18:17:27 [INFO] Syncing RoleController Controller Done
2018/05/16 18:17:27 [INFO] Syncing RoleBindingController Controller Done
2018/05/16 18:17:27 [INFO] Syncing ListenConfigController Controller Done
2018/05/16 18:17:27 [INFO] Syncing PipelineController Controller Done
2018/05/16 18:17:27 [INFO] Syncing SettingController Controller Done
2018/05/16 18:17:27 [INFO] Syncing ClusterRoleBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRegistrationTokenController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ProjectController Controller Done
2018/05/16 18:17:28 [INFO] Syncing GroupMemberController Controller Done
2018/05/16 18:17:28 [INFO] Syncing GroupController Controller Done
2018/05/16 18:17:28 [INFO] Syncing DynamicSchemaController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodeDriverController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodeController Controller Done
2018/05/16 18:17:28 [INFO] Syncing UserController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodeController Controller Done
2018/05/16 18:17:28 [INFO] Syncing SourceCodeCredentialController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleTemplateBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ProjectRoleTemplateBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing TokenController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterPipelineController Controller Done
2018/05/16 18:17:28 [INFO] Syncing SourceCodeRepositoryController Controller Done
2018/05/16 18:17:28 [INFO] Syncing AuthConfigController Controller Done
2018/05/16 18:17:28 [INFO] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"cattle-controllers", UID:"64071368-5935-11e8-8e37-0242ac110002", APIVersion:"v1", ResourceVersion:"277", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 6a1aeed2cb39 became leader
2018/05/16 18:17:28 [INFO] Starting catalog controller
2018/05/16 18:17:28 [INFO] Starting management controllers
2018/05/16 18:17:28 [INFO] Syncing SecretController Controller
2018/05/16 18:17:28 [INFO] Syncing RoleController Controller
2018/05/16 18:17:28 [INFO] Syncing NamespaceController Controller
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing PodSecurityPolicyTemplateProjectBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing RoleBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing ProjectController Controller
2018/05/16 18:17:28 [INFO] Syncing RoleTemplateController Controller
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleController Controller
2018/05/16 18:17:28 [INFO] Syncing ClusterController Controller
2018/05/16 18:17:28 [INFO] Syncing GlobalRoleController Controller
2018/05/16 18:17:28 [INFO] Syncing ProjectRoleTemplateBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleTemplateBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing GlobalRoleBindingController Controller
2018/05/16 18:17:28 [INFO] Syncing TokenController Controller
2018/05/16 18:17:28 [INFO] Syncing UserController Controller
2018/05/16 18:17:28 [INFO] Syncing CatalogController Controller
2018/05/16 18:17:28 [INFO] Syncing NodeController Controller
2018/05/16 18:17:28 [INFO] Syncing ClusterEventController Controller
2018/05/16 18:17:28 [INFO] Syncing ProjectAlertController Controller
2018/05/16 18:17:28 [INFO] Syncing PodSecurityPolicyTemplateController Controller
2018/05/16 18:17:28 [INFO] Syncing GlobalComposeConfigController Controller
2018/05/16 18:17:28 [INFO] Syncing DynamicSchemaController Controller
2018/05/16 18:17:28 [INFO] Syncing NodeDriverController Controller
2018/05/16 18:17:28 [INFO] Syncing NodePoolController Controller
2018/05/16 18:17:28 [INFO] Syncing SecretController Controller Done
2018/05/16 18:17:28 [INFO] Syncing RoleController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NamespaceController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing RoleBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ProjectController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ProjectRoleTemplateBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing PodSecurityPolicyTemplateProjectBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing RoleTemplateController Controller Done
2018/05/16 18:17:28 [INFO] Syncing GlobalRoleController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterRoleTemplateBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing TokenController Controller Done
2018/05/16 18:17:28 [INFO] Syncing UserController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodeController Controller Done
2018/05/16 18:17:28 [INFO] Syncing DynamicSchemaController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodeDriverController Controller Done
2018/05/16 18:17:28 [INFO] Syncing GlobalRoleBindingController Controller Done
2018/05/16 18:17:28 [INFO] Syncing CatalogController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ClusterEventController Controller Done
2018/05/16 18:17:28 [INFO] Syncing ProjectAlertController Controller Done
2018/05/16 18:17:28 [INFO] Syncing PodSecurityPolicyTemplateController Controller Done
2018/05/16 18:17:28 [INFO] Syncing GlobalComposeConfigController Controller Done
2018/05/16 18:17:28 [INFO] Syncing NodePoolController Controller Done
2018/05/16 18:17:29 [INFO] Reconciling GlobalRoles
2018/05/16 18:17:29 [INFO] Listening on :443
2018/05/16 18:17:29 [INFO] Listening on :80
2018/05/16 18:17:29 [INFO] Creating nodedrivers-manage
2018/05/16 18:17:29 [INFO] Creating catalogs-use
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-nodedrivers-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating users-manage
2018/05/16 18:17:29 [INFO] Creating roles-manage
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-catalogs-use for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-users-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating authn-manage
2018/05/16 18:17:29 [INFO] Creating podsecuritypolicytemplates-manage
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-roles-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating clusters-create
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-authn-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating catalogs-manage
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-podsecuritypolicytemplates-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating settings-manage
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-clusters-create for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-catalogs-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating admin
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-settings-manage for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating user
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-admin for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating user-base
2018/05/16 18:17:29 [INFO] Reconciling RoleTemplates
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-user for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating new ClusterRole cattle-globalrole-user-base for corresponding GlobalRole
2018/05/16 18:17:29 [INFO] Creating serviceaccounts-view
2018/05/16 18:17:29 [INFO] Creating projectroletemplatebindings-manage
2018/05/16 18:17:29 [INFO] Creating projectroletemplatebindings-view
2018/05/16 18:17:29 [INFO] Creating secrets-view
2018/05/16 18:17:29 [INFO] Creating persistentvolumeclaims-manage
2018/05/16 18:17:29 [INFO] Creating serviceaccounts-manage
2018/05/16 18:17:29 [INFO] Creating services-view
2018/05/16 18:17:29 [INFO] Creating secrets-manage
2018/05/16 18:17:29 [INFO] Creating projects-view
2018/05/16 18:17:29 [INFO] Creating project-owner
2018/05/16 18:17:29 [INFO] Creating project-member
2018/05/16 18:17:29 [INFO] Creating storage-manage
2018/05/16 18:17:29 [INFO] Creating workloads-manage
2018/05/16 18:17:29 [INFO] Creating ingress-manage
2018/05/16 18:17:29 [INFO] Creating configmaps-manage
2018/05/16 18:17:29 [INFO] Creating nodes-manage
2018/05/16 18:17:29 [INFO] Creating nodes-view
2018/05/16 18:17:29 [INFO] Creating persistentvolumeclaims-view
2018/05/16 18:17:29 [INFO] Creating projects-create
2018/05/16 18:17:29 [INFO] Creating workloads-view
2018/05/16 18:17:29 [INFO] Creating ingress-view
2018/05/16 18:17:29 [INFO] Creating services-manage
2018/05/16 18:17:29 [INFO] Creating edit
2018/05/16 18:17:29 [INFO] Creating cluster-owner
2018/05/16 18:17:29 [INFO] Creating cluster-member
2018/05/16 18:17:29 [INFO] Creating read-only
2018/05/16 18:17:29 [INFO] Creating create-ns
2018/05/16 18:17:29 [INFO] Creating cluster-admin
2018/05/16 18:17:29 [INFO] Creating admin
2018/05/16 18:17:29 [INFO] Creating clusterroletemplatebindings-manage
2018/05/16 18:17:29 [INFO] Creating view
2018/05/16 18:17:29 [INFO] Creating clusterroletemplatebindings-view
2018/05/16 18:17:29 [INFO] Creating configmaps-view
2018/05/16 18:17:29 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding globalrolebinding-r6gqt
2018/05/16 18:17:29 [INFO] Creating node driver amazonec2
2018/05/16 18:17:29 [INFO] Creating node driver azure
2018/05/16 18:17:29 [INFO] Creating node driver digitalocean
2018/05/16 18:17:29 [INFO] Creating node driver exoscale
2018/05/16 18:17:29 [INFO] Creating node driver openstack
2018/05/16 18:17:29 [INFO] Creating node driver otc
2018/05/16 18:17:29 [INFO] Creating node driver packet
2018/05/16 18:17:29 [INFO] Creating node driver rackspace
2018/05/16 18:17:29 [INFO] Creating node driver softlayer
2018/05/16 18:17:29 [INFO] Creating node driver aliyunecs
E0516 18:17:29.669799 1 generic_controller.go:204] NodeDriverController exoscale [node-driver-controller] failed with : dynamicschemas.management.cattle.io "nodetemplateconfig" already exists
2018/05/16 18:17:29 [INFO] Creating node driver vmwarevsphere
2018/05/16 18:17:29 [INFO] uploading azureConfig to node schema
2018/05/16 18:17:29 [INFO] uploading azureConfig to node schema
2018/05/16 18:17:29 [INFO] uploading digitaloceanConfig to node schema
2018/05/16 18:17:29 [INFO] uploading digitaloceanConfig to node schema
2018/05/16 18:17:29 [INFO] uploading amazonec2Config to node schema
2018/05/16 18:17:29 [INFO] uploading amazonec2Config to node schema
2018/05/16 18:17:29 [INFO] uploading vmwarevsphereConfig to node schema
2018/05/16 18:17:29 [INFO] uploading vmwarevsphereConfig to node schema
E0516 18:18:37.512330 1 generic_controller.go:204] ClusterRoleTemplateBindingController c-5lfbg/creator [mgmt-auth-crtb-controller] failed with : couldn't create role cluster-owner: roles.rbac.authorization.k8s.io "cluster-owner" already exists
E0516 18:22:16.553002 1 generic_controller.go:204] CatalogController library [catalog] failed with : Get https://git.rancher.io/charts/index.yaml: dial tcp 52.33.59.17:443: i/o timeout
2018/05/16 18:26:21 [INFO] Provisioning cluster [c-5lfbg]
2018/05/16 18:26:21 [INFO] Creating cluster [c-5lfbg]
2018/05/16 18:26:21 [INFO] cluster [c-5lfbg] provisioning: Building Kubernetes cluster
2018/05/16 18:26:21 [INFO] cluster [c-5lfbg] provisioning: [dialer] Setup tunnel for host [172.27.3.28]
2018/05/16 18:26:21 [ERROR] cluster [c-5lfbg] provisioning: Failed to set up SSH tunneling for host [172.27.3.28]: Can't establish dialer connection: can not build dialer to c-5lfbg:m-3ab207288eb1
2018/05/16 18:26:21 [ERROR] cluster [c-5lfbg] provisioning: Removing host [172.27.3.28] from node lists
2018/05/16 18:26:21 [ERROR] cluster [c-5lfbg] provisioning: Cluster must have at least one etcd plane host
2018/05/16 18:26:31 [INFO] Handling backend connection request [m-3ab207288eb1]
2018/05/16 18:26:51 [INFO] Provisioning cluster [c-5lfbg]
2018/05/16 18:26:51 [INFO] Creating cluster [c-5lfbg]
2018/05/16 18:26:51 [ERROR] Cluster c-5lfbg previously failed to create
2018/05/16 18:26:51 [INFO] cluster [c-5lfbg] provisioning: Building Kubernetes cluster
2018/05/16 18:26:51 [INFO] cluster [c-5lfbg] provisioning: [dialer] Setup tunnel for host [172.27.3.28]
2018/05/16 18:26:51 [INFO] cluster [c-5lfbg] provisioning: [network] Deploying port listener containers
2018/05/16 18:26:51 [INFO] cluster [c-5lfbg] provisioning: [network] Pulling image [rancher/rke-tools:v0.1.6] on host [172.27.3.28]
2018/05/16 18:27:00 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully pulled image [rancher/rke-tools:v0.1.6] on host [172.27.3.28]
2018/05/16 18:27:01 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-etcd-port-listener] container on host [172.27.3.28]
2018/05/16 18:27:02 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-cp-port-listener] container on host [172.27.3.28]
2018/05/16 18:27:02 [INFO] cluster [c-5lfbg] provisioning: [network] Port listener containers deployed successfully
2018/05/16 18:27:02 [INFO] cluster [c-5lfbg] provisioning: [network] Running control plane -> etcd port checks
2018/05/16 18:27:03 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-port-checker] container on host [172.27.3.28]
E0516 18:27:03.408539 1 generic_controller.go:204] CatalogController library [catalog] failed with : Get https://git.rancher.io/charts/index.yaml: dial tcp 35.160.139.247:443: i/o timeout
2018/05/16 18:27:04 [INFO] cluster [c-5lfbg] provisioning: [network] Running control plane -> worker port checks
2018/05/16 18:27:04 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-port-checker] container on host [172.27.3.28]
2018/05/16 18:27:05 [INFO] cluster [c-5lfbg] provisioning: [network] Running workers -> control plane port checks
2018/05/16 18:27:05 [INFO] cluster [c-5lfbg] provisioning: [network] Skipping kubeapi port check
2018/05/16 18:27:05 [INFO] cluster [c-5lfbg] provisioning: [network] Removing port listener containers
2018/05/16 18:27:06 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-etcd-port-listener] Successfully removed container on host [172.27.3.28]
2018/05/16 18:27:07 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-cp-port-listener] Successfully removed container on host [172.27.3.28]
2018/05/16 18:27:07 [INFO] cluster [c-5lfbg] provisioning: [network] Port listener containers removed successfully
2018/05/16 18:27:07 [INFO] cluster [c-5lfbg] provisioning: [certificates] Attempting to recover certificates from backup on [etcd] hosts
2018/05/16 18:27:08 [INFO] cluster [c-5lfbg] provisioning: [certificates] Successfully started [cert-fetcher] container on host [172.27.3.28]
2018/05/16 18:27:08 [INFO] cluster [c-5lfbg] provisioning: [certificates] No Certificate backup found on [etcd] hosts
2018/05/16 18:27:08 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating CA kubernetes certificates
2018/05/16 18:27:09 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating Kubernetes API server certificates
2018/05/16 18:27:09 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating Kube Controller certificates
2018/05/16 18:27:10 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating Kube Scheduler certificates
2018/05/16 18:27:11 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating Kube Proxy certificates
2018/05/16 18:27:11 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating Node certificate
2018/05/16 18:27:13 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating admin certificates and kubeconfig
2018/05/16 18:27:13 [INFO] cluster [c-5lfbg] provisioning: [certificates] Generating etcd-172.27.3.28 certificate and key
2018/05/16 18:27:14 [INFO] cluster [c-5lfbg] provisioning: [certificates] Temporarily saving certs to [etcd] hosts
2018-05-16 18:27:14.767802 I | mvcc: store.index: compact 984
2018-05-16 18:27:14.769731 I | mvcc: finished scheduled compaction at 984 (took 1.393375ms)
2018/05/16 18:27:20 [INFO] cluster [c-5lfbg] provisioning: [certificates] Saved certs to [etcd] hosts
2018/05/16 18:27:20 [INFO] cluster [c-5lfbg] provisioning: [reconcile] Reconciling cluster state
2018/05/16 18:27:20 [INFO] cluster [c-5lfbg] provisioning: [reconcile] This is newly generated cluster
2018/05/16 18:27:20 [INFO] cluster [c-5lfbg] provisioning: [certificates] Deploying kubernetes certificates to Cluster nodes
2018/05/16 18:27:27 [INFO] cluster [c-5lfbg] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-211856388/kube_config_cluster.yml]
2018/05/16 18:27:27 [INFO] cluster [c-5lfbg] provisioning: [certificates] Successfully deployed kubernetes certificates to Cluster nodes
2018/05/16 18:27:27 [INFO] cluster [c-5lfbg] provisioning: Pre-pulling kubernetes images
2018/05/16 18:27:27 [INFO] cluster [c-5lfbg] provisioning: [pre-deploy] Pulling image [rancher/hyperkube:v1.10.1-rancher2] on host [172.27.3.28]
2018/05/16 18:28:25 [INFO] cluster [c-5lfbg] provisioning: [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.1-rancher2] on host [172.27.3.28]
2018/05/16 18:28:25 [INFO] cluster [c-5lfbg] provisioning: Kubernetes images pulled successfully
2018/05/16 18:28:25 [INFO] cluster [c-5lfbg] provisioning: [etcd] Building up etcd plane..
2018/05/16 18:28:25 [INFO] cluster [c-5lfbg] provisioning: [etcd] Pulling image [rancher/coreos-etcd:v3.1.12] on host [172.27.3.28]
2018/05/16 18:28:31 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully pulled image [rancher/coreos-etcd:v3.1.12] on host [172.27.3.28]
2018/05/16 18:28:32 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully started [etcd] container on host [172.27.3.28]
2018/05/16 18:28:33 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully started [rke-log-linker] container on host [172.27.3.28]
2018/05/16 18:28:34 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-log-linker] Successfully removed container on host [172.27.3.28]
2018/05/16 18:28:34 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully started etcd plane..
2018/05/16 18:28:34 [INFO] cluster [c-5lfbg] provisioning: [controlplane] Building up Controller Plane..
2018/05/16 18:28:35 [INFO] cluster [c-5lfbg] provisioning: [controlplane] Successfully started [kube-apiserver] container on host [172.27.3.28]
2018/05/16 18:28:35 [INFO] cluster [c-5lfbg] provisioning: [healthcheck] Start Healthcheck on service [kube-apiserver] on host [172.27.3.28]
2018/05/16 18:29:28 [INFO] Handling backend connection request [m-8bd77d203e6b]
2018/05/16 18:29:29 [ERROR] cluster [c-5lfbg] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Service [kube-apiserver] is not healthy on host [172.27.3.28]. Response code: [403], response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"kube-apiserver\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
2018/05/16 18:29:29 [INFO] Provisioning cluster [c-5lfbg]
2018/05/16 18:29:29 [INFO] Creating cluster [c-5lfbg]
2018/05/16 18:29:29 [ERROR] Cluster c-5lfbg previously failed to create
2018/05/16 18:29:29 [INFO] cluster [c-5lfbg] provisioning: Building Kubernetes cluster
2018/05/16 18:29:29 [INFO] cluster [c-5lfbg] provisioning: [dialer] Setup tunnel for host [172.27.3.28]
2018/05/16 18:29:29 [INFO] cluster [c-5lfbg] provisioning: [state] Found local kube config file, trying to get state from cluster
2018/05/16 18:29:29 [INFO] cluster [c-5lfbg] provisioning: [state] Fetching cluster state from Kubernetes
2018/05/16 18:29:59 [INFO] cluster [c-5lfbg] provisioning: Timed out waiting for kubernetes cluster to get state
2018/05/16 18:29:59 [INFO] cluster [c-5lfbg] provisioning: [network] Deploying port listener containers
2018/05/16 18:30:00 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-etcd-port-listener] container on host [172.27.3.28]
2018/05/16 18:30:01 [INFO] cluster [c-5lfbg] provisioning: [network] Port listener containers deployed successfully
2018/05/16 18:30:01 [INFO] cluster [c-5lfbg] provisioning: [network] Running control plane -> etcd port checks
2018/05/16 18:30:02 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-port-checker] container on host [172.27.3.28]
2018/05/16 18:30:03 [INFO] cluster [c-5lfbg] provisioning: [network] Running control plane -> worker port checks
2018/05/16 18:30:04 [INFO] cluster [c-5lfbg] provisioning: [network] Successfully started [rke-port-checker] container on host [172.27.3.28]
2018/05/16 18:30:04 [INFO] cluster [c-5lfbg] provisioning: [network] Running workers -> control plane port checks
2018/05/16 18:30:04 [INFO] cluster [c-5lfbg] provisioning: [network] Skipping kubeapi port check
2018/05/16 18:30:04 [INFO] cluster [c-5lfbg] provisioning: [network] Removing port listener containers
2018/05/16 18:30:05 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-etcd-port-listener] Successfully removed container on host [172.27.3.28]
2018/05/16 18:30:06 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-cp-port-listener] Successfully removed container on host [172.27.3.28]
2018/05/16 18:30:06 [INFO] cluster [c-5lfbg] provisioning: [network] Port listener containers removed successfully
2018/05/16 18:30:06 [INFO] cluster [c-5lfbg] provisioning: [certificates] Attempting to recover certificates from backup on [etcd] hosts
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [certificates] Certificate backup found on [etcd] hosts
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [reconcile] Rebuilding and updating local kube config
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-186836371/kube_config_cluster.yml]
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [reconcile] host [172.27.3.28] is active master on the cluster
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [reconcile] Reconciling cluster state
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [reconcile] This is newly generated cluster
2018/05/16 18:30:09 [INFO] cluster [c-5lfbg] provisioning: [certificates] Deploying kubernetes certificates to Cluster nodes
2018/05/16 18:30:15 [INFO] cluster [c-5lfbg] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-186836371/kube_config_cluster.yml]
2018/05/16 18:30:15 [INFO] cluster [c-5lfbg] provisioning: [certificates] Successfully deployed kubernetes certificates to Cluster nodes
2018/05/16 18:30:15 [INFO] cluster [c-5lfbg] provisioning: Pre-pulling kubernetes images
2018/05/16 18:30:15 [INFO] cluster [c-5lfbg] provisioning: Kubernetes images pulled successfully
2018/05/16 18:30:15 [INFO] cluster [c-5lfbg] provisioning: [etcd] Building up etcd plane..
2018/05/16 18:30:16 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully started [rke-log-linker] container on host [172.27.3.28]
2018/05/16 18:30:17 [INFO] cluster [c-5lfbg] provisioning: [remove/rke-log-linker] Successfully removed container on host [172.27.3.28]
2018/05/16 18:30:17 [INFO] cluster [c-5lfbg] provisioning: [etcd] Successfully started etcd plane..
2018/05/16 18:30:17 [INFO] cluster [c-5lfbg] provisioning: [controlplane] Building up Controller Plane..
2018/05/16 18:30:17 [INFO] cluster [c-5lfbg] provisioning: [sidekick] Sidekick container already created on host [172.27.3.28]
2018/05/16 18:30:17 [INFO] cluster [c-5lfbg] provisioning: [healthcheck] Start Healthcheck on service [kube-apiserver] on host [172.27.3.28]
E0516 18:30:51.026295 1 streamwatcher.go:109] Unable to decode an event from the watch stream: json: cannot unmarshal string into Go struct field dynamicEvent.Object of type v3.NodeStatus
2018/05/16 18:31:15 [ERROR] cluster [c-5lfbg] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Service [kube-apiserver] is not healthy on host [172.27.3.28]. Response code: [403], response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"kube-apiserver\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
E0516 18:31:50.302086 1 generic_controller.go:204] CatalogController library [catalog] failed with : Get https://git.rancher.io/charts/index.yaml: dial tcp 35.160.43.145:443: i/o timeout
2018-05-16 18:32:14.776357 I | mvcc: store.index: compact 1488
2018-05-16 18:32:14.778456 I | mvcc: finished scheduled compaction at 1488 (took 1.601105ms)
2018/05/16 18:33:15 [INFO] Provisioning cluster [c-5lfbg]
2018/05/16 18:33:15 [INFO] Creating cluster [c-5lfbg]
2018/05/16 18:33:15 [ERROR] Cluster c-5lfbg previously failed to create
2018/05/16 18:33:15 [INFO] cluster [c-5lfbg] provisioning: Building Kubernetes cluster
2018/05/16 18:33:15 [INFO] cluster [c-5lfbg] provisioning: [dialer] Setup tunnel for host [172.27.3.28]
2018/05/16 18:33:15 [INFO] cluster [c-5lfbg] provisioning: [state] Found local kube config file, trying to get state from cluster
2018/05/16 18:33:15 [INFO] cluster [c-5lfbg] provisioning: [state] Fetching cluster state from Kubernetes
Log from the node with the worker role:
-----BEGIN CERTIFICATE-----
MIIC7jCCAdagAwIBAgIBADANBgkqhkiG9w0BAQsFADAoMRIwEAYDVQQKEwl0aGUt
cmFuY2gxEjAQBgNVBAMTCWNhdHRsZS1jYTAeFw0xODA1MTYxODE3MjlaFw0yODA1
MTMxODE3MjlaMCgxEjAQBgNVBAoTCXRoZS1yYW5jaDESMBAGA1UEAxMJY2F0dGxl
LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+ixO4aMzwwCLLTsZ
8oRXX/LJjz/CF1720nbMGlLtHsi0IfflqWj9AFm7WJsgK7GNxZxWnUf0lEOnX6YN
I0pU4hyfzhqNhguxFyRZYZze+y5L9iyNjBWY/m5GbVyeXrhM2pDUqU04zK+kskHX
AhrtpGQCA8uKQh7XDwJmRNxq6/LyJ/y6nqWA370B8DpZnr2HuDY09CHGKYqAs5KC
pztwLs3ZUk8aAOUspoZ82bh/EaQmEWH2mPq71qAAndlLlRbQiryNFuHekWVpy1Lf
xSov5dYFgfoPwipe255zcliQXvMtx71oDu5fZvmpkyT7FIZea35byd6M5mNrW4Nu
kZpdnwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAN
BgkqhkiG9w0BAQsFAAOCAQEA9KKswMG0HYJuB2UXNAK+EaTbJH15EuQmsopdyt1X
iLY3dY58Aas7U5x9jZMJ00+MjD7/fWzmlqQmtO2jAbJkWbWFGuESnxb/zKaQpWYi
agOc8NpYKnUUL46YfKG36Ma4mgNjk3UdiSXPOc40k1UaZROA2bRo59JBuFOhxcww
d/H+4LCI8A6FUAg+DcOrBFbdgZ9sW20o+f+TbbwzfAjd6qyHZMU9jE+3gF37IYA9
y3Na5+FPmcpdlqX9LXiSqZpJFjrajDB5/T1jBAhnxwWbmRotAV7yoDG2I9ng8edD
B45VOpyV51WA8sL+QFWxxQvaHhhGxmHoRpMjANKjp4uK6Q==
-----END CERTIFICATE-----
time="2018-05-16T18:29:28Z" level=info msg="Option requestedHostname=AL-LINUX01090"
time="2018-05-16T18:29:28Z" level=info msg="Option customConfig=map[address:172.27.3.29 internalAddress: roles:[worker]]"
time="2018-05-16T18:29:28Z" level=info msg="Option etcd=false"
time="2018-05-16T18:29:28Z" level=info msg="Option controlPlane=false"
time="2018-05-16T18:29:28Z" level=info msg="Option worker=true"
time="2018-05-16T18:29:28Z" level=info msg="Connecting to wss://172.27.3.27/v3/connect/register with token k72vxhq2vw6bc8qtd46h77m8s7gbc2mn55zgdzgbszstcf854f7x4w"
time="2018-05-16T18:29:28Z" level=info msg="Connecting to proxy" url="wss://172.27.3.27/v3/connect/register"
time="2018-05-16T18:29:28Z" level=info msg="waiting for node to register"
[... the same "waiting for node to register" message repeats every 2 seconds until 18:37:24 ...]
node with monitor and etcd:
Found container ID: 33bf8bc33ab17328f0bca2f463079f3e2a426884bac7993a1f40d6d14c88a137
Checking root: /host/run/runc
Checking file: 33bf8bc33ab17328f0bca2f463079f3e2a426884bac7993a1f40d6d14c88a137
Found state.json: 33bf8bc33ab17328f0bca2f463079f3e2a426884bac7993a1f40d6d14c88a137
time="2018-05-16T18:26:32Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/3846/ns/mnt -F -- /var/lib/docker/devicemapper/mnt/e9b3f3c2a5123578ce75ec664734f336cf07a7b65cb26c78e1315c4db0ae909d/rootfs/usr/bin/share-mnt --stage2 /var/lib/kubelet /var/lib/rancher -- norun]"
[... same cattle-ca certificate as shown above for the worker node ...]
Getting the same error with v2.0.1-rc5. I have not tried master, but I'm not sure what the difference is between master and rc5.
I was facing the same issue, and it's all related to security groups. Try allowing every protocol on BOTH sides (Rancher server and nodes).
It worked like a charm.
Issue 1:
Nodes created/added cannot reach the configured server-url (usually the IP/name of the host running the rancher/rancher container, or the LB/proxy in front of it). This can be tested by running curl -k https://configured_server-url on the node and seeing if you get a response; if so, network connectivity is not the issue. This is not something Rancher can configure automatically, as you created the node running rancher/rancher yourself, so you need to configure the appropriate inbound access from the created node's IP/subnet (HTTPS, TCP/443).
Issue 2:
Certificates are not configured correctly. This usually occurs when a certificate from a recognized CA needs its intermediates appended in order to be validated. It can work well in the browser, but the Go agent will reject it. Intermediates must be added in order: your certificate first, followed by the intermediates in the chain. You can check for this in the rancher/rancher-agent logging, where it will say x509: certificate signed by unknown authority
Issue 3:
The proxy/load balancer does not meet the prerequisites listed in the docs, e.g. it does not support websockets, does not pass the correct headers, or has no HTTP/2 support.
Issue 4:
Nodes are being re-used and aren't cleaned properly before re-use. See https://rancher.com/docs/rancher/v2.x/en/installation/removing-rancher/cleaning-cluster-nodes/#cleaning-a-node-manually for how to clean nodes so leftovers won't interfere with adding the node to a new cluster.
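As a concrete sketch of the Issue 2 fix above: the served certificate is just the files concatenated leaf-first. All file names and certificate contents here are placeholders, not real certs:

```shell
# Work in a throwaway directory; the PEM bodies below are fake stand-ins.
cd "$(mktemp -d)"
printf -- '-----BEGIN CERTIFICATE-----\n(leaf cert body)\n-----END CERTIFICATE-----\n' > leaf.pem
printf -- '-----BEGIN CERTIFICATE-----\n(intermediate body)\n-----END CERTIFICATE-----\n' > intermediate.pem
# Order matters: your server (leaf) certificate first, then each
# intermediate walking up the chain toward the root.
cat leaf.pem intermediate.pem > cert.pem
# Quick sanity check: one BEGIN line per certificate in the bundle.
grep -c 'BEGIN CERTIFICATE' cert.pem
```

With a real certificate, pointing the Rancher container's cert.pem mount at this bundled file is what lets the Go agent complete the chain.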
I'm having the same issue when trying to create a cluster on AWS.
I keep getting this error:
2018-05-30T20:25:10.563986436Z 2018/05/30 20:25:10 [ERROR] cluster [c-2hdlq] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [xx.xx.xx.xx]: Get https://localhost:6443/healthz: can not build dialer to c-2hdlq:m-mhwmg
Same here on vSphere 6.7. Are there any Layer 2 requirements between the control/etcd server and the Rancher server? L3 communication works just fine in my setup, but the Rancher server is on a different subnet than the etcd/control and worker machines.
/EDIT In my case this was due to the use of custom certificates and an issue with how the rancher-agent handles the trailing LF character; see #13831.
You can test Issue 3 (WS/HTTP2) from above with this upgrade test:
RANCHER_SERVER=echo.websocket.org
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: $RANCHER_SERVER" -H "Origin: https://www.websocket.org" https://$RANCHER_SERVER
Set RANCHER_SERVER to the hostname of your Rancher server and run this from a node/machine in your other subnet.
It should open an HTTPS connection, upgrade it to a websocket, print the handshakes, and finally connect and wait for commands.
We encountered this issue as well, and it turned out that the deployed nodes (etcd, controller, worker, ...) need to be able to connect back to Rancher's IP.
As this is becoming a collection of different issues, please look at https://github.com/rancher/rancher/issues/12657#issuecomment-391362566.
I just successfully tested our example nginx configuration (https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install-external-lb/#example-nginx-configuration) using docker run -d --name=nginx --restart=unless-stopped -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt -v /etc/nginx.conf:/etc/nginx/conf.d/default.conf --link=rancher-server nginx:1.14, and it worked with both a custom cluster and the EC2 node driver.
If you experience issues, file a new issue with versions/configs used and we can investigate.
Hi folks.
Getting the same problem with 2.0.2. I supplied my wildcard domain certs as bind mounts to the Rancher server (key.pem, cert.pem and cacerts.pem), but curl https://localhost:6443/healthz says curl: (60) SSL certificate problem: unable to get local issuer certificate
Rancher's web UI works fine, I mean it shows the secure padlock and all. Where did I go wrong?
Got the same issue. Isn't the problem that Rancher tries to access the health endpoint of the kube-apiserver, which has a self-signed cert?
The cert of the kube-apiserver looks like this:
I was checking the healthz endpoint manually from the machine Rancher is running on.
I get an "ok" back when using curl [...] -k -v, but Rancher is saying:
It looks like rke (or something else) is checking locally and does not have the CA cert with which the Kubernetes apiserver created its self-signed cert.
Hi guys,
I followed this page: https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install/#option-b-bring-your-own-certificate-self-signed.
Rancher is running nicely and shows the custom certs and private issuer information in the browser. But when a new cluster is created on Amazon EC2, the clusters are running and I can definitely hit the healthcheck URL https://13.211.146.121:6443/healthz, which returns 'ok'. But Rancher is unable to verify the healthcheck for the kube-apiserver.
In the Rancher logs I see either:
[healthcheck] Start Healthcheck on service [kube-apiserver] on host [13.211.146.121]
2018/08/08 07:14:32 [ERROR] cluster [c-268ds] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [13.211.146.121]: Get https://localhost:6443/healthz: can not build dialer to c-268ds:m-swk45, log: I0808 07:13:51.468200 1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
Or
[healthcheck] Start Healthcheck on service [kube-apiserver] on host [13.211.146.121]
2018/08/08 07:27:35 [ERROR] cluster [c-268ds] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [13.211.146.121]: Get https://localhost:6443/healthz: can not build dialer to c-268ds:m-swk45, log: I0808 07:26:32.685123 1 logs.go:49] http: TLS handshake error from 60.242.93.44:50822: EOF
This happens continuously in a loop.
There are no dramas when Rancher is started as described here:
https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install/#option-a-default-self-signed-certificate
K8s clusters on EC2 are visible and nicely managed by Rancher.
Have I missed anything with the custom bring-your-own self-signed certs setup?
docker run -d -p 80:80 -p 443:443 -v /etc/rancher/cert.pem:/etc/rancher/ssl/cert.pem -v /etc/rancher/key.pem:/etc/rancher/ssl/key.pem -v /etc/rancher/cacerts.pem:/etc/rancher/ssl/cacerts.pem -v /home/ubuntu/rancher/data:/var/lib/rancher --restart=unless-stopped --name rancher rancher/rancher
@govinda-attal Hmm, I didn't even change anything and it still doesn't work; neither bringing my own cert nor leaving everything unchanged helped.
@dgabrysch - sorry if there was any confusion. I bumped into a similar problem as you did, and nothing has helped us either.
@govinda-attal
I am having the same problem. Did you resolve this or did you find a workaround?
This is preventing me from getting started with Rancher.
@twillert
Bringing my own certs didn't help. In my case I have certs issued by another CA, but not a generally recognized one.
But when I install Rancher with the certs it generates by default (none explicitly provided), the clusters start nicely and Rancher is able to connect to and manage the cluster well.
So to get started, you may choose to install with no certs explicitly passed when you run the docker container, and it will work fine. But you may not like this solution, as those certs are self-signed certs generated by Rancher.
In my case, I managed to start Rancher nodes by not supplying my cacert.pem. Instead, I added the CA chain to the main cert.pem file. Also, watch out for empty lines; they shouldn't be present in either the key or the cert files.
Something like this:
$ docker run -d -p 80:80 -p 443:443 -v /root/cert.pem:/etc/rancher/ssl/cert.pem -v /root/key.pem:/etc/rancher/ssl/key.pem rancher/rancher:stable
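The "CA chain in cert.pem, no empty lines" advice above can be sketched as follows; the file names and PEM contents are made-up stand-ins for real certificate material:

```shell
# Throwaway directory; server.crt has a stray blank line, as often
# happens when certs are pasted together by hand.
cd "$(mktemp -d)"
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > chain.crt
# Append the CA chain to the main cert, as described above.
cat server.crt chain.crt > cert.pem
# Empty lines break parsing, so strip them out.
sed -i '/^$/d' cert.pem
```

The resulting cert.pem is what you would mount into the container at /etc/rancher/ssl/cert.pem.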
If there are no issues with the default certificates, usually the intermediates are missing from the configured certificate. All intermediates are needed when providing the certificate so the agent can validate it; checking in the browser is not a good test, as the browser has the intermediates built in and will validate the certificate by completing the chain on its own.
The comment mentioned earlier already provides steps to diagnose; showing the result/output of those steps helps in diagnosing the issue. (https://github.com/rancher/rancher/issues/12657#issuecomment-391362566)
I also created this script to perform some basic checks to begin with: https://gist.github.com/superseb/bcb10d1c7e222d871321aa80c181abf6
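Independent of that script, the "certificate chain is complete" check it performs can be reproduced locally with a throwaway CA. Everything below is generated on the fly and purely illustrative; no real certs or hostnames are involved:

```shell
cd "$(mktemp -d)"
# 1. Create a self-signed throwaway CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Throwaway Root CA" -days 1 2>/dev/null
# 2. Issue a server certificate signed by that CA (hostname is fictional).
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=rancher.example.com" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out server.pem -days 1 2>/dev/null
# 3. Verify the chain the way the agent effectively does; prints
#    "server.pem: OK" when the chain resolves against the CA file.
openssl verify -CAfile ca.pem server.pem
```

Running the same openssl verify against your real cert.pem and cacerts.pem is a quick offline test for a broken chain.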
Done that (replaced company-specific values with "internal", "our.domain.de", and so on).
DNS for rancher-dev.our.domain.de points to rancher-servername.our.domain.de.
xx.xx.xx.xx
depth=1 DC = internal, DC = internal, CN = internal Root CA
verify return:1
depth=0 C = DE, ST = internal, L = internal, O = internal, OU = internal, CN = *.our.domain.de
verify return:1
DONE
Certificate chain is complete
So the certificates look good, is the node that is being added able to reach the configured server-url?
What is in the logging of the rancher/rancher-agent container on the node that is trying to be added?
OK, I think I found the issue: a timeout occurs when trying to reach the external chart catalog.
I clicked on this (Disabled) and it's working:

There seems to be some timing issue when provisioning and checking the external chart repo
@superseb
Thank you for the help, but it didn't work. I ran your script:
./check-rancher.sh https://rancher.playground.everledger.io
DNS for rancher.playground.everledger.io is 54.79.56.90
CA checksum from https://rancher.playground.everledger.io/v3/settings/cacerts is e7be209264c40640dedfcbefb60fa25df66cdcaf75c1562ac8a103a46723607e
depth=2 C = UK, ST = London, L = London, O = Everledger Inc., OU = Everledger Root CA, CN = Everledger Root CA, emailAddress = [email protected]
verify return:1
depth=1 C = GB, ST = London, L = London, O = Everledger Inc., OU = Operations, CN = ops.everledger.io
verify return:1
depth=0 CN = rancher.playground.everledger.io
verify return:1
DONE
Certificate chain is complete
Attaching logs from the rancher-agent (master node) and the cacerts from the URL https://rancher.playground.everledger.io/v3/settings/cacerts
@superseb (I am the same person as @govinda-attal, sorry for the different handle.) I continue to get this error on the Rancher server:
2018/08/08 19:18:58 [INFO] cluster [c-78qrx] provisioning: [sidekick] Sidekick container already created on host [13.211.207.225]
2018/08/08 19:18:58 [INFO] cluster [c-78qrx] provisioning: [healthcheck] Start Healthcheck on service [kube-apiserver] on host [13.211.207.225]
2018/08/08 19:19:41 [INFO] 2018/08/08 19:19:41 http: TLS handshake error from 54.234.37.47:63666: remote error: tls: unknown certificate authority
2018/08/08 19:19:41 [INFO] 2018/08/08 19:19:41 http: TLS handshake error from 54.92.200.197:32504: remote error: tls: unknown certificate authority
2018/08/08 19:19:42 [INFO] 2018/08/08 19:19:42 http: TLS handshake error from 35.153.231.18:50498: remote error: tls: unknown certificate authority
2018/08/08 19:19:43 [INFO] 2018/08/08 19:19:43 http: TLS handshake error from 54.164.194.74:32654: remote error: tls: unknown certificate authority
2018/08/08 19:19:48 [ERROR] cluster [c-78qrx] provisioning: [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [13.211.207.225]: Get https://localhost:6443/healthz: can not build dialer to c-78qrx:m-cvd9n, log: I0808 19:12:08.409311 1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
2018/08/08 19:19:54 [INFO] 2018/08/08 19:19:54 http: TLS handshake error from 106.11.222.108:16816: remote error: tls: unknown certificate
Have attached the rancher-agent.log above. The Certificate chain is complete too.
The Rancher server is run with the following command:
docker run -d -p 80:80 -p 443:443 \
-v /home/ubuntu/rancher/certs/combined.pem:/etc/rancher/ssl/cert.pem \
-v /etc/rancher/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
-v /home/ubuntu/rancher/certs/key.pem:/etc/rancher/ssl/key.pem \
-v /home/ubuntu/rancher/data:/var/lib/rancher \
--restart=unless-stopped --name rancher rancher/rancher
You have a \r before every \n, can you remove that, and start a container with the correct certificate file and try again? I'm working to make this go away automatically but for now the certificate file has to be correct.
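One way to confirm and strip the stray \r characters, demonstrated here on a scratch file (on the server, substitute the real PEM path from the docker run command above, e.g. /home/ubuntu/rancher/certs/combined.pem):

```shell
# Scratch file with Windows-style CRLF line endings, standing in for the
# real certificate file.
printf -- '-----BEGIN CERTIFICATE-----\r\nMIIB...\r\n-----END CERTIFICATE-----\r\n' > /tmp/combined.pem

file /tmp/combined.pem   # reports "with CRLF line terminators" if \r is present

# Strip every \r and replace the file in place
tr -d '\r' < /tmp/combined.pem > /tmp/combined.unix.pem
mv /tmp/combined.unix.pem /tmp/combined.pem

file /tmp/combined.pem   # now plain ASCII text
```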
@superseb, I think I learned a lesson here. Sorry, it was my mistake in how I used the custom certs that were provided to me.
Had to correct the certs and configuration as below.
The command that worked for me uses a custom CA, as described on this page:
https://rancher.com/docs/rancher/v2.x/en/installation/custom-ca-root-certificate/
docker run -d -p 80:80 -p 443:443 \
-v /etc/rancher/combined.pem:/etc/rancher/ssl/cert.pem \
-v /etc/rancher/cacerts/EverledgerRootCA.pem:/etc/rancher/ssl/cacerts.pem \
-v /etc/rancher/key.pem:/etc/rancher/ssl/key.pem \
-v /home/ubuntu/rancher/data:/var/lib/rancher \
-v /etc/rancher/cacerts:/container/certs -e SSL_CERT_DIR="/container/certs" \
--restart=unless-stopped --name rancher rancher/rancher
Once I did that, it worked as a charm! Many thanks @superseb
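Before (re)starting the container, it can also help to sanity-check the certificate files being mounted. A sketch using a throwaway self-signed pair so the commands are copy-pasteable; on a real host, point them at combined.pem, key.pem and EverledgerRootCA.pem instead:

```shell
# Throwaway self-signed cert/key standing in for the real files.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example' \
  -keyout /tmp/key.pem -out /tmp/cert.pem -days 1 2>/dev/null

# Subject and issuer of the server certificate
openssl x509 -noout -subject -issuer -in /tmp/cert.pem

# The cert and key belong together only if their RSA moduli are identical
openssl x509 -noout -modulus -in /tmp/cert.pem | sha256sum
openssl rsa  -noout -modulus -in /tmp/key.pem  | sha256sum
```

A mismatch here (or a wrong CA in cacerts.pem) produces exactly the "unknown certificate authority" handshake errors quoted earlier in the thread.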
I'm trying to configure a Kubernetes cluster using rke. I have one node for the master and another for the worker. I'm running rke from my local machine, which has SSH key access to the nodes. I was able to configure the master using rke up with a basic cluster.yml, but it fails on the worker with the error below:
FATA[0204] [workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check https://localhost:10250/healthz for service [kubelet] on host [XXXX.XXXX]: Get https://localhost:10250/healthz: Unable to access the service on localhost:10250. The service might be still starting up. Error: ssh: rejected: connect failed (Connection refused), log: + umount /var/lib/docker/overlay2/c8e1d2b4db595eae229930fc8f3ffe0df826627fb9a3f95d2198c3a3a8b/merged/host/usr/local/doc]
My cluster.yml is as below:
nodes:
- address: master.my-domain.com
user: my-user
role:
- controlplane
- etcd
- address: worker.my-domain.com
user: my-user
role:
- worker
I just ran rke up on the above file. rke is the latest version, 0.2.2.
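The failing healthcheck can be reproduced by hand from the worker node itself (a sketch; run it on the worker):

```shell
# 10250 is the kubelet's TLS port and it requires client certificates,
# so an HTTP 401/403 still proves the port is listening; 000 (no response)
# matches the "connection refused" in the rke error above.
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:10250/healthz || true

# If nothing is listening, check whether the kubelet container started at all
docker ps -a --filter name=kubelet || true
docker logs kubelet --tail 20 || true
```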
Observations and resolution: As per advice from David Noland in the Rancher Slack, I tried docker restart and it didn't help. Then I restarted the host and it worked like a charm.
I can see the nodes registered as workers when I run kubectl get nodes. Hope this helps someone.
Cool. For me, as superseb stated, the following did the job:
I am also getting the same issue with the GA release of v2.0.0
Any resolution to this? I had a previous setup in rc1 which did work.