Environment:
Minikube version: v0.23.0
OS: Ubuntu 14.04.4 LTS
VM driver: none
What happened:
I'm not able to run my cluster and I'm getting this error: x509: certificate is valid for 127.0.0.1, 10.0.0.1, not 10.96.0.1
Details:
I'm trying to run minikube without a VM (minikube start --vm-driver none),
then I tried to check the pods:
$ kubectl -n kube-system get po
NAME                              READY     STATUS              RESTARTS   AGE
default-http-backend-gxgpr        1/1       Running             0          9m
kube-addon-manager-ip-10-0-0-42   1/1       Running             9          9m
kube-dns-86f6f55dd5-pr2wk         2/3       CrashLoopBackOff    9          9m
kubernetes-dashboard-hk8gv        0/1       CrashLoopBackOff    6          9m
nginx-ingress-controller-qc7k9    0/1       CrashLoopBackOff    6          9m
registry-creds-6z7w6              0/1       ContainerCreating   0          9m
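Several of the system pods are crash-looping. Besides the ingress log shown below, running kubectl describe on one of the crashing pods also surfaces the back-off events (standard kubectl; pod name taken from the listing above):
$ kubectl -n kube-system describe pod kube-dns-86f6f55dd5-pr2wk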
In the ingress pod log, I'm getting this error:
$ kubectl -n kube-system logs -f nginx-ingress-controller-w6nb7
I0103 13:49:52.496791 7 launch.go:113] &{NGINX 0.9.0-beta.15 git-a3e86f2 https://github.com/kubernetes/ingress}
I0103 13:49:52.496820 7 launch.go:116] Watching for ingress class: nginx
I0103 13:49:52.496991 7 launch.go:291] Creating API client for https://10.96.0.1:443
F0103 13:49:52.502849 7 launch.go:318] Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration). Reason: Get https://10.96.0.1:443/version: x509: certificate is valid for 127.0.0.1, 10.0.0.1, not 10.96.0.1
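For reference, the IPs the certificate actually covers are listed in its Subject Alternative Names. A quick way to check (assuming minikube's default certificate location, ~/.minikube/apiserver.crt; adjust the path if your setup differs):
$ openssl x509 -in ~/.minikube/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
The IP Address entries should include the service ClusterIP; in this failure mode 10.96.0.1 is missing.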
What you expected to happen:
No crashes on pods.
How to reproduce it (as minimally and precisely as possible):
minikube start --vm-driver none
kubectl -n kube-system get po
Output of minikube logs (if applicable):
0103 14:11:27.404523 7230 kuberuntime_manager.go:499] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-sxskw ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
I0103 14:11:27.404666 7230 kuberuntime_manager.go:738] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-hk8gv_kube-system(ee35b9f7-f08a-11e7-bb3f-023a7998d6c9)"
I0103 14:11:27.404782 7230 kuberuntime_manager.go:748] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-hk8gv_kube-system(ee35b9f7-f08a-11e7-bb3f-023a7998d6c9)
E0103 14:11:27.404816 7230 pod_workers.go:182] Error syncing pod ee35b9f7-f08a-11e7-bb3f-023a7998d6c9 ("kubernetes-dashboard-hk8gv_kube-system(ee35b9f7-f08a-11e7-bb3f-023a7998d6c9)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-hk8gv_kube-system(ee35b9f7-f08a-11e7-bb3f-023a7998d6c9)"
2018-01-03 14:11:27.653092 I | http: TLS handshake error from 172.17.0.4:51947: remote error: tls: bad certificate
2018-01-03 14:11:27.654915 I | http: TLS handshake error from 172.17.0.4:51948: remote error: tls: bad certificate
2018-01-03 14:11:27.655849 I | http: TLS handshake error from 172.17.0.4:51949: remote error: tls: bad certificate
I0103 14:11:28.401750 7230 kuberuntime_manager.go:499] Container {Name:nginx-ingress-controller Image:gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 Command:[] Args:[/nginx-ingress-controller --default-backend-service=$(POD_NAMESPACE)/default-http-backend --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf] WorkingDir: Ports:[{Name: HostPort:80 ContainerPort:80 Protocol:TCP HostIP:} {Name: HostPort:443 ContainerPort:443 Protocol:TCP HostIP:} {Name: HostPort:18080 ContainerPort:18080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POD_NAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {Name:POD_NAMESPACE Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-sxskw ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
I0103 14:11:28.401878 7230 kuberuntime_manager.go:738] checking backoff for container "nginx-ingress-controller" in pod "nginx-ingress-controller-w6nb7_kube-system(d2853f3f-f08c-11e7-ad0a-023a7998d6c9)"
I0103 14:11:28.402016 7230 kuberuntime_manager.go:748] Back-off 40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-w6nb7_kube-system(d2853f3f-f08c-11e7-ad0a-023a7998d6c9)
E0103 14:11:28.402048 7230 pod_workers.go:182] Error syncing pod d2853f3f-f08c-11e7-ad0a-023a7998d6c9 ("nginx-ingress-controller-w6nb7_kube-system(d2853f3f-f08c-11e7-ad0a-023a7998d6c9)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-w6nb7_kube-system(d2853f3f-f08c-11e7-ad0a-023a7998d6c9)"
2018-01-03 14:11:28.657785 I | http: TLS handshake error from 172.17.0.4:51950: remote error: tls: bad certificate
2018-01-03 14:11:28.661267 I | http: TLS handshake error from 172.17.0.4:51952: remote error: tls: bad certificate
2018-01-03 14:11:28.661422 I | http: TLS handshake error from 172.17.0.4:51951: remote error: tls: bad certificate
[... 30 more identical TLS handshake error lines from 172.17.0.4 omitted ...]
==> /var/lib/localkube/localkube.out <==
localkube host ip address: 10.0.0.42
Starting apiserver...
Waiting for apiserver to be healthy...
apiserver is ready!
Starting controller-manager...
Waiting for controller-manager to be healthy...
controller-manager is ready!
Starting scheduler...
Waiting for scheduler to be healthy...
scheduler is ready!
Starting kubelet...
Waiting for kubelet to be healthy...
kubelet is ready!
Starting proxy...
Waiting for proxy to be healthy...
proxy is ready!
Anything else do we need to know:
10.96.0.1 is the kubernetes service ClusterIP:
$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19m
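So the in-cluster address that pods use to reach the apiserver is exactly the IP missing from the certificate's SANs. To pull just that IP for comparison against the openssl output above (plain kubectl, nothing minikube-specific):
$ kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
10.96.0.1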
You should upgrade to the latest version of minikube, 0.24.1
Hi @r2d4, I did the upgrade and it worked for me, thanks!
I'm using the latest version, 0.35.0, and I get the same error whenever I connect from a different network (work / home), even after fixing the context with minikube update-context.
UPDATE: found a solution: kubeadm init phase certs all
minikube stop && minikube delete && minikube start
solved the issue
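To summarize the two fixes reported in this thread (the kubeadm command assumes the none driver with kubeadm available on the host; the kubeadm init phase syntax needs kubeadm 1.13+, older versions exposed it under kubeadm alpha phase):
$ sudo kubeadm init phase certs all                  # regenerate the control-plane certificates in place
$ minikube stop && minikube delete && minikube start # or recreate the cluster from scratch (loses cluster state)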