Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Environment:
Environment variables:
Minikube version (use minikube version): v0.22.2
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): [none?]
What happened:
After starting minikube (sudo -E minikube start --memory 8000 --cpus 2 --vm-driver=none), kube-dns fails to start:
kube-system po/kube-dns-910330662-qb464 1/3 CrashLoopBackOff 12 15m
What you expected to happen:
kube-dns starts
Output of kubectl logs kube-dns-910330662-qb464 --namespace=kube-system -c kubedns:
I1001 14:32:09.527073 141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E1001 14:32:34.027299 141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: Get https://10.0.0.1:443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-dns&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
I1001 14:32:34.527053 141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I1001 14:32:35.027022 141 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E1001 14:32:35.031767 141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E1001 14:32:35.031827 141 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
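The timeouts above mean the kubedns container cannot open a TCP connection to the apiserver through the kubernetes service ClusterIP (10.0.0.1 in these logs). With --vm-driver=none the host is the node, so a few host-side checks can narrow this down (a diagnostic sketch; the service IP is taken from the logs above and may differ on other clusters):

kubectl get svc kubernetes                             # confirm the ClusterIP kube-dns is dialing
curl -k https://10.0.0.1:443/version                   # an immediate HTTP/TLS error still proves connectivity; a hang reproduces the timeout
sudo iptables -t nat -L KUBE-SERVICES -n | grep 443    # check that kube-proxy programmed a NAT rule for the service (assumes iptables proxy mode)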
Anything else we need to know:
Output of kubectl get all --all-namespaces:
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 0 15m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-910330662 1 1 0 15m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 0 15m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system po/kube-addon-manager-ip-172-31-43-108 1/1 Running 6 15m
kube-system po/kube-dns-910330662-qb464 1/3 CrashLoopBackOff 12 15m
kube-system po/kubernetes-dashboard-qmgwx 0/1 CrashLoopBackOff 7 15m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rc/kubernetes-dashboard 1 1 0 15m
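As an aside, the kube-dns pod of this era runs three containers (kubedns, dnsmasq and sidecar), so 1/3 ready means only one of them is up. The logs of the other two can be pulled the same way as above, e.g.:

kubectl logs kube-dns-910330662-qb464 --namespace=kube-system -c dnsmasq
kubectl logs kube-dns-910330662-qb464 --namespace=kube-system -c sidecar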
The same problem exists locally on my Mac. Same minikube version.
@gregd72002 not sure but could this be the same issue as https://github.com/kubernetes/minikube/issues/2027?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Problem still persists.
minikube version: 0.24.0
Kubernetes version: Client: 1.9.3, Server: 1.8.0
OS: Ubuntu 17.10
VM-Driver: none
Find minikube logs here.
/remove-lifecycle rotten
I have the same problem.
I ran minikube with this command: minikube start --extra-config=apiserver.Authorization.Mode=RBAC
kube-system kube-addon-manager-minikube 1/1 Running 0 4m
kube-system kube-dns-54cccfbdf8-m7wdr 2/3 CrashLoopBackOff 5 4m
kube-system kubernetes-dashboard-77d8b98585-bf6tw 0/1 CrashLoopBackOff 5 4m
kube-system storage-provisioner 1/1 Running 0 4m
minikube version: 0.25.2
Kubernetes version: 1.9.4
OS: Mac OS X High Sierra v10.13.3
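Starting the apiserver with --extra-config=apiserver.Authorization.Mode=RBAC is known to crash-loop the add-on pods when their service accounts lack permissions. A commonly suggested workaround for minikube at the time (not confirmed in this thread, and deliberately broad since it grants cluster-admin to the kube-system default service account) was:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default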
The very same thing happens to me. I have a slightly more recent EC2 image:
Linux ip-172-31-34-27 4.9.77-31.58.amzn1.x86_64 #1 SMP Thu Jan 18 22:15:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Both the dashboard pod and the storage-provisioner also fail as a consequence (they reach Running but eventually crash).
Dashboard:
2018/04/04 22:59:01 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
Storage provisioner:
F0404 22:59:11.144046 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Any ideas? @gregd72002 have you figured out the problem?
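For what it's worth, all three failing pods are timing out against the same service ClusterIP (10.96.0.1). On Linux hosts a frequent cause of pod-to-service timeouts is bridged container traffic bypassing iptables, which is worth ruling out (a sketch, assuming kube-proxy runs in iptables mode):

sysctl net.bridge.bridge-nf-call-iptables              # should print 1 so bridged pod traffic hits the NAT rules
sudo modprobe br_netfilter                             # load the module first if the sysctl key is missing
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1    # enable it for the running system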
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
We had success when disabling selinux.
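The comment above doesn't spell out the steps; on a Red Hat style host, disabling (or relaxing) SELinux typically looks like this (host-specific; Ubuntu, reported earlier in the thread, ships AppArmor instead):

sudo setenforce 0                                      # switch SELinux to permissive for the running system
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persist the change across reboots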
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.