Is this a BUG REPORT or FEATURE REQUEST? (choose one): Maybe both? Either a documentation bug or a feature request to handle restrictive firewalls?
Please provide the following details:
Environment:
minikube version: v0.28.2
OS:
PRETTY_NAME="Debian GNU/Linux buster/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
VM driver:
"DriverName": "virtualbox",
ISO version:
"Boot2DockerURL": "file:///home/brian/.minikube/cache/iso/minikube-v0.28.1.iso",
What happened: When running minikube start, minikube provisions the VM and gets partway through starting cluster components, then fails with:
Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
or (on subsequent attempts to start minikube):
error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.99.100:8443: i/o timeout]
What you expected to happen: minikube would successfully set up and start a cluster
How to reproduce it (as minimally and precisely as possible):
Set up some fairly aggressive iptables rules on the host. The key bit is that the INPUT chain's default policy is DROP (IMO a reasonable thing to do in the name of security), with specific rules in the chain still allowing normal traffic through. Something like this will probably do the trick (untested, but I think it's a 'safe' subset of my rules):
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i $NAME_OF_YOUR_NETWORK_INTERFACE -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
iptables -A INPUT -m pkttype --pkt-type broadcast -j ACCEPT
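With rules like these in place, a quick way to confirm the symptom (just a sketch, assuming the default minikube VM IP of 192.168.99.100) is to hit the apiserver port directly from the host: any HTTP response, even a 401/403, means the connection works, while a timeout means the reply packets are being dropped by the INPUT chain.
# From the host: should time out while the DROP policy is in effect,
# because the VM's replies arrive on the host-only interface and never
# match the conntrack rule bound to the WiFi interface above.
curl -k --connect-timeout 5 https://192.168.99.100:8443/version
# The INPUT chain's policy packet counters should climb while the request hangs.
sudo iptables -L INPUT -v -n | head -n 1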
Output of minikube logs (if applicable):
$ minikube start --vm-driver virtualbox --logtostderr --loglevel 0
W0822 17:53:36.374904 21879 root.go:148] Error reading config file at /home/brian/.minikube/config/config.json: open /home/brian/.minikube/config/config.json: no such file or directory
I0822 17:53:36.375018 21879 notify.go:121] Checking for updates...
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
I0822 17:53:36.507424 21879 cluster.go:69] Machine does not exist... provisioning new machine
I0822 17:53:36.507437 21879 cluster.go:70] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.28.1.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:virtualbox HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false}
I0822 17:53:36.507512 21879 downloader.go:56] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v0.28.1.iso
I0822 17:54:34.943926 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/ca.pem
I0822 17:54:35.058237 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0822 17:54:35.130934 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server.pem
I0822 17:54:35.183997 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0822 17:54:35.260552 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server-key.pem
I0822 17:54:35.315520 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
Getting VM IP address...
Moving files into cluster...
I0822 17:54:37.909026 21879 kubeadm.go:208] Container runtime flag provided with no value, using defaults.
I0822 17:54:37.909107 21879 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubeadm
I0822 17:54:37.909133 21879 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubelet
I0822 17:54:38.007898 21879 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0822 17:54:38.007898 21879 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0822 17:54:41.575516 21879 ssh_runner.go:57] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0822 17:54:41.623957 21879 ssh_runner.go:57] Run: sudo mkdir -p /lib/systemd/system
I0822 17:54:41.714413 21879 ssh_runner.go:57] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0822 17:54:41.775677 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0822 17:54:41.857335 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/kubeadm.yaml
I0822 17:54:41.923684 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib
I0822 17:54:41.995464 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0822 17:54:42.043847 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.060140 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0822 17:54:42.107383 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.159549 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml
I0822 17:54:42.208005 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/manifests/
I0822 17:54:42.286221 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-dp.yaml
I0822 17:54:42.343484 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.463757 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-svc.yaml
I0822 17:54:42.523774 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.655665 21879 ssh_runner.go:57] Run:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
Setting up certs...
I0822 17:54:42.883379 21879 certs.go:47] Setting up certificates for IP: 192.168.99.100
I0822 17:54:42.894847 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.crt
I0822 17:54:42.943360 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:42.997602 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.key
I0822 17:54:43.047369 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.100806 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.crt
I0822 17:54:43.147333 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.199497 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.key
I0822 17:54:43.243371 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.299688 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.crt
I0822 17:54:43.351535 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.471891 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.key
I0822 17:54:43.539445 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.594391 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.crt
I0822 17:54:43.647541 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.732779 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.key
I0822 17:54:43.783377 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.837781 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/kubeconfig
I0822 17:54:43.887435 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube
Connecting to cluster...
Setting up kubeconfig...
I0822 17:54:44.135433 21879 config.go:101] Using kubeconfig: /home/brian/.kube/config
Starting cluster components...
I0822 17:54:44.136803 21879 ssh_runner.go:80] Run with output:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
E0822 17:57:48.615897 21879 start.go:300] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
Anything else we need to know:
Resetting the default policy for the INPUT chain to ACCEPT (iptables -P INPUT ACCEPT) causes the problem to go away and the cluster to start successfully.
I'm not sure how feasible this is, but it would be great if minikube could add the specific ACCEPT rules it needs to the INPUT chain so that everything works with a default policy of DROP. At the very least, updating the Linux-specific docs to call out this issue and the workaround would be great.
I think I could also make my rules a little more generic by not restricting the conntrack rule to my WiFi interface, but it'd be great not to have to do that; a sketch of the kind of targeted rule I have in mind follows.
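For reference, something like this is what I mean (untested as written; it assumes the VirtualBox host-only interface is named vboxnet0 and that minikube is using the default 192.168.99.0/24 host-only network):
# Let replies from the minikube VM back in via the host-only interface
# (assumed name: vboxnet0), instead of relaxing the whole INPUT policy.
iptables -A INPUT -i vboxnet0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Or, more permissively, accept everything arriving on that interface:
# iptables -A INPUT -i vboxnet0 -j ACCEPT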
+1 to your idea of documenting what kind of iptables configuration is required to make this work.
This error is reported in multiple GitHub issues. I'm not sure whether the solution I posted in #3022 is valid for your particular issue, but I'll leave it here for your review.
https://github.com/kubernetes/minikube/issues/3022#issuecomment-424145410
I saw the same error when starting minikube v0.30.0 (which runs Kubernetes v1.10.0) on macOS Mojave 10.14.1 with VirtualBox 5.2.22:
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1201 00:55:03.176754 56344 start.go:297] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
Per @iuliancorcoja's comment in https://github.com/kubernetes/minikube/issues/3022#issuecomment-425495721, deleting vboxnet0 in the VirtualBox GUI under Global Tools --> Host Network Manager resolved this issue for me.
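If you'd rather not use the GUI, the same cleanup can presumably be done from the command line with VBoxManage (this assumes the stale host-only interface really is vboxnet0; minikube recreates it on the next start):
# List existing host-only interfaces and their IP configuration.
VBoxManage list hostonlyifs
# Remove the stale interface (assumed name: vboxnet0); a fresh one is
# created by the next minikube start.
VBoxManage hostonlyif remove vboxnet0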
This is now documented at https://github.com/kubernetes/minikube/blob/master/docs/networking.md