Minikube: Network connectivity when using Hyperkit and Cisco VPN

Created on 15 Dec 2017 · 7 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Please provide the following details:

Environment:

Minikube version: v0.24.1

  • OS: OSX 10.12.6
  • VM Driver: hyperkit
  • ISO version: minikube-v0.23.6.iso
  • Install tools:
  • Others:

What happened:

  • Started minikube with minikube start --vm-driver hyperkit
  • VM Started successfully
  • Tried running the echoserver image: kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
  • Resulted in the pod having a status of ErrImagePull

What you expected to happen:

  • Expected the echoserver image to be pulled successfully and the pod to start.

How to reproduce it (as minimally and precisely as possible):

  • minikube start --vm-driver hyperkit
  • kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
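
If you reproduce this, the failure shows up in the pod status and events; the pod name below is taken from the describe output later in this report, so yours will differ:

$ kubectl get pods                                        # STATUS cycles through ErrImagePull / ImagePullBackOff
$ kubectl describe pod hello-minikube-57889c865c-xmcwk    # Events show the failed pull from gcr.io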

Output of minikube logs (if applicable):

Dec 15 15:09:30 minikube localkube[3125]: I1215 15:09:30.392650    3125 kuberuntime_manager.go:499] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-w69xz ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 15 15:09:30 minikube localkube[3125]: E1215 15:09:30.395437    3125 pod_workers.go:182] Error syncing pod 7cddda3c-e1a6-11e7-ad06-76a6e6cee7d0 ("kubernetes-dashboard-pmj9n_kube-system(7cddda3c-e1a6-11e7-ad06-76a6e6cee7d0)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0\""
Dec 15 15:09:34 minikube localkube[3125]: I1215 15:09:34.391075    3125 kuberuntime_manager.go:499] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-w69xz ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 15 15:09:34 minikube localkube[3125]: E1215 15:09:34.393251    3125 pod_workers.go:182] Error syncing pod 7c88b670-e1a6-11e7-ad06-76a6e6cee7d0 ("storage-provisioner_kube-system(7c88b670-e1a6-11e7-ad06-76a6e6cee7d0)"), skipping: failed to "StartContainer" for "storage-provisioner" with ImagePullBackOff: "Back-off pulling image \"gcr.io/k8s-minikube/storage-provisioner:v1.8.1\""

Output of kubectl describe pod xxxx:

Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Normal   Scheduled              7m                default-scheduler  Successfully assigned hello-minikube-57889c865c-xmcwk to minikube
  Normal   SuccessfulMountVolume  7m                kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-9rhlh"
  Normal   Pulling                5m (x4 over 7m)   kubelet, minikube  pulling image "gcr.io/google_containers/echoserver:1.4"
  Warning  Failed                 4m (x4 over 6m)   kubelet, minikube  Failed to pull image "gcr.io/google_containers/echoserver:1.4": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff                4m (x6 over 6m)   kubelet, minikube  Back-off pulling image "gcr.io/google_containers/echoserver:1.4"
  Warning  FailedSync             2m (x19 over 6m)  kubelet, minikube  Error syncing pod

Anything else we need to know:

Since the image pull was failing, I tested connectivity from within the minikube VM using the following (a couple of additional checks are sketched after the ping output):

  • minikube ssh
  • ping -c 5 google.com

Output:

$ ping -c 5 google.com
PING google.com (172.217.6.46): 56 data bytes

--- google.com ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
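
Besides ping, a couple of checks from inside the VM can help separate a DNS failure from a routing failure. This is only a sketch; it assumes nslookup and curl are available in the minikube ISO:

$ minikube ssh
$ nslookup gcr.io              # does name resolution work at all?
$ curl -v https://gcr.io/v2/   # the same endpoint the image pull times out against
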
Labels: hyperkit, lifecycle/rotten

All 7 comments

Are you on some kind of VPN connection? My Cisco AnyConnect causes the whole minikube host to lose external network connectivity outside the node or cluster.

Thanks for the pointer @ckuai

I stopped my VPN software, Cisco AnyConnect, removed the old minikube VM and folder:

  • minikube delete
  • rm -rf ~/.minikube

I started a new instance and was able to run the echoserver image.

  • minikube start --vm-driver hyperkit

minikube SSH output:

minikube ssh

                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ping -c 5 google.com
PING google.com (64.233.185.101): 56 data bytes
64 bytes from 64.233.185.101: seq=0 ttl=38 time=143.656 ms
64 bytes from 64.233.185.101: seq=1 ttl=38 time=65.516 ms
64 bytes from 64.233.185.101: seq=2 ttl=38 time=122.702 ms
64 bytes from 64.233.185.101: seq=3 ttl=38 time=40.633 ms
64 bytes from 64.233.185.101: seq=4 ttl=38 time=44.363 ms

--- google.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 40.633/83.374/143.656 ms

Echoserver output:

kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080

kubectl get pods

NAME                              READY     STATUS    RESTARTS   AGE
hello-minikube-57889c865c-lf6jv   1/1       Running   0          17s
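
As an extra sanity check, the deployment can be exposed and hit from the host. The expose step is not part of the original report, so treat it as illustrative only:

$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
$ curl $(minikube service hello-minikube --url)    # echoes the request back if the pod is serving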

Since docker-for-mac (which uses hyperkit) can connect while my VPN software is running, is there a setting I need to include so that minikube can do the same?
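
One way to gather data for that is to capture the host routing table with the VPN connected and disconnected and diff the results; the hyperkit bridge usually shows up as bridge100, but the interface name varies by machine and is an assumption here:

$ netstat -rn > routes-vpn-on.txt              # with Cisco AnyConnect connected
$ netstat -rn > routes-vpn-off.txt             # after disconnecting the VPN
$ diff routes-vpn-on.txt routes-vpn-off.txt    # look for VPN routes that shadow the bridge100 subnet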

I changed the title of the issue to reflect my findings in the previous comment.

I was facing the same issue. It basically renders minikube useless on OSX for me.

On a side note, on the Windows 10 1709 update, Hyper-V comes with a built-in "Default Switch". If you use it with minikube start, everything works in minikube: all host VPNs (I have two, Cisco AnyConnect and ArraySSL VPN) work inside minikube.

I came across this solution on OSX, but haven't personally tried it yet.
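
For reference, pointing minikube at that switch should look roughly like the following; the --hyperv-virtual-switch flag name is taken from minikube's Hyper-V documentation of that era, so verify it against your version:

minikube start --vm-driver hyperv --hyperv-virtual-switch "Default Switch"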

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
