Minikube: Can't expose services.

Created on 19 Mar 2018 · 5 comments · Source: kubernetes/minikube

I'm running minikube on Windows 10 behind a corporate proxy.

I'm able to create deployments fine, but I can't expose them.

For example:

$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

$ minikube version
minikube version: v0.25.0

$ minikube start --vm-driver virtualbox --docker-env http_proxy=$http_proxy --docker-env HTTP_PROXY=$http_proxy --docker-env https_proxy=$http_proxy --docker-env HTTPS_PROXY=$http_proxy
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

$ export no_proxy=$no_proxy,$(minikube ip)
$ export NO_PROXY=$no_proxy
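For reference, the no_proxy setup above can be sketched a little more defensively like this (assuming the minikube VM sits on the usual VirtualBox host-only network, 192.168.99.0/24; the fallback IP is hypothetical and only used when minikube is not on the PATH):

```shell
# Exclude localhost, the VirtualBox host-only range, and the VM IP from the
# proxy so kubectl and curl reach the cluster directly instead of via proxy.
MINIKUBE_IP="$(minikube ip 2>/dev/null || echo 192.168.99.114)"
export NO_PROXY="localhost,127.0.0.1,192.168.99.0/24,$MINIKUBE_IP"
export no_proxy="$NO_PROXY"
```

Setting both the uppercase and lowercase variables matters because different tools (curl, kubectl, docker) read different spellings.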

$ minikube service list
|-----------|------------|--------------|
| NAMESPACE |    NAME    |     URL      |
|-----------|------------|--------------|
| default   | kubernetes | No node port |
|-----------|------------|--------------|


$ eval $(minikube docker-env)

$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2m


$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment "hello-minikube" created

$ kubectl logs hello-minikube-64698d6ccf-hxx5j
Error from server (BadRequest): container "hello-minikube" in pod "hello-minikube-64698d6ccf-hxx5j" is waiting to start: ContainerCreating
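When a pod sits in ContainerCreating, the pod events usually say why (behind a proxy it is often an image pull failure). A sketch, using the pod name from the transcript above; guarded so it is a no-op on a machine without kubectl:

```shell
# Inspect why the pod has not started yet; the events section lists
# image pull errors, network plugin failures, and similar causes.
POD=hello-minikube-64698d6ccf-hxx5j
if command -v kubectl >/dev/null 2>&1; then
  kubectl describe pod "$POD"
  kubectl get events --sort-by=.metadata.creationTimestamp
else
  echo "kubectl not available; commands shown for reference"
fi
```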

## Important: the corporate proxy is on the same network as the default docker0 (i.e. 172.17.x.x), so I changed docker0 to 172.18.x.x.

$ minikube ssh
$ sudo ifconfig docker0 172.18.0.1 netmask 255.255.0.0
$ exit

$ kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
hello-minikube-64698d6ccf-hxx5j   1/1       Running   0          4m

$ kubectl get deploy
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-minikube   1         1         1            1           6m

$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed

$ minikube service list
|-------------|----------------------|-----------------------------|
|  NAMESPACE  |         NAME         |             URL             |
|-------------|----------------------|-----------------------------|
| default     | hello-minikube       | http://192.168.99.114:30623 |
| default     | kubernetes           | No node port                |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.114:30000 |
|-------------|----------------------|-----------------------------|


$ curl -v http://192.168.99.114:30623
* Rebuilt URL to: http://192.168.99.114:30623/
* timeout on name lookup is not supported
*   Trying 192.168.99.114...
* TCP_NODELAY set
* connect to 192.168.99.114 port 30623 failed: Timed out
* Failed to connect to 192.168.99.114 port 30623: Timed out
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.114 port 30623: Timed out

Any suggestions?
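One way to narrow down a NodePort timeout like the one above is to test from inside the VM first; if the service answers there but not from the host, the problem is host-side (proxy settings or the Windows firewall) rather than Kubernetes. A sketch using the port from the transcript, guarded so it is a no-op without minikube:

```shell
# Compare reachability from inside the VM vs. from the host, and bypass
# any host proxy explicitly with --noproxy on the second attempt.
NODE_PORT=30623
if command -v minikube >/dev/null 2>&1; then
  minikube ssh "curl -sS -m 5 http://localhost:$NODE_PORT"
  curl --noproxy '*' -m 5 "http://$(minikube ip):$NODE_PORT"
else
  echo "minikube not available; commands shown for reference"
fi
```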


All 5 comments

Oh yeah, I'm getting a bunch of errors in the minikube logs:

Mar 19 04:09:39 minikube localkube[3059]: I0319 04:09:39.541409    3059 kuberuntime_manager.go:514] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-dlhzj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 19 04:09:39 minikube localkube[3059]: I0319 04:09:39.541558    3059 kuberuntime_manager.go:758] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-77d8b98585-jrkrx_kube-system(6d690a84-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:39 minikube localkube[3059]: I0319 04:09:39.541695    3059 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-77d8b98585-jrkrx_kube-system(6d690a84-2b27-11e8-9e9e-08002783a0e8)
Mar 19 04:09:39 minikube localkube[3059]: E0319 04:09:39.541738    3059 pod_workers.go:186] Error syncing pod 6d690a84-2b27-11e8-9e9e-08002783a0e8 ("kubernetes-dashboard-77d8b98585-jrkrx_kube-system(6d690a84-2b27-11e8-9e9e-08002783a0e8)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-77d8b98585-jrkrx_kube-system(6d690a84-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.544060    3059 kuberuntime_manager.go:514] Container {Name:kubedns Image:k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.5 Command:[] Args:[--domain=cluster.local. --dns-port=10053 --config-map=kube-dns --v=2] WorkingDir: Ports:[{Name:dns-local HostPort:0 ContainerPort:10053 Protocol:UDP HostIP:} {Name:dns-tcp-local HostPort:0 ContainerPort:10053 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:10055 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:PROMETHEUS_PORT Value:10055 ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/kube-dns-config SubPath: MountPropagation:<nil>} {Name:default-token-dlhzj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:8081,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.545015    3059 kuberuntime_manager.go:514] Container {Name:dnsmasq Image:k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.5 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath: MountPropagation:<nil>} {Name:default-token-dlhzj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.545332    3059 kuberuntime_manager.go:758] checking backoff for container "kubedns" in pod "kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.545513    3059 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kubedns pod=kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.545530    3059 kuberuntime_manager.go:758] checking backoff for container "dnsmasq" in pod "kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:52 minikube localkube[3059]: I0319 04:09:52.545645    3059 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)
Mar 19 04:09:52 minikube localkube[3059]: E0319 04:09:52.545731    3059 pod_workers.go:186] Error syncing pod 6d83a638-2b27-11e8-9e9e-08002783a0e8 ("kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)"), skipping: [failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:52 minikube localkube[3059]: , failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-54cccfbdf8-xs98g_kube-system(6d83a638-2b27-11e8-9e9e-08002783a0e8)"
Mar 19 04:09:52 minikube localkube[3059]: ]

OK, it looks like that ifconfig hack is what was causing the problem.

If I switch it back to 172.17.0.1 netmask 255.255.0.0, my service works fine.

The reason I need the ifconfig hack is that otherwise my pods can't access the internet (to pull external images).

Any suggestions for how I can resolve this another way?

For now I'm just using

$ minikube ssh
$ sudo ifconfig docker0 172.17.0.1 netmask 255.255.255.0
$ exit 

It seems to be working fine.

The alternative is to use the --docker-opt bip=172.18.0.1/16 flag when starting minikube.
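Spelled out, that flag-based alternative might look like the following start command (a sketch; the proxy variables are assumed to already be exported, and the subnet is the one discussed in this thread). Setting bip at start time avoids patching docker0 by hand after every boot, and is guarded here so it is a no-op without minikube:

```shell
# Configure the Docker bridge subnet once, at VM creation time, instead of
# re-running ifconfig inside the VM after each boot.
BIP=172.18.0.1/16
if command -v minikube >/dev/null 2>&1; then
  minikube start --vm-driver virtualbox \
    --docker-env HTTP_PROXY="$http_proxy" \
    --docker-env HTTPS_PROXY="$https_proxy" \
    --docker-opt bip="$BIP"
else
  echo "minikube not available; would run: minikube start --docker-opt bip=$BIP"
fi
```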

I am stuck exposing a service. Let's take your service as an example.

minikube returned http://192.168.99.114:30623 as the URL of your service after exposing it as a NodePort.

On the internet, I see everyone getting 127.0.0.1: after exposing. Is there a problem in the new version of minikube, or is it on purpose? What should I do to expose my service so it can be accessed from an external network?
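A note on the 127.0.0.1 URLs (hedged, since this thread predates that behaviour): on newer minikube versions with the Docker driver, "minikube service" deliberately opens a tunnel and prints a 127.0.0.1 URL, because the container's NodePort IP is not directly routable from the host. To reach a service from other machines, one common workaround is a port-forward bound to all interfaces, sketched here with the service from this thread and guarded so it is a no-op without a reachable cluster:

```shell
# Bind a local forward on all interfaces so other hosts can reach the
# service via this machine's IP on port 8080.
SVC=hello-minikube
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl port-forward --address 0.0.0.0 "service/$SVC" 8080:8080
else
  echo "no reachable cluster; command shown for reference"
fi
```

Note that kubectl port-forward runs in the foreground until interrupted.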
