Minikube: How to access service running on host OS from inside minikube VM?

Created on 17 Apr 2018 · 36 comments · Source: kubernetes/minikube

Is it possible to access services running on the host from a pod created by minikube with the hyperkit/xhyve driver? I am especially interested in minikube with the hyperkit/xhyve driver on macOS.
Something like

Labels: area/networking, help wanted, kind/documentation, lifecycle/frozen, priority/backlog, 2019q2

Most helpful comment

Would a minikube addon be an idea?
Which, when enabled, would introduce a service along the lines of:

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
spec:
  type: ExternalName
  externalName: 192.168.99.1

This is pretty trivial, of course, but having it as an addon would have the added value of being an easy-to-document step, instead of describing how to look up the host IP, set up the service, etc.

All 36 comments

Are there any news on this? I need something similar to be able to create a small development proxy server that we currently use with docker-compose to start some microservices locally while most of the infrastructure runs within Kubernetes.

Docker's Kubernetes implementation has support for this.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Agreed, we should make this easy to do by default across platforms. I'm a little iffy on making it the default due to the possibility of lateral movement from a compromised container, but we should at least make it an easy option.

https://medium.com/tarkalabs/proxying-services-into-minikube-8355db0065fd does a pretty good job of explaining one method of forwarding ports from the host OS into minikube. We should do our own, as well.

@tstromberg we are actually working on a solution for Minishift that places a small proxy inside the VM, which is also used to overcome the issue with self-signed or custom-CA-signed certificates. While ssh or sshuttle work, they are not really easy on all platforms...

https://github.com/minishift/minishift/issues/2788 has some of the findings and thoughts on this.

Would a minikube addon be an idea?
Which, when enabled, would introduce a service along the lines of:

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
spec:
  type: ExternalName
  externalName: 192.168.99.1

This is pretty trivial, of course, but having it as an addon would have the added value of being an easy-to-document step, instead of describing how to look up the host IP, set up the service, etc.

https://medium.com/tarkalabs/proxying-services-into-minikube-8355db0065fd does a pretty good job of explaining one method of forwarding ports from the host OS into minikube. We should do our own, as well.

I tried to get this working on my mac and followed the above instructions. I was able to connect out from the minikube VM itself, but not from containers running in it. So I tried the kubernetes provided by Docker for Mac and the docker.for.mac.localhost DNS entry worked perfectly in containers. I don't know if that's seen as a "competitor" for minikube, but making it that easy is at least a higher bar to shoot for.

Thoughts about exposing the host machine's IP via a minikube environment var?

@gaziqbal - That sounds like a reasonable place to start, assuming the IP is routable (it is, at least in kvm2). I'd be happy to approve any PRs which implement this. Help wanted!

Any news on this issue?

Would a minikube addon be an idea?
Which, when enabled, would introduce a service along the lines of:

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
spec:
  type: ExternalName
  externalName: 192.168.99.1

This is pretty trivial, of course, but having it as an addon would have the added value of being an easy-to-document step, instead of describing how to look up the host IP, set up the service, etc.

This is a wonderful suggestion. There is often a need to connect to a local database running on the host during development. It would make creating tutorials with minikube straightforward if we had some sort of addon like this.

Without needing any add-ons, this is a solution that works for me:

kind: Service
apiVersion: v1
metadata:
  name: minikube-host
spec:
  type: ExternalName
  externalName: minikube.host 

The idea is that instead of specifying the IP, you use a DNS name, which is what the ExternalName service type supports. Then all we need to do is add this line to the /etc/hosts file so that the name can be resolved:

10.0.2.2        minikube.host

That 10.0.2.2 is for VirtualBox. As mentioned by others in the thread, for hyperkit/xhyve you may need to use 192.168.99.1, though I haven't tested those drivers.


Edit:

Actually, you can use ip addr or ifconfig to determine the host IP. The IP address under vboxnet1 is the IP that you need to access the host from within a pod. See https://github.com/kubernetes/minikube/blob/master/docs/accessing_etcd.md.
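As a sketch of that lookup, the host IP can be pulled out of ip addr output programmatically; the interface name vboxnet1 is from the comment above, and the sample output below is illustrative, not something minikube guarantees:

```shell
# Hypothetical `ip addr show vboxnet1` output; on a real host, replace the
# sample text with the actual command's output.
sample_ip_addr='3: vboxnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet1'

# Take the address on the "inet" line and strip the /24 prefix length.
host_ip=$(printf '%s\n' "$sample_ip_addr" | awk '/inet /{split($2, a, "/"); print a[1]}')
echo "$host_ip"
```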

I am struggling with kvm2, as it has a dynamic host IP. Enabling GatewayPorts lets me curl a service on the host during my minikube ssh session:

ssh -i $(minikube ssh-key) docker@$(minikube ip) -R 9200:localhost:9200
curl localhost:9200

{
  "name" : "ZOym_lv",
  "cluster_name" : "log",
  "cluster_uuid" : "BGCPx-nXRL-MUVW1pMLD4w",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

But when a pod tries to connect to localhost:9200, it can't connect.

I also tried adding the ExternalName Service and curling minikube-host:9200, minikube.host:9200, or even localhost:9200. It didn't make any difference.

$ curl minikube-host:9200
curl: (6) Could not resolve host: minikube-host
$ curl minikube.host:9200
curl: (6) Could not resolve host: minikube.host

Another attempt was to leave ssh running and manually curl against the minikube IP:

ssh -Ni $(minikube ssh-key) docker@$(minikube ip) -R 9200:localhost:9200
curl 192.168.39.11:9200

It works; the problem is that there is no environment variable or any way, to the best of my knowledge, for the pod to know its host IP. Maybe adding it to /etc/hosts on VM creation could be a way around it.

I found another way of accessing the service from the VM. Inside the VM's /etc/resolv.conf there is a nameserver line with the host IP. If you add it to /etc/hosts, the name resolves:

minikube ssh
sudo su -c 'echo -e "$(cat /etc/resolv.conf | grep nameserver | cut -d\  -f2)\tminikube.host" >> /etc/hosts'

Now you can curl a service with: curl minikube.host:9200

I also enabled GatewayPorts in /etc/ssh/sshd_config, reloaded sshd, and added the ExternalName Service, but my pods still can't curl minikube-host. Help needed.
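The nameserver extraction from the one-liner above can be exercised against sample resolv.conf contents; the 10.0.2.3 value is illustrative, and whether the VM's nameserver really is the host depends on the driver, as noted in this thread:

```shell
# Hypothetical /etc/resolv.conf contents from inside the minikube VM.
resolv='search local
nameserver 10.0.2.3'

# Same pipeline as the sudo one-liner above: grab the nameserver address
# and format the /etc/hosts line for minikube.host.
ns_ip=$(printf '%s\n' "$resolv" | grep nameserver | cut -d' ' -f2)
printf '%s\tminikube.host\n' "$ns_ip"
```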

FYI, the kvm2 driver seems to use 192.168.122.1 by default; you can check it via virt-manager.

@mati865 I tried that. From minikube ssh I can access the host, but not from the pods.

@staticdev it works for me, what driver are you using?

@mati865 I am using kvm2. Sorry, actually it does work when calling the IP directly, but I don't know why it doesn't work using a name (via /etc/hosts or an ExternalName Service).

Having a similar problem with the virtualbox driver. It looks like name resolution is not working as expected. The service that exposes the host inside the cluster is the following.

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
spec:
  type: ExternalName
  externalName: 192.168.99.1

Getting the following error:

curl -vvv http://minikube-host:6005
* Rebuilt URL to: http://minikube-host:6005/
* Could not resolve host: minikube-host
* Closing connection 0
curl: (6) Could not resolve host: minikube-host

If I add it manually to the pod's /etc/hosts, it works as expected:

192.168.99.1 minikube-host
 curl -vvv http://minikube-host:6005
* Rebuilt URL to: http://minikube-host:6005/
*   Trying 192.168.99.1...
* TCP_NODELAY set
* Connected to minikube-host (192.168.99.1) port 6005 (#0)
> GET / HTTP/1.1
> Host: minikube-host:6005
> User-Agent: curl/7.52.1
> Accept: */*






curl -vvv 192.168.99.1:6005
* Rebuilt URL to: 192.168.99.1:6005/
*   Trying 192.168.99.1...
* TCP_NODELAY set
* Connected to 192.168.99.1 (192.168.99.1) port 6005 (#0)
> GET / HTTP/1.1
> Host: 192.168.99.1:6005
> User-Agent: curl/7.52.1
> Accept: */*






 curl -vvv 10.0.2.2:6005
* Rebuilt URL to: 10.0.2.2:6005/
*   Trying 10.0.2.2...
* TCP_NODELAY set
* Connected to 10.0.2.2 (10.0.2.2) port 6005 (#0)
> GET / HTTP/1.1
> Host: 10.0.2.2:6005
> User-Agent: curl/7.52.1
> Accept: */*






 curl -vvv 10.0.2.2:6005
> GET / HTTP/1.1
> Host: 10.0.2.2:6005
> User-Agent: curl/7.60.0
> Accept: */*
> 






curl -vvv 192.168.1.1:6005
^C
$ curl -vvv 192.168.99.1:6005
> GET / HTTP/1.1
> Host: 192.168.99.1:6005
> User-Agent: curl/7.60.0
> Accept: */*

Any ideas would be appreciated.

The problem is that you shouldn't be using ExternalName, as that does a lookup in DNS. The solution is to use this:

apiVersion: v1
kind: Service
metadata:
  name: minikube-host
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: minikube-host
subsets:
  - addresses:
      - ip: ${MINIKUBE_IP}
    ports:
      - port: <port>
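As a sketch of how the placeholders might be filled in (192.168.99.1 and port 6005 are example values taken from elsewhere in this thread, not defaults), the manifest can be generated with a shell heredoc and then piped to kubectl apply -f -:

```shell
# Example values only: MINIKUBE_IP is the host-side IP as seen from the VM,
# HOST_PORT is the port your service listens on on the host.
MINIKUBE_IP=192.168.99.1
HOST_PORT=6005

# Render the Service + Endpoints manifest with the placeholders substituted.
manifest=$(cat <<EOF
apiVersion: v1
kind: Service
metadata:
  name: minikube-host
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: minikube-host
subsets:
  - addresses:
      - ip: ${MINIKUBE_IP}
    ports:
      - port: ${HOST_PORT}
EOF
)
printf '%s\n' "$manifest"
# In practice: printf '%s\n' "$manifest" | kubectl apply -f -
```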

@teejae what are ${MINIKUBE_IP} and <port> above? Is that the IP of the VM? And what about the port?

@teejae Thanks, initially I tried this, but I also declared targetPort on the service and it was not working. Thanks again for the clarification.
@yzhong52 for the IP, I believe it will work with either 192.168.99.1 or 10.0.2.2 if you are using the virtualbox driver. For the port, use the port on the host that your service runs on; in my example above, 6005.

Now documented:

https://minikube.sigs.k8s.io/docs/tasks/accessing-host-resources/

Please feel free to improve it.

@tstromberg in the linked documentation, it says

Prerequisites
The service running on your host must either be bound to all IPs (0.0.0.0) and interfaces, or to the IP and interface your VM is bridged against. If the service is bound only to localhost (127.0.0.1), this will not work.

Is this the default setup, and if not how do we "bind the service to all IPs and interfaces"?

@dannyharding10, the "default" setup depends on the service on your host OS that you are trying to connect to from inside minikube. For instance, postgres will by default only bind to localhost. To make the postgres service "bound to all IPs" you have to:

  • add the line listen_addresses = '*' to postgresql.conf
  • add the line host all all 0.0.0.0/0 md5 to pg_hba.conf
  • restart the postgres service

Then you can use the technique from the linked documentation (minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'") to get the bridge IP. The bridge IP will be the IP you connect to inside minikube.

Note: You should be careful about opening up postgres to all remote IPs. A safer way would be to allow only IPs from the minikube environment. For instance, inside my minikube environment the bridge IP is 192.168.64.1, so I added the line host all all 192.168.64.1/24 md5 to pg_hba.conf.

hope this helps.
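The route-parsing one-liner above can be tried against sample route -n output; the routing table below is illustrative, with 192.168.64.1 standing in for the bridge IP mentioned in this comment:

```shell
# Hypothetical `route -n` output from inside the minikube VM.
sample_route='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.64.1    0.0.0.0         UG    0      0        0 eth0
10.96.0.0       0.0.0.0         255.240.0.0     U     0      0        0 eth0'

# Same pipeline as in the docs: the default route's gateway is the bridge IP.
bridge_ip=$(printf '%s\n' "$sample_route" | grep ^0.0.0.0 | awk '{ print $2 }')
echo "$bridge_ip"
```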

The command mentioned in the link above doesn't explain how to retrieve the IP address when you start the minikube cluster with docker as the VM driver. I get the following error message:

minikube start --driver=docker

minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
bash: route: command not found

Please suggest.

@codingkapoor you shouldn't call route directly but rather ip route.
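A sketch of the ip route variant, run against sample output (the addresses are illustrative; the default-route gateway here is the same value the route -n one-liner would have printed):

```shell
# Hypothetical `ip route` output from a docker-driver minikube node, where
# the BusyBox `route` binary is absent.
sample_routes='default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3'

# The gateway of the default route is the host-side bridge address.
gateway=$(printf '%s\n' "$sample_routes" | awk '/^default/ { print $3 }')
echo "$gateway"
```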

In minikube v1.10 we introduced a new host name you can use to access the host OS: host.minikube.internal

https://minikube.sigs.k8s.io/docs/handbook/host-access/ has been updated appropriately.

@tstromberg
I don't see any IP address against the host.minikube.internal entry in the /etc/hosts file.

docker@minikube:~$ cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3  minikube
<nil>   host.minikube.internal

Also, when I try ping command as suggested in the doc you mentioned:

docker@minikube:~$ ping host.minikube.internal
-bash: ping: command not found

How I installed and started minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64   && chmod +x minikube
sudo install minikube /usr/bin/

$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd

minikube start --driver=docker

What am I missing here? Please suggest.

Good to know that ping doesn't ship in the Docker driver. I'll open an issue about the hosts entry being nil, though.

@tstromberg Is there any workaround for this issue, or do we have to wait for the #8369 fix?

@tstromberg
I was trying to make this connection, but I had no luck getting from minikube to the host OS following the documentation.

A bit of context:

  • OS: Mojave 10.14
  • Hypervisor: VirtualBox
  • I am currently on a VPN, therefore I use the CIDR flag to run minikube at this address: 172.16.0.1/24
  • minikube version: v1.11.0

I tried to investigate a bit what could have happened. This is what I see:

The minikube hosts file has the following address for host.minikube.internal:
127.0.0.1 host.minikube.internal

I tried to debug the situation a bit, and I see the following in the minikube logs:

Jun 10 17:04:45 minikube kubelet[4145]: W0610 17:04:45.293816 4145 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/[container-name]-68fdb9cdb6-8tnsn through plugin: invalid network status for

I see that this message is not produced inside minikube; it actually comes from kubelet, and therefore from the kubernetes project.

In this case, this is the signature in dockershim/docker_sandbox.go of the kubernetes GitHub project:

func (ds *dockerService) getIPsFromPlugin(sandbox *dockertypes.ContainerJSON) ([]string, error)
So I believe the minikube project delegates getting the IP to the kubernetes project via this function, but at some point kubernetes does not have the information it needs to provide the IP, and then I get this localhost in my hosts file. You probably get nil in the hosts file for other combinations of hypervisors too, possibly related to #8369.

I guess in my case, the hosts file should have 10.0.2.2 for host.minikube.internal:

$ route -n | grep ^0.0.0.0 | awk '{ print $2 }'
10.0.2.2

I was able to run curl 10.0.2.2 and get a response from my node express server on the OS, listening on 0.0.0.0:80.

But we certainly have different operating systems across the team, and having this resolved in the hosts file would help a lot.

Please consider this an analysis from someone without deep knowledge of Go or these two projects; that's why I am not able to give further help. But if I can provide further logs or help with some testing to get this feature up and running, let me know.

I have been following this tutorial here: https://kubernetes.io/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/

I adapted the filebeat configuration to use the new hostname host.minikube.internal when connecting to kibana, but received the following host-not-found error:

2020-07-28T14:11:40.056Z        ERROR   instance/beat.go:933    Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://host.minikube.internal:5601/api/status fails: fail to execute the HTTP GET request: Get http://host.minikube.internal:5601/api/status: lookup host.minikube.internal on 10.96.0.10:53: no such host. Response: .
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://host.minikube.internal:5601/api/status fails: fail to execute the HTTP GET request: Get http://host.minikube.internal:5601/api/status: lookup host.minikube.internal on 10.96.0.10:53: no such host. Response: .

Strangely, when using minikube ssh I am able to curl the problematic URL above as expected.

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl http://host.minikube.internal:5601/api/status
{ .. some valid response object here }

Does the hostname host.minikube.internal not propagate to containers running inside a minikube cluster? I assumed it would.

The only way I was able to resolve the above error was to add a hostAlias to the filebeat DaemonSet, like so:

hostAliases:
  - ip: "192.168.64.1"
    hostnames:
      - "host.minikube.internal"
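For context, hostAliases belongs under the pod template's spec in the DaemonSet, not at the top level. A trimmed sketch of where the fragment above sits (the DaemonSet name, labels, and image tag are illustrative; 192.168.64.1 is the bridge IP from this comment):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      # Injects the mapping into each pod's /etc/hosts.
      hostAliases:
        - ip: "192.168.64.1"
          hostnames:
            - "host.minikube.internal"
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
```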

I guess to summarise: why can the minikube VM access host.minikube.internal, but containers created within it cannot?

_Current setup_

  • Minikube Version: v1.12.1
  • Docker for Mac Docker Engine: 19.03.12
  • Kubernetes local: v1.18.6 (not bundled with docker for mac, installed via homebrew)
  • Kubernetes server: v1.18.3

EDIT: Just so we're clear, Kibana has been configured to listen on 0.0.0.0 so it can accept remote connections.
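For reference, binding Kibana to all interfaces is a kibana.yml setting; a minimal fragment, assuming a stock Kibana install with the default port:

```yaml
# kibana.yml — listen on all interfaces so hosts other than localhost
# (e.g. the minikube VM) can connect
server.host: "0.0.0.0"
server.port: 5601
```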
