Minikube: host.minikube.internal not visible in containers

Created on 10 Jun 2020  ·  12 Comments  ·  Source: kubernetes/minikube

Hi,

The subtitle of the article is "How to access host resources from a pod", but as far as I can tell, it does not demonstrate how to actually do that from inside a pod. It is unclear to me how to get the host.minikube.internal IP from inside a pod.

(My scenario: I run a Postgres install on the local machine, bound to 0.0.0.0:5432, and would like a service inside a pod to connect to it.)

I can't figure out how to resolve host.minikube.internal from inside a pod.

Here's an example:

$ minikube start --memory=16384 --cpus=4
😄  minikube v1.11.0 on Ubuntu 20.04
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=4, Memory=16384MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
$ minikube ssh
$ ping host.minikube.internal
PING host.minikube.internal (192.168.39.1): 56 data bytes

OK, but try it from inside a pod.

$ kubectl run -ti --image=busybox --restart=Never busybox
If you don't see a command prompt, try pressing enter.
/ # ping host.minikube.internal
ping: bad address 'host.minikube.internal'
/ # host host.minikube.internal
sh: host: not found
/ # nslookup host.minikube.internal
Server:     10.96.0.10
Address:    10.96.0.10:53

** server can't find host.minikube.internal: NXDOMAIN

*** Can't find host.minikube.internal: No answer

/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # cat /etc/hosts 
# Kubernetes-managed hosts file.
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.0.4  busybox

Nothing. OK, I saw that busybox can have issues with DNS lookups, so let's try something else:

$ kubectl run -ti --image=amazonlinux --restart=Never bash   
If you don't see a command prompt, try pressing enter.
bash-4.2# yum install iputils bind-utils net-tools
[...]
bash-4.2# ping host.minikube.internal
ping: host.minikube.internal: Name or service not known
bash-4.2# nslookup host.minikube.internal
Server:     10.96.0.10
Address:    10.96.0.10#53

** server can't find host.minikube.internal: NXDOMAIN

bash-4.2# cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
bash-4.2# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.0.4  bash

Are my assumptions wrong?

If I knew the IP, it would work -

bash-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

bash-4.2# telnet 192.168.39.1 5432
Trying 192.168.39.1...
Connected to 192.168.39.1.
Escape character is '^]'.

Port 5432 on the host is open and accepting connections.

area/networking kind/bug priority/important-longterm


All 12 comments

@valters is your question about how to access a service outside minikube from inside a pod in minikube?

And are you following an article on the internet? Is there a link to that article?

We don't have an integration test for such a feature, and it really depends on the driver and platform. I am not sure if we should advertise that as a feature of minikube; in minikube's internal code we have a way for each driver.

Do you mind sharing why this would be a common use case for the local Kubernetes experience?

Hi @medyagh, I am running minikube on Ubuntu 20.04 with the kvm2 driver. Other folks on the team are using macOS (Mojave, 10.14.6) with the VirtualBox driver. (They report less success with the HyperKit driver.)

The article I am talking about is https://minikube.sigs.k8s.io/docs/handbook/host-access/ ; I created this ticket by clicking the "Create documentation issue" link there.

We are running some services on the local machine. Postgres is listening on port 5432 on the local host. The Postgres port is only one example; we also spin up Kafka in a Docker container locally, Jaeger in a container as a stand-in for a full APM collector, etc. These will only exist in the local development environment, because in staging/production we will connect to a Postgres instance in RDS, a proper Kafka instance in the cloud, and so on.

I am having an issue getting our service (which we bring up in a pod) to talk to the Postgres instance on the local host. I can get it working by hardcoding the host IP (192.168.39.1 on my machine). I am trying to understand how to make this local configuration generic enough that all team members can use it - also on macOS with VirtualBox.

My understanding from reading the handbook/host-access/ article is that the DNS name host.minikube.internal should be resolvable within a running pod. Could you clarify?

Having the exact same problem here! I tried to configure a Postgres DB URL with host.minikube.internal, but it only works when using the hardcoded host IP. Using telnet as described at https://minikube.sigs.k8s.io/docs/handbook/host-access/ does connect.

I have the same use case as @valters: using GCP Cloud SQL in the remote k8s environment, but I would like to run Postgres on the host when deploying to local minikube.

I'm fairly new to minikube, so I'm not sure if this is an anti-pattern. If there is a recommended way to do this, it's not clear in the documentation.

That minikube article only talks about how to access the host from the VM, not from the pods...

It should probably use something like HostAliases for that, to change the /etc/hosts file of the pod:

https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
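
For example, a minimal sketch (untested here; the pod is just a placeholder, and the IP must be whatever host.minikube.internal maps to in your VM's /etc/hosts):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  hostAliases:
  - ip: "192.168.39.1"          # replace with your host's IP
    hostnames:
    - "host.minikube.internal"
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]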

The misleading sentence seems to be at the top of the page: "How to access host resources from a pod"

I'm not sure if there is a command to return this address, for use with such commands or specs...

I suppose a workaround is: minikube ssh grep host.minikube.internal /etc/hosts | cut -f1
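
For the kvm2 setup at the top of this issue, that should print something like (depending on your shell, you may need to quote the remote command):

$ minikube ssh grep host.minikube.internal /etc/hosts | cut -f1
192.168.39.1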

The documentation should probably change the headline to "How to access host resources from the VM"

And then add another subsection, on how to access host resources from pods (like the link above).

This absolutely should work.

My understanding is that the kubelet is supposed to propagate /etc/hosts from the VM into the pod at creation time. It appears this may not always be the case.

minikube ssh and telnet work for me. Here's my workaround:

  1. Get the host IP from the minikube VM:
minikube ssh grep host.minikube.internal /etc/hosts | cut -f1
  2. Edit your deployment.yaml and add these lines:
...
    spec:
      hostAliases:
      - ip: "192.168.64.1"
        hostnames:
        - "host.minikube.internal"
...
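
To sanity-check it afterwards (a sketch; this assumes the pod's image has a shell and ping - note that nslookup will still fail, because hostAliases only edits the pod's /etc/hosts, not cluster DNS):

$ kubectl exec <pod-name> -- cat /etc/hosts | grep minikube
192.168.64.1    host.minikube.internal
$ kubectl exec <pod-name> -- ping -c 1 host.minikube.internal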

Neither telnet nor ping are available in the VM:

โฏ minikube ssh
docker@minikube:~$ telnet
-bash: telnet: command not found
docker@minikube:~$ ping
-bash: ping: command not found
docker@minikube:~$

docker@minikube:~$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04

So the docs are now out of date I guess.

Inside a pod there is no host.minikube.internal

cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.0.8  backend-deployment-5cb8568cd6-f6jt2

Hello!
Also hitting the bug: OSX, minikube 1.14 (Kubernetes 1.19), VirtualBox driver.
If it is a possibility, I'd like to ask that fixing this bug be prioritized, as it may also manifest in situations other than those described above and makes it difficult (if not impossible) to develop certain Kubernetes scenarios.
For example:
Let us suppose we are going to deploy a multi-tier app [onto Kubernetes] and this app will have an "external" data source (in production that external data source will be, say, an RDS instance, but during development the local host's Postgres DB is used). One possible Kubernetes approach would be to abstract such an external data source with an ExternalName service:

apiVersion: v1
kind: Service
metadata:
  name: my-db-service
  namespace: dev
spec:
  type: ExternalName
  externalName: host.minikube.internal

[with the intention to replace host.minikube.internal with the RDS endpoint in production]
Now, due to this bug, developing/testing this scenario in minikube is somewhat difficult, and the workaround described here won't [directly] work.
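
One interim alternative (just a sketch, untested here, and the host IP is still hardcoded per machine) would be to drop ExternalName and instead use a selector-less Service with a manually managed Endpoints object pointing at the host:

apiVersion: v1
kind: Service
metadata:
  name: my-db-service
  namespace: dev
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-db-service    # must match the Service name
  namespace: dev
subsets:
- addresses:
  - ip: 192.168.39.1     # the host IP as seen from the cluster
  ports:
  - port: 5432
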
In short, it would be nice to fix this bug sooner rather than later :smile:

@crucialfelix

Neither telnet nor ping are available in the VM:

Yeah, noticed that too 😄
However, nslookup is still there (if I remember correctly). Another way would be to cat /etc/hosts while in minikube's shell - you should be able to see the host.minikube.internal entry.

Inside a pod there is no host.minikube.internal

Indeed, the VM's entry is not propagated to the pod(s) - that is the essence of this bug/issue.
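
Until that is fixed, one cluster-wide workaround (a sketch; I have not verified it on every driver, and the IP must match what the VM's /etc/hosts shows) is to teach CoreDNS the name directly with its hosts plugin:

$ kubectl -n kube-system edit configmap coredns

and inside the ".:53 { ... }" block of the Corefile add:

        hosts {
            192.168.39.1 host.minikube.internal
            fallthrough
        }

CoreDNS should pick this up after its reload interval, and the name then resolves via DNS from any pod (which would also unblock the ExternalName scenario above).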

Another use case: when running a PHP(-FPM) application in a pod, we want to connect to an IDE for debugging purposes. To accomplish this today I have to hardcode host.minikube.internal's IP address.
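
The closest I have come to avoiding that is scripting it at deploy time (a sketch only - the deployment name and environment variable below are made up for illustration):

$ HOST_IP=$(minikube ssh grep host.minikube.internal /etc/hosts | tr -d '\r' | cut -f1)
$ kubectl set env deployment/php-app XDEBUG_REMOTE_HOST="$HOST_IP"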
