What happened:
I've hit a DNS problem: the pod's container cannot resolve external DNS names such as github.com.
There's a similar issue in the kubernetes repository (https://github.com/kubernetes/kubernetes/issues/64924), but I'm not sure it's the same problem.
In my case it only happens with an Alpine-based image; a Debian-based image works in the kind cluster, and the same Alpine image works in a Docker Desktop Kubernetes cluster, so I'm confused.

What you expected to happen: the pod should resolve external DNS names correctly.
How to reproduce it (as minimally and precisely as possible):
You can use this Dockerfile to build an image, or pull weihanli/accountingapp:latest (Alpine-based) to test.
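As a sketch, one way to reproduce with kind (the pod name here is illustrative, not from the original report):

```shell
# Create a kind cluster and run the Alpine-based image as a bare pod.
kind create cluster
kubectl run accountingapp --image=weihanli/accountingapp:latest --restart=Never

# Once the pod is Running, try resolving an external name from inside it.
# On the Alpine image this lookup fails; on a Debian-based image it succeeds.
kubectl exec accountingapp -- nslookup github.com
```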
Anything else we need to know?:
Environment:
kind version: 0.2.1
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-20T16:54:39Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Docker version: (use docker info):
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 36
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.0.7-050007-generic
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.765GiB
Name: my-ubuntu
ID: GB5R:7P7A:O4VF:MH5T:NUVS:LQTC:7XGX:FS26:RN6X:32I7:6GGQ:TDNE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
I suspect this problem has nothing to do with kind; it is probably caused by musl in the Alpine base image. See https://bugs.alpinelinux.org/issues/9017
You can try nslookup to get more detail.
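For example, comparing an in-cluster lookup against an external one from inside the pod can narrow down whether cluster DNS itself is healthy (the pod name is a placeholder):

```shell
# Internal (cluster) name — should resolve via CoreDNS/kube-dns either way:
kubectl exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local

# External name — this is the lookup that fails on the Alpine image:
kubectl exec <pod-name> -- nslookup github.com

# The pod's resolver configuration is also worth inspecting:
kubectl exec <pod-name> -- cat /etc/resolv.conf
```

If the internal name resolves but the external one does not, the problem is in how the image's resolver handles upstream/search-domain behavior rather than in cluster DNS itself.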
/triage needs-information

Should I close this issue, since it looks like an Alpine-related problem?
Right, please see: https://github.com/gliderlabs/docker-alpine/issues/476
OK, thanks @tao12345666333
You may need https://github.com/kubernetes/kubernetes/commit/4274c426cee6f6d1e42ce194952190c587888e7d if you're using go in your Alpine image.
Or rather: an /etc/nsswitch.conf entry. Your Debian image most likely has one.
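A minimal sketch of that workaround in a Dockerfile (the `hosts: files dns` line is the conventional glibc-style default; the base image tag is illustrative):

```dockerfile
# Workaround sketch: give the Alpine image an /etc/nsswitch.conf so that
# resolvers which consult it check local files before DNS, matching what
# a Debian-based image provides out of the box.
FROM alpine:3.9
RUN echo 'hosts: files dns' > /etc/nsswitch.conf
```

For Go binaries, forcing the pure-Go resolver with the environment variable `GODEBUG=netdns=go` may also sidestep the issue, though the nsswitch.conf approach is the one taken in the Kubernetes commit linked above.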
https://github.com/kubernetes/kubernetes/pull/69238 and https://github.com/kubernetes/kubernetes/issues/69195 have some more discussion. This issue is generic to Alpine (non-glibc) images with Go binaries.