BUG REPORT
Minikube version: v0.23.0
What happened:
The NFS mount fails because of broken DNS: the Kubernetes NFS service name could not be resolved.
What you expected to happen:
The NFS volume is mounted.
How to reproduce it (as minimally and precisely as possible):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: mynamespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        component: services
    spec:
      volumes:
        - name: data
          nfs:
            server: kubernetes-nfs-service
            path: /
      # The containers section was omitted in the original report; a
      # minimal hypothetical placeholder that mounts the volume:
      containers:
        - name: myapp
          image: myapp:example
          volumeMounts:
            - name: data
              mountPath: /data
Workaround:
Possible fix:
Could you please try using the full DNS name for the service, i.e. <service>.<namespace>.svc.cluster.local; for the manifest above that would be kubernetes-nfs-service.mynamespace.svc.cluster.local.
Please check https://github.com/kubernetes/dns/blob/master/docs/specification.md
and let me know if you still have issues mounting. Thanks!
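As a quick sanity check that the name resolves inside the cluster (a sketch, not from the original report; the service and namespace names are taken from the manifest above), a throwaway pod can be used:

# busybox:1.28 is pinned because nslookup is broken in newer busybox images
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes-nfs-service.mynamespace.svc.cluster.local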
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@jingxu97 minikube still doesn't resolve the FQDN, but GKE does.
minikube version: v0.28.0
I applied the same PersistentVolume YAML (see below) to a minikube and a GKE cluster. On minikube, mounting the volume always failed.
Running kubectl describe pod against the minikube pod showed the following error message in the Events section.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned my-app-9bf649db7-fnb2f to minikube
Normal SuccessfulMountVolume 50s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-f9ql6"
Warning FailedMount 50s kubelet, minikube MountVolume.SetUp failed for volume "my-app-data" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f58175de-6fb1-11e8-88ab-080027a4eadc/volumes/kubernetes.io~nfs/my-app-data --scope -- mount -t nfs my-nfs-server.my-ns.svc.cluster.local:/ /var/lib/kubelet/pods/f58175de-6fb1-11e8-88ab-080027a4eadc/volumes/kubernetes.io~nfs/my-app-data
Output: Running scope as unit: run-ra9b82c5f871c40ab9ae41a1d53d4d5b4.scope
mount.nfs: Failed to resolve server my-nfs-server.my-ns.svc.cluster.local: Name or service not known
(snip)
The PersistentVolume YAML (this one worked on GKE):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-data
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: my-nfs-server.my-ns.svc.cluster.local
    path: "/"
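As an aside (not something tried in this thread), host-side DNS can be sidestepped entirely by putting the service's ClusterIP into spec.nfs.server instead of the FQDN, since the kubelet performs the NFS mount from the host's network namespace:

# Look up the ClusterIP to use as spec.nfs.server in the PV above.
# Caveat: the address changes if the service is ever recreated.
kubectl -n my-ns get svc my-nfs-server -o jsonpath='{.spec.clusterIP}'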
/reopen
/remove-lifecycle rotten
Tested on GKE and minikube, and I can confirm @dbaba's results.
EDIT: After some more digging, I think this is related to the fact that /etc/resolv.conf inside the minikube VM doesn't contain the IP address of kube-dns.
In other words, when the kubelet tries to mount the volume and resolve my-nfs-server.my-ns.svc.cluster.local, it fails because there is no resolver that knows about cluster-internal names.
I edited /etc/resolv.conf to include the IP address of kube-dns, and now I can finally resolve the name correctly.
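For anyone reproducing this, the kube-dns address can be looked up rather than guessed; a sketch (the transcripts below use the kube-dns pod IP, but the service ClusterIP is the stabler choice):

# Service ClusterIP (stable for the lifetime of the service):
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
# Pod IP (what the transcripts below show; changes when the pod restarts):
kubectl -n kube-system get pod -l k8s-app=kube-dns -o wide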
Before the change:
$ minikube ssh
$ cat /etc/resolv.conf
nameserver 10.0.2.3
$ nslookup nfs-server.default.svc.cluster.local
Server: 10.0.2.3
Address 1: 10.0.2.3 10.0.2.3
nslookup: can't resolve 'nfs-server.default.svc.cluster.local'
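Resolution against kube-dns can also be tested directly, without editing any files, by passing the server address to nslookup (172.17.0.2 being the kube-dns address used below):

$ nslookup nfs-server.default.svc.cluster.local 172.17.0.2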
After adding kube-dns to /etc/resolv.conf:
$ minikube ssh
$ cat /etc/resolv.conf
nameserver 172.17.0.2
nameserver 10.0.2.3
$ nslookup nfs-server.default.svc.cluster.local
Server: 172.17.0.2
Address 1: 172.17.0.2 kube-dns-86f4d74b45-8g6dk
Name: nfs-server.default.svc.cluster.local
Address 1: 10.109.11.156 nfs-server.default.svc.cluster.local
@danielepolencic: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:

/reopen
/remove-lifecycle rotten

Tested on GKE and minikube, and I can confirm @dbaba's results.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Why doesn't this issue receive any attention? It prevents properly testing some setups involving shared volumes. Not sure what to do here... any thoughts?
Just a side note here: I managed to work around this issue by changing the systemd-resolved configuration. Steps:
$ minikube ssh
$ su
# add the kube-dns address to the [Resolve] section of /etc/systemd/resolved.conf
$ echo "DNS=172.17.0.2" >> /etc/systemd/resolved.conf
$ systemctl daemon-reload
$ systemctl restart systemd-networkd
$ systemctl restart systemd-resolved
Possible issues: 172.17.0.2 is the kube-dns pod IP, which is not stable, so the entry may need updating after the pod is rescheduled.
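The same steps can also be scripted from the host in one shot. A minimal sketch, assuming minikube ssh accepts a command argument on this version and that 172.17.0.2 is still the kube-dns address on the target cluster:

minikube ssh "echo 'DNS=172.17.0.2' | sudo tee -a /etc/systemd/resolved.conf \
  && sudo systemctl daemon-reload \
  && sudo systemctl restart systemd-networkd systemd-resolved"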