Minikube: --nfs-share=$HOME: mount.nfs: Connection reset by peer

Created on 17 Aug 2019 · 10 comments · Source: kubernetes/minikube

Apparently setting --nfs-share breaks the VM and/or hyperkit networking in a bad way:

(minikube) 192.168.64.58
(minikube) DBG | Using SSH client type: external
(minikube) DBG | Using SSH private key: /Users/tstromberg/.minikube/machines/minikube/id_rsa (-rw-------)
(minikube) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -o IdentitiesOnly=yes -i /Users/tstromberg/.minikube/machines/minikube/id_rsa -p 22] /usr/local/bin/ssh <nil>}
(minikube) DBG | About to run SSH command:
(minikube) DBG | echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
(minikube) DBG | SSH cmd err, output: exit status 32: mount.nfs: Connection reset by peer
(minikube) DBG | 
(minikube) DBG | NFS setup failed: ssh command error:
(minikube) DBG | command : echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
(minikube) DBG | err     : exit status 32
(minikube) DBG | output  : mount.nfs: Connection reset by peer
(minikube) DBG | 
E0816 15:57:18.213434   79013 start.go:723] StartHost: create: Error creating machine: Error in driver during machine creation: ssh command error:
command : echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
err     : exit status 32
output  : mount.nfs: Connection reset by peer
I0816 15:57:18.213912   79013 utils.go:127] non-retriable error: create: Error creating machine: Error in driver during machine creation: ssh command error:
command : echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
err     : exit status 32
output  : mount.nfs: Connection reset by peer
W0816 15:57:18.213998   79013 exit.go:99] Unable to start VM: create: Error creating machine: Error in driver during machine creation: ssh command error:
command : echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
err     : exit status 32
output  : mount.nfs: Connection reset by peer

💣  Unable to start VM: create: Error creating machine: Error in driver during machine creation: ssh command error:
command : echo -e "#/bin/bash\nsudo mkdir -p /nfsshares//Users/tstromberg\nsudo mount -t nfs -o noacl,async 192.168.64.1:/Users/tstromberg /nfsshares//Users/tstromberg\n" | sh
err     : exit status 32
output  : mount.nfs: Connection reset by peer


😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Good thing we don't document this feature. :|
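For anyone else debugging this: mount.nfs reporting "Connection reset by peer" generally means the NFS server on the host refused or dropped the connection, so a reasonable first check is whether the macOS nfsd is actually running and exporting the share. A rough diagnostic sketch, on the assumption that the hyperkit driver appends its export line to the host's /etc/exports:

# On the macOS host: check that the NFS server is up and exporting the share.
sudo nfsd status          # should report that nfsd is enabled and running
showmount -e 127.0.0.1    # list active exports; the shared path should appear
cat /etc/exports          # the minikube-generated export line should be here

# If nfsd is stopped, enable it and reload the exports:
sudo nfsd enable
sudo nfsd restart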

area/mount co/hyperkit help wanted kind/bug lifecycle/rotten priority/backlog

All 10 comments

I'm able to run the NFS mounts without a problem, but my colleague is able to reproduce this issue. Any discoveries made here?
Running minikube v1.2.0
Kubernetes v1.15

I have the same issue.

No progress, help wanted.

IMHO, the best thing for this flag would be to retire it and move NFS functionality into "minikube mount": #4324

I ended up not using the --nfs-share switch, as it is not reliable, and instead manually add the NFS mounts in a bash script.
We've had a couple of issues with the 9p filesystem, the biggest being how slow it is. NFS was the answer to all of them.
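A minimal sketch of what such a script might look like, assuming the hyperkit defaults visible in the log above (host reachable from the VM at 192.168.64.1, mounts under /nfsshares); the /etc/exports options here are illustrative and may need adjusting for your setup:

#!/bin/bash
# Hypothetical manual replacement for --nfs-share: export a directory from
# the macOS host, then mount it inside the minikube VM over NFS.
set -euo pipefail

HOST_IP=192.168.64.1              # hyperkit host address as seen from the VM
SHARE="$HOME"                     # directory to share
MOUNT_POINT="/nfsshares$SHARE"    # where it lands inside the VM

# 1. On the host: export the directory to the VM and reload nfsd.
echo "\"$SHARE\" -alldirs -mapall=$(id -u):$(id -g) $(minikube ip)" | sudo tee -a /etc/exports
sudo nfsd restart

# 2. Inside the VM: create the mount point and mount the share.
minikube ssh -- "sudo mkdir -p $MOUNT_POINT && sudo mount -t nfs -o noacl,async $HOST_IP:$SHARE $MOUNT_POINT"

This sidesteps the broken provisioning step while keeping the same mount layout the driver would have created.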

This is still an issue in v1.5 as far as I know.

Once #4324 is resolved we will likely drop the --nfs-share feature if it's still not working.

This still seems to be an issue in v1.6.1.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
