Minikube: inotify not working with xhyve and minikube-iso combination

Created on 15 Nov 2016 · 6 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Minikube version (use minikube version):

21:33 $ minikube version
minikube version: v0.12.2

Environment:

  • OS: Darwin /redacted/ 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64
  • VM Driver: "DriverName": "xhyve"
  • Docker version: Docker version 1.12.3, build 6b644ec
  • Install tools: homebrew
  • Others:

What happened:
I launched a new minikube instance using the following command:

21:36 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

The first time I did this, the /Users volume was not even mounted into the minikube VM. I ran minikube stop and minikube delete, then repeated the above command to recreate the machine, and got this error instead:

21:34 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
E1114 21:35:20.188417   34831 start.go:92] Error starting host: Error creating host: Error creating machine: Error running provisioning: Something went wrong running an SSH command!
command : printf '%s' '-----BEGIN CERTIFICATE-----
/redacted/have it in a note, so let me know if you need it/
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err     : exit status 1
output  : -----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
tee: /etc/docker/ca.pem: No such file or directory

. Retrying.
E1114 21:35:20.190463   34831 start.go:98] Error starting host:  Error creating host: Error creating machine: Error running provisioning: Something went wrong running an SSH command!
command : printf '%s' '-----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err     : exit status 1
output  : -----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
tee: /etc/docker/ca.pem: No such file or directory

Third time was the charm:

21:35 $ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
21:36 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
21:37 $ cat ~/.minikube/machines/minikube/config.json | grep DriverName
    "DriverName": "xhyve",
21:37 $ minikube ssh
$ mount | grep Users
host on /Users type 9p (rw,relatime,sync,dirsync,version=9p2000,trans=virtio,uname=/redacted/,dfltuid=1000,dfltgid=50,access=any)

I then deployed my application using helm; the goal was to end up with the following deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: portal
  labels:
    app: portal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: portal
    spec:
      containers:
      - name: portal
        image: /redacted/
        command:
        - /usr/local/bin/http-server
        - -p 8080
        - .
        ports:
        - containerPort: 8080
        volumeMounts:
          - mountPath: /usr/src/app
            name: source-volume
      volumes:
      - name: source-volume
        hostPath:
          path: /Users/redacted/src/app

That is, I want it to use a hostPath volume mounted into my container from the VM (helm templating makes sure I get the user's homedir correct).
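For what it's worth, the homedir only enters through a chart value; roughly something like the line below, where the chart path and the homeDir value name are placeholders rather than the real chart:

# Sketch only: the chart templates the homeDir value into hostPath.path above.
helm install ./portal-chart --set homeDir=$HOME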

On my image I have inotify-tools installed. In one console I run the following:

root@portal-2725875819-98zs4:/usr/src/app# inotifywait -m .
Setting up watches.
Watches established.

In another console, I run:

22:03 $ kubectl exec -it portal-2725875819-98zs4 /bin/bash
root@portal-2725875819-98zs4:/usr/src/app# touch index.html

In the original console, I see:

./ OPEN index.html
./ ATTRIB index.html
./ CLOSE_WRITE,CLOSE index.html

Back in the second console, I exit the pod shell and run touch in the native macOS directory that is mounted into the VM (and from there into the container):

root@portal-2725875819-98zs4:/usr/src/app# exit
22:06 $ touch index.html

Unfortunately, I see no further output in the inotify console.

What you expected to happen:

I expected to see inotify events propagated through to the inner container.

How to reproduce it (as minimally and precisely as possible):

I'll see if I can come up with something more concise.
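In the meantime, here is the rough shape of it (untested as a one-shot script; pod-with-hostpath.yaml and the paths are placeholders for a trimmed-down version of the manifest above):

# 1. Start minikube with the xhyve driver and the buildroot ISO (same command as above).
minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso

# 2. Create a pod with a hostPath volume under /Users and any image that has
#    inotify-tools installed (pod-with-hostpath.yaml is a placeholder manifest).
kubectl create -f pod-with-hostpath.yaml

# 3. Watch the mounted directory from inside the container.
kubectl exec -it <pod-name> -- inotifywait -m /usr/src/app

# 4. From the macOS host, touch a file in the shared directory.
touch ~/src/app/index.html    # expected: an event shows up in step 3; observed: nothing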

Anything else we need to know:

I'm not entirely familiar with the arrangements of the minikube ISO, but I did find this file:

https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/coreos/minikube/linux-4.7_defconfig

and I also found this info:

https://cateee.net/lkddb/web-lkddb/INOTIFY.html
and
https://cateee.net/lkddb/web-lkddb/INOTIFY_USER.html

I'm wondering if it could be as easy as setting those values to =y in that file and rebuilding the ISO. Lemme know if you need a guinea pig.
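If it helps, here is how I would double-check what the running kernel was actually built with; the /proc/config.gz route only works if the ISO sets CONFIG_IKCONFIG_PROC, so grepping the defconfig is the fallback. (If I read lkddb correctly, on a 4.x kernel only INOTIFY_USER is still a separate option; the old kernel-side INOTIFY option was folded into fsnotify long ago.)

# Inside the VM, if the kernel exposes its build config:
minikube ssh
$ zcat /proc/config.gz | grep -i INOTIFY    # hoping for CONFIG_INOTIFY_USER=y

# Otherwise, against a checkout of the minikube repo:
grep -i inotify deploy/iso/minikube-iso/board/coreos/minikube/linux-4.7_defconfig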

Inotify is also not working with the xhyve + boot2docker ISO combination.

Thanks @r2d4 for help in the minikube slack.

Labels: co/xhyve, kind/bug, lifecycle/rotten

All 6 comments

Hrm. Probably an issue with how /Users is being mounted into the VM. If I minikube ssh into the VM, cd to /Users/me/src/..., and run touch index.html, I see inotify events propagate into the pod.
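So the breakage seems specific to the 9p share between the Mac and the VM, roughly (paths are placeholders matching the layout above):

# Works: touch from inside the VM; events reach the pod's inotifywait.
minikube ssh
$ touch /Users/<user>/src/app/index.html

# Broken: the same file touched from the macOS host; no events reach the pod.
touch ~/src/app/index.html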

Was there any update / resolution on this? I'm running into the same problem on minikube v0.15.0

Me too, on minikube v0.14 with the virtualbox driver.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
