Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST
Please provide the following details:
Environment:
- Minikube version (use `minikube version`): v0.25.0
- VM driver (use `cat ~/.minikube/machines/minikube/config.json | grep DriverName`): none
- ISO version (use `cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): n/a

What happened: I spent a long time struggling to get things working properly with vm-driver=none.
What you expected to happen: To find documentation covering this very valuable use case.
Anything else we need to know:
I have a linux laptop that I want to use k8s on. I do not want to run a VM, since that is an inefficient use of my laptop's memory (always either too much or too little memory allocated for the given docker containers I want to run) and slower due to the extra virtualization. Also it shouldn't be necessary - I have a Linux environment, why should I have to run another VM just to run Linux? That's why I bought this laptop.
The use of vm-driver=none is hardly documented at all. What little there is does not seem to consider the value for developer machines.
Yesterday I filed https://github.com/kubernetes/minikube/issues/2571, but that was not the full story. A red herring with IP addresses led me to believe that the docker0 interface was the root of the problem. It turns out the real problem was that whatever IP address happened to be bound to my ethernet interface would be used in the construction of the cluster. If that IP address changed for any reason (as often happens on laptops), the whole environment became inaccessible.
The workaround was not to specify a bridge IP for docker, as I had thought. Instead, you need to start minikube like so:

```
minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
```
And then go and edit ~/.kube/config, replacing the server IP that was detected from the main network interface with "localhost". For example, mine now looks like this:
```
- cluster:
    certificate-authority: /home/jfeasel/.minikube/ca.crt
    server: https://localhost:8443
  name: minikube
```
With this configuration, I can access my local cluster all of the time, even if the main network interface is disabled.
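For anyone who prefers to script that kubeconfig edit rather than doing it by hand, here is a minimal sketch (assuming GNU sed; the sample file path and the detected IP 192.168.1.42 are made-up examples, not taken from a real setup):

```shell
# Sketch of the manual edit described above, run against a throwaway
# sample file so the substitution itself is visible.
cat > /tmp/kubeconfig-sample <<'EOF'
- cluster:
    certificate-authority: /home/jfeasel/.minikube/ca.crt
    server: https://192.168.1.42:8443
  name: minikube
EOF
# Replace whatever server IP minikube detected with localhost:
sed -i 's|server: https://[0-9.]*:8443|server: https://localhost:8443|' /tmp/kubeconfig-sample
grep 'server:' /tmp/kubeconfig-sample   # the server line now points at localhost
```

On a real machine you would run the same `sed` against `~/.kube/config` (after backing it up).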
Also, note that "socat" must be installed in the Linux environment; see https://github.com/kubernetes/kubernetes/issues/19765 for details. I hit this when I tried to use helm to connect to my local cluster: I got errors with port-forwarding. Since I'm using Ubuntu, all I had to do was `sudo apt-get install socat` and then everything worked as expected.
@jakefeasel Kinda unrelated, but do you know how to let minikube pull images I created locally? All the docs mention using minikube docker-env, but it returns none in the case of this driver.
@harpratap see #2443, your locally available images are also available to kubernetes with vm-driver=none.
@ncabatoff Do I need to specify some additional parameter when starting minikube to let it access those locally created images? I get this error:
Failed to pull image "mylocalnginx": rpc error: code = Unknown desc = Error response from daemon: pull access denied for mylocalnginx, repository does not exist or may require 'docker login'
Not that I'm aware of. Note that you don't even have to pull them: with vm-driver=none it's the same docker daemon as on your desktop, i.e. the one you've been populating when you build images. So just skip the pull you're doing now, reference the images normally in your container manifests, and it should just work. At least that's been my experience.
@ncabatoff Yes that is what I did and I got the image pull error, didn't do any manual pulling of images.
Edit: Nevermind, got it working by setting imagePullPolicy to Never. Thanks!
Edit Again: Don't change the imagePullPolicy; you just need to tag your local images with a version other than latest. I built my image as mylocalnginx:v1 and it worked.
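To illustrate what that ends up looking like, here is a hedged sketch of a pod manifest (the image name is the mylocalnginx:v1 example from above; the rest of the spec is illustrative, not from the original thread):

```yaml
# Hypothetical manifest: references the locally built image by its
# explicit tag, so kubelet finds it in the shared docker daemon rather
# than attempting a registry pull.
apiVersion: v1
kind: Pod
metadata:
  name: mynginx
spec:
  containers:
  - name: nginx
    image: mylocalnginx:v1          # built locally, e.g. `docker build -t mylocalnginx:v1 .`
    imagePullPolicy: IfNotPresent   # optional: with a non-latest tag this is the default anyway
```

Note that for any tag other than `latest`, Kubernetes defaults imagePullPolicy to IfNotPresent, which is why retagging alone fixed the error.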
Seems like the right place to add a couple of items that I've discovered along the way with vm-driver=none:
1) minikube mount is not supported.
2) minikube ssh is not supported, and there doesn't seem to be any way to ssh to the minikube "node". This is particularly a problem when using something like the Terraform file provisioner to move content to the node to share with pods. However, see item 3 below.
3) Mounting a host volume to a pod mounts it from the underlying host, not from the minikube "node". Put another way: with vm-driver=none, the host behind minikube is, in effect, also the node.
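A hypothetical sketch of item 3 (all names and paths here are examples, not from the original discussion): with vm-driver=none, a hostPath volume resolves against the laptop's own filesystem, which substitutes for the missing minikube mount/ssh:

```yaml
# Hypothetical pod spec: with vm-driver=none, this hostPath resolves on
# the underlying host itself, because the host and the "node" are the
# same machine.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /data            # visible inside the container at /data
  volumes:
  - name: shared
    hostPath:
      path: /home/user/shared     # an example directory on the laptop/host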
Also having issues similar to https://brokenco.de/2018/09/04/minikube-vmdriver-none.html
Hey y'all. I'd appreciate your feedback on the pull request for an initial set of documentation for --vm-driver=none: #3240
Preview link: https://github.com/kubernetes/minikube/blob/22afb79a37436b3d98171dd09212f193fb6f45ca/docs/vmdriver-none.md
Thanks!
@jakefeasel, in your issue description could you fix the typo in `minikube start--vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost` by adding a space between `start` and `--vm-driver=none`? Currently, copy-pasting the command line will output `unknown command "start--vm-driver=none"`.
Also, I'm getting an error creating file at `/usr/bin/kubeadm`. Should I run the command as root?
See also:
- vm-driver=none – What I learned today – 6 August 2018, from Niel de Wet
- --vm-driver=none vulnerable to CSRF?, from Niel de Wet on Stack Overflow
- localkube binary

@tstromberg can we standardize on either `None` or `none`?
@cdancy - done. Now with lowercase.
@tstromberg LGTM
It appears my whole premise was flawed, based on the content of this PR. However, it does strike me that a lot of people are similarly misguided, given the interest in this issue and the various blog posts and comments from people trying something similar. Is there any interest within the minikube team to intentionally support this use-case?
I'm also curious. Operating from a Linux VDI, I'm unable to run the additional layer of virtualization needed for a nested virtual machine. The CI/CD instructions have gotten me to the point where the cluster is running in Docker within the VDI. Some pods are running on the default bridge network, others on the default host network. This, I'm sure, is the root of my dashboard and DNS issues, since those pods are running on the bridge while the remaining pods are running on the host.
Now that I've discovered @jakefeasel was the author of the note to specify a bridge network, I'd like to ask for some clarification on the note's meaning. Do I need to create a new bridge network for minikube? Are your start options above doing what the note intended?
@edthorne the note I added back then was probably wrong. I wouldn't put much stock in it. Instead, I would refer to @tstromberg's PR.
I wrote a quickstart guide for running Minikube on Linux with --vm-driver=none in Markdown format. I'd be happy to contribute it back to the Minikube project and/or reformat it according to your project's needs.
The current version is here, but obviously I'd remove the section on installing Gestalt before contributing it.
@sbernheim, I tried your quickstart guide and made a few observations which could be addressed in the guide:

**Existing /etc/kubernetes/**

You should make sure you don't have an existing /etc/kubernetes/ directory. I did, and minikube start would fail, complaining about existing .conf files with wrong CA certs.

**User-owned .kube/**

The location and permissions of the .kube/ directory became more friendly if I did:

```
CHANGE_MINIKUBE_NONE_USER=true sudo -E minikube start --vm-driver=none
```

**Minikube status output**

For newbies, it would be helpful to show the expected output for sudo minikube status.

**Host IP changes**

Whenever my laptop's IP changes, minikube stops working, and to get it up again I need to rm -rf /etc/kubernetes. Is there a more light-weight method for adapting minikube to the IP change?
If I run minikube with --vm-driver=none, external DNS resolution doesn't work inside any containers in any of my pods. Is that normal?
External DNS resolution works fine for me with --vm-driver=none. On numerous occasions I've had to download a package or pull some utility to do some diagnosis within a pod, and I've had no problem getting to the external world.
@bennettellis Did it work out of the box, or did you have to fiddle with the networking configuration? I used this to bootstrap:
```
sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
```
For me, out of the box. I've only done this inside AWS Ubuntu 18 LTS EC2 instances, as well as a VirtualBox instance running that same Ubuntu 18 LTS OS. So "out of the box" is with those specific boxes and pretty much the default networking there (other than restricting ingress).
I tried this locally on a laptop. Is anyone else facing this issue?
@slayerjain the specific commands I used to install k8s and minikube:
```
#!/bin/bash
apt-get -y update
apt-get -y upgrade
apt-get install -y docker.io unzip
service docker start
systemctl enable docker
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && mv minikube /usr/local/bin/
```

Once that was done, I started up minikube with:

```
sudo minikube start --vm-driver=none --memory=8192
```
I need to pass this flag https://github.com/kubernetes/kubeadm/issues/845 to kubelet, related to systemd resolvconf. If not, the coredns pod crashes.
@bhack Thanks! How do you pass the flag through minikube?
@slayerjain OS details for the host in question would help here.
Also, this should probably be a new issue, since this one is about documentation.
@slayerjain Yes, the upstream flag was sent with Minikube. Currently this affects Debian and Ubuntu with systemd resolvconf, but probably also other Linux distros with systemd resolvconf.
I'm on Pop OS (Ubuntu 18.04 based). @bhack Do you mean something like this?
```
sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --kubernetes-version=v1.11.3
```
Yes
Using this, minikube doesn't really start on my system. I get this:
```
sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --kubernetes-version=v1.11.3
[sudo] password for shubhamjain:
Starting local Kubernetes v1.11.3 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1017 00:22:24.988553 12748 start.go:297] Error starting cluster: kubeadm init error
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
running command: : running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
output: [init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
I1017 00:22:24.799394 12879 kernel_validator.go:81] Validating kernel version
I1017 00:22:24.799495 12879 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube" lookup minikube on 127.0.0.53:53: server misbehaving
[WARNING Port-10250]: Port 10250 is in use
[preflight] Some fatal errors occurred:
[ERROR FileExisting-crictl]: crictl not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
: running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
.: exit status 2
```
Do you have /run/systemd/resolve/resolv.conf on the host?
On my laptop (host), yes.
I think your problem is unrelated to that flag. Check https://github.com/kubernetes/minikube/issues/3150
@jakefeasel couldn't agree more! It is painful to dig through the docs for vm-driver=none.
I am comparing minikube start --vm-driver=none with microk8s. Any suggestions for me?
@slayerjain Also check https://github.com/kubernetes/minikube/issues/2707
Initial doc has been submitted: please send further PRs to improve this doc with any tips or tricks you might know of:
https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
Thanks!
Interesting. Btw, I just noticed that Minikube starts by default on boot. Is there any way to disable that?
@akaihola - Thanks for the feedback! I've added to the Installing Minikube on Linux guide: a message at the top referencing @tstromberg's cautionary MD file, indicating that developers should not use the --vm-driver=none option on a laptop or PC running Linux; a section on checking for existing minikube and kubectl configuration files and binaries; and the expected output for the minikube status command, as you suggested.
I'll need to test out that CHANGE_MINIKUBE_NONE_USER option to understand what it does, but if it makes sense to incorporate it into the guide I'll probably add it later on. It doesn't seem strictly necessary, but it could make installation and management a bit easier for the user in the long run.
I'm not sure how you might adapt to an IP address change while running Minikube, but maybe try the minikube update-context command and see if that helps?
In my case, I'm running Linux within a VM rather than directly on my laptop, so the IP address doesn't tend to change while the VM is running. But it does change whenever I stop/start/restart the VM, and that command will reconnect local kubectl with the minikube cluster.
@harpratap can you tell me where you set the imagePullPolicy to Never?
Eliminating the VM is great if you are on a Linux box, and at the moment I am doing it as you suggest. However, I just came across microk8s, which seems to target exactly that niche.
I wonder how it compares to the minikube-based approach outlined here.
I don't think microk8s can work with kubectl contexts, because it lives inside snap, so you need to use a specialized command instead of kubectl.
I think it is a little bit hard to use plain kubectl with microk8s and other Kubernetes instances at the same time.
@deas & @bhack, note that microk8s can be tricky with SELinux:
https://github.com/ubuntu/microk8s/issues/135#issuecomment-430194148