Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST
Minikube version (use minikube version):
minikube version: v0.17.1
Environment:
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v1.0.6.iso
$ docker info
Containers: 89
Running: 26
Paused: 0
Stopped: 63
Images: 150
Server Version: 1.11.1
Storage Driver: overlay
Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.7.2
Operating System: Buildroot 2016.08
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.954 GiB
Name: minikube
ID: YEFP:FYBG:COOP:X6CZ:6UGY:NIVM:KMRY:W2A3:AEHP:XGVL:B4X3:PCEN
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
$ df -h
Filesystem Size Used Available Use% Mounted on
devtmpfs 959.9M 0 959.9M 0% /dev
tmpfs 1000.3M 0 1000.3M 0% /dev/shm
tmpfs 1000.3M 97.0M 903.3M 10% /run
tmpfs 1000.3M 0 1000.3M 0% /sys/fs/cgroup
tmpfs 1000.3M 88.0K 1000.2M 0% /tmp
/dev/sda1 17.8G 12.4G 4.5G 74% /mnt/sda1
/hosthome [edited] [edited] [edited] 76% /hosthome
What happened:
This proposal originates from problems encountered when using minikube during development. We're using minikube to build images and after some time we get errors like this:
Step 2 : ENV DJANGO_SETTINGS_MODULE myproject.settings
---> Running in 1cc83fb36472
mkdir /mnt/sda1/var/lib/docker/overlay/2b72f06eafeeb8f582cd9f25aea28f0e4c07e3b2d4b05f304d4af4fe61207758/tmproot065603283/usr/share/icons/hicolor/32x32/stock/navigation: no space left on device
Disk usage seems OK, so I suspect the problem is with inodes. Deleting unused images from previous builds solves the problem for some time.
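To confirm that inode exhaustion, rather than actual disk space, is the culprit, compare `df -h` with `df -i`. A minimal sketch, run inside the VM via `minikube ssh` (the checks below use `/` so they work anywhere; in minikube the Docker root sits on `/mnt/sda1`, as the `df -h` output above shows):

```shell
# "no space left on device" can mean the inode table is full even though
# block usage still shows free space. Compare block vs. inode usage:
df -h /          # Size / Used / Avail        -- block usage
df -i /          # Inodes / IUsed / IFree / IUse% -- inode usage

# If IUse% is near 100% while Use% is low, reclaim inodes by deleting
# dangling images left over from earlier builds (guarded so the script
# also runs on machines without Docker or without dangling images):
if command -v docker >/dev/null 2>&1; then
  dangling=$(docker images -q -f dangling=true)
  [ -n "$dangling" ] && docker rmi $dangling
fi
true
```

This only buys time, as described above; the driver keeps consuming inodes as new layers are created.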
While investigating, I found that Docker uses overlayfs as its default storage driver. It has known issues with consuming too many inodes, and it's also noticeably slower than aufs, which works fine on my host system.
One solution to the slow builds and excessive inode usage would be to use aufs, but that requires building a kernel module. A simpler solution would be to use overlay2.
https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/
Quoting the docs:
Inode limits. Use of the overlay storage driver can cause excessive inode consumption. This is especially so as the number of images and containers on the Docker host grows. A Docker host with a large number of images and lots of started and stopped containers can quickly run out of inodes. The overlay2 does not have such an issue.
"Overlay vs Overlay2" - https://docs.docker.com/engine/userguide/storagedriver/selectadriver/#overlay-vs-overlay2
Proposal:
Switch from overlay to overlay2 as the storage driver. I've seen that you considered updating Docker to 1.12 (https://github.com/kubernetes/minikube/pull/435) but eventually decided not to, referencing Kubernetes 1.4. As of now, Kubernetes 1.6 claims to support Docker 1.12.6 (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v160).
So maybe you could reconsider updating Docker to 1.12?
Then you could switch to overlay2 or at least allow people to switch themselves. Right now, it's not possible to use overlay2 because of old Docker version 1.11.
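Once the bundled Docker supports it, users could switch themselves through Docker's standard daemon configuration. A minimal sketch (daemon.json is Docker's documented config file; its exact path inside the minikube VM, assumed here to be /etc/docker/daemon.json, may vary by ISO build):

```json
{
  "storage-driver": "overlay2"
}
```

Equivalently, the option can be passed on the dockerd command line at startup, e.g. minikube start --docker-opt storage-driver=overlay2.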
Seems reasonable
We should also enable AUFS kernel support for those who want to use that driver
http://lists.busybox.net/pipermail/buildroot/2015-February/119409.html
@r2d4
I've just checked that Kubernetes hasn't validated overlay2 yet (https://github.com/kubernetes/kubernetes/issues/32536) so I'm not sure if you'd like to accept it for minikube.
If enabling AUFS support is also an option maybe I should create a separate issue? I think that AUFS might be a noticeable stability and speed boost when compared to overlay, so it's definitely worth investigating.
@jgoclawski Have you evaluated whether overlay2 is good and stable for production on a public cloud, say, like AWS Linux? Just curious to know. Also, the evaluation should be done with Docker 1.13.6 (Docker CE 17.0.6) this June 2017.
@bklau no, we're using aufs in production. I have used overlay2 for development only - it was stable and with performance similar to aufs.
Anyone having problems with inodes ("no space left on device") and/or slow performance when using many images, feel free to try running Docker 1.12 and overlay2 instead of Docker 1.11 and overlay, the ISO is built from my PR:
minikube start --iso-url https://storage.googleapis.com/minikube-builds/1658/minikube-testing.iso --docker-opt storage-driver=overlay2
@jgoclawski it's a known issue. I think it's fixed with Docker 1.13 (17.03.x).
We should be able to update this now that kubernetes supports overlay2
@dlorenc
Actually, #1542 took care of it. In this version of Docker, overlay2 is picked as default, instead of overlay.
Output when running newest iso:
$ docker version
Client:
Version: 17.06.0-ce
API version: 1.30
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:15:15 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:51:55 2017
OS/Arch: linux
$ docker info | grep "Storage Driver"
Storage Driver: overlay2
Should we close this?
SGTM
Hi, is there support for switching to overlay2 from a direct-lvm thinpool? The possible scenario would be upgrading from Docker 1.12.x to Docker 1.13.1.