Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
minikube start --vm-driver=none --kubernetes-version=v1.8.0 --logtostderr fails to start on Travis CI. For more info: https://travis-ci.org/kubernetes/kube-state-metrics/jobs/365005727
The same thing is happening on CircleCI.
It seems to be due to minikube expecting to find systemctl, which is not there on the CircleCI Ubuntu Trusty box at least (and I suspect not on the Travis boxes either).
I think you'll need to pass --bootstrapper=localkube to work with the none driver in environments that don't have systemd, for now.
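For example (using the Kubernetes version from the report above):
minikube start --vm-driver=none --bootstrapper=localkube --kubernetes-version=v1.8.0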
@dlorenc Is this the recommended way to work around systemd, or is this a regression bug?
It's a bit of a regression caused by the switch from localkube to kubeadm by default. Kubeadm with the none driver requires systemd.
The other problem here is that Travis only supports up to Ubuntu 14.04, and systemd was added in 16.04.
/cc @r2d4
/cc @aaron-prindle
@dlorenc It seems forcing --bootstrapper=localkube is fine, and it will remain fine even if Travis later bumps Ubuntu from 14.04 to 16.04, as long as we have no strong dependency on kubeadm features. Or should minikube fix the compatibility problem itself?
We are also experiencing this.
It sounds like Travis will need some time to provide an image with systemd.
So I wonder if minikube could at least check whether systemd is present, and output a warning when the kubeadm bootstrapper is used without it.
This would at least highlight the problem.
I am not sure what the right fix is, though (wait for a Travis update, or fix it in minikube).
@dlorenc @r2d4 @aaron-prindle
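As a rough illustration, the check could be as simple as this (a hypothetical shell sketch of the logic only; minikube itself is written in Go, so this is not actual minikube code):
# Warn when the kubeadm bootstrapper is selected but systemd is not the running init.
# /run/systemd/system only exists on systems booted with systemd.
if [ ! -d /run/systemd/system ]; then
  echo "WARNING: systemd not detected; the kubeadm bootstrapper requires systemd" >&2
fi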
Ping @dlorenc @r2d4 @aaron-prindle Any suggestions? Pinning to an old version of minikube is not great from a long-run perspective.
Friendly ping.
You do know they are in a different timezone?
Ping @dlorenc @r2d4 @aaron-prindle
The current options are to use an older version of minikube, force it to use the localkube bootstrapper, or run it in an environment with systemd. Localkube is going to be deleted before the next Kubernetes release, so finding an environment with systemd will be the best option going forward.
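For reference, the first two options look roughly like this (the version is illustrative; as noted above, neither is viable long-term):
# Option 1: pin an older minikube release (releases before v0.26 default to localkube)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-linux-amd64 && chmod +x minikube
# Option 2: force the localkube bootstrapper on releases that still ship it
sudo minikube start --vm-driver=none --bootstrapper=localkube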
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
To repeat what I said in #3174:
I agree that this situation sucks, but it's also not yet a top priority for us. If you end up finding a workable solution, I would love to hear about it and would happily review a patch for the issue.
localkube is gone: https://github.com/kubernetes/minikube/pull/2911
I was able to get it running with:
minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd
This was on a CentOS 7 machine running on OTC.
Ubuntu 16.04 (Xenial) is now the default Ubuntu distribution version on Travis: https://blog.travis-ci.com/2019-04-15-xenial-default-build-environment
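That means the systemd route should now work out of the box on Travis; projects still pinned to Trusty just need to opt in via .travis.yml (a sketch using documented Travis keys; sudo is needed for the none driver):
# .travis.yml
dist: xenial
sudo: required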
/close
@andyxning: Closing this issue.
/reopen
@sharifelgamal: Reopened this issue.
The switch suggested above did not work for me on minikube 1.4.0.
As a quick fix if you're not running systemd:
sudo touch /usr/bin/systemctl
sudo chmod a+r /usr/bin/systemctl
It's just an empty file, and it only works because invoking it returns a non-zero exit code, effectively bypassing minikube's attempt to start dockerd. You'll need to start dockerd manually first.
I gave up on what I was trying when I ran into more issues, but it got me past that hurdle.
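For anyone trying this end to end, the sequence is roughly the following (a sketch assuming Docker is already installed; as noted above, dockerd must be started by hand because minikube won't be able to):
# Create the stub systemctl described above.
sudo touch /usr/bin/systemctl
sudo chmod a+r /usr/bin/systemctl
# Start dockerd yourself, logging to a scratch file.
sudo dockerd > /tmp/dockerd.log 2>&1 &
# The none driver requires root.
sudo minikube start --vm-driver=none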
This is still a feature that hasn't been implemented.
Not the same, but related in a way: https://github.com/kubernetes/minikube/issues/4172
Related: https://github.com/kubernetes/minikube/issues/6954
We still need to do this! Anyone interested?
We need to pass --bootstrapper=localkube to make the none driver work in these environments, e.g.:
minikube start --vm-driver=none --bootstrapper=localkube --kubernetes-version=v1.10.0
Keep in mind that this requires an older version of minikube (before v0.26) and a Kubernetes version that still supports --bootstrapper=localkube.
Support for localkube and v1.10.0 was suspended (in 2018); I suggest looking at k3s if you want something similar for modern Kubernetes versions. Minikube now uses kubeadm and v1.18.0.
We fixed this by adding support for OpenRC.
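For context, the OpenRC equivalents of systemctl enable/start look like this (standard OpenRC commands, shown for illustration; not necessarily the exact calls minikube makes internally):
sudo rc-update add docker default
sudo rc-service docker start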