For example, if you specify --extra-config=kubelet.kube-api-qps=5 --extra-config=controller-manager.kube-api-qps=5 and then run sudo ps -afe | grep controller, you'll see that the new arguments do not appear.
This is especially painful when integrating OpenID Connect. If you start with a clean cluster, you have to install an OpenID Connect implementation (like Dex) and then restart with the --extra-config settings that define the OIDC options (the ones here). Because of this bug, that won't work. You instead have to start the cluster with the OIDC settings already in place, which means setting them before the OpenID Connect infrastructure even exists. And since Dex is installed directly in minikube and is the issuer, one of the settings (oidc-issuer-url) requires a hostname pointing at the minikube IP, which I can't determine until minikube starts up and I run minikube ip. By then it is too late: this bug means I cannot shut down and restart minikube with the extra config containing the URL with that IP.
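For reference, this is roughly the restart that the bug blocks. A sketch only: the issuer address, port, and client ID below are placeholders I made up, while the oidc-* names are the standard kube-apiserver flags passed through --extra-config:

# Hypothetical second start after installing Dex; 192.168.99.100 stands in
# for whatever `minikube ip` returned, and dex-client-id is a placeholder.
minikube start \
  --extra-config=apiserver.oidc-issuer-url=https://192.168.99.100:32000 \
  --extra-config=apiserver.oidc-client-id=dex-client-id \
  --extra-config=apiserver.oidc-username-claim=email

Because of the bug described above, these settings are ignored once the cluster already exists.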
I'm seeing this in minikube 1.11.0.
Is this even working for new clusters? I installed a fresh minikube 1.11.0 on OSX, created a new cluster with this command:
minikube --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 start
and the bind-address for controller-manager is unaffected:
> kc describe -n kube-system pod kube-controller-manager-minikube
...
Command:
kube-controller-manager
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1
...
As a side note, figuring out controller-manager.address from the documentation is not trivial (and maybe that is what I got wrong?): the reference APIs mentioned in the section about extra-config show the Address attribute with a capital A, on a sub-element of the struct (Generic GenericControllerManagerConfiguration).
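One way to double-check where the flag got lost (a sketch, assuming the standard kubeadm layout inside the VM) is to inspect the generated static pod manifest directly:

# Static pod manifests live in /etc/kubernetes/manifests on a kubeadm-based
# node; grep shows which address flags were actually rendered there.
minikube ssh -- sudo grep address /etc/kubernetes/manifests/kube-controller-manager.yaml

If controller-manager.address had been applied, it should show up here alongside --bind-address.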
Do you mind sharing which driver you were using? Currently minikube does not support remote access to the cluster. (I noticed you were trying to pass 0.0.0.0.)
With VM drivers you might be able to do it at your own risk, but in the Docker/podman drivers we explicitly set the listen IP to local, so you won't be able to override that.
Given that, we could still provide a better message telling the user that remote access to minikube is not supported.
Also, we could really use some help documenting the --extra-config flag. I would be happy to review any PR that adds docs for it.
I am using VirtualBox on OSX.
The "remote" access is simply trying to be able to read cadvisor metrics from Prometheus, as described here:
https://github.com/coreos/prometheus-operator/blob/master/Documentation/troubleshooting.md#prometheus-kubelet-metrics-server-returned-http-status-403-forbidden
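For context, the end goal is just to be able to run something like the following once the kubelet webhook flags take effect (a sketch; the node name minikube is an assumption):

# Fetch cAdvisor metrics through the API server's node proxy; a 403 here
# is exactly the symptom described in the linked troubleshooting doc.
kubectl get --raw /api/v1/nodes/minikube/proxy/metrics/cadvisor | head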
Franck
Would like to take this up. Working on it.
Well, I think it doesn't work even for a new cluster. I tried out the commands mentioned in the issue while creating a new cluster, and the supplied api-server and controller-manager flags still did not appear.
_If there's missing data, please let me know and I'll update the issue._
As requested by @priyawadhwa in https://github.com/kubernetes/minikube/issues/8979, I'm showing here everything I could gather from my situation related to this problem. I always seem to hit this problem even after a clean install of this environment.
Environment:
Debian 10.5, installed via netinst ISO inside a VirtualBox VM.
root@minikube:~# uname -a
Linux minikube 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux
root@minikube:~# env # Some values have been removed as they're not relevant for this issue
SHELL=/bin/bash
PWD=/root
XDG_SESSION_TYPE=tty
HOME=/root
LANG=es_ES.UTF-8
TERM=xterm-256color
SHLVL=2
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
SSH_TTY=/dev/pts/1
_=/usr/bin/env
root@minikube:~# cat install_script.sh # How I installed everything
# Install Docker (latest version)
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
# Install kubectl (latest version, no hypervisor)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubectl
# Install Minikube (latest version)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
cp minikube /usr/local/bin/
rm minikube
Docker, Minikube and kubectl versions:
root@minikube:~# docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:50 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:44:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
root@minikube:~# minikube version
minikube version: v1.12.2
commit: be7c19d391302656d27f1f213657d925c4e1cfc2-dirty
root@minikube:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Steps to reproduce the issue:
1. Start minikube with --vm-driver=none and (through --extra-config) kubeadm init --skip-phases=addon/kube-proxy, as described in the official docs. This works as expected the first time. Output is saved as minikube-start-ok.txt.
root@minikube:~# minikube start --vm-driver=none --extra-config=kubeadm.skip-phases=addon/kube-proxy --alsologtostderr
2. kube-proxy hasn't been created. This is also expected.
root@minikube:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-24knd 0/1 Running 0 3m22s
kube-system etcd-minikube 1/1 Running 0 3m21s
kube-system kube-apiserver-minikube 1/1 Running 0 3m21s
kube-system kube-controller-manager-minikube 1/1 Running 0 3m21s
kube-system kube-scheduler-minikube 1/1 Running 0 3m21s
kube-system storage-provisioner 0/1 CrashLoopBackOff 3 3m28s
3. Run minikube logs and save that output as minikube-logs-ok.txt.
root@minikube:~# minikube logs
4. Stop minikube.
root@minikube:~# minikube stop
✋ Stopping node "minikube" ...
🛑 1 nodes stopped.
5. Start minikube again with the same command; this time kube-proxy is created. Output is saved as minikube-start-failed.txt.
root@minikube:~# minikube start --vm-driver=none --extra-config=kubeadm.skip-phases=addon/kube-proxy --alsologtostderr
6. kube-proxy shouldn't have been created, as expected in step 2, but it is there:
root@minikube:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-24knd 1/1 Running 1 18m
kube-system etcd-minikube 1/1 Running 1 18m
kube-system kube-apiserver-minikube 1/1 Running 1 18m
kube-system kube-controller-manager-minikube 1/1 Running 1 18m
kube-system kube-proxy-tbj29 1/1 Running 0 2m22s
kube-system kube-scheduler-minikube 1/1 Running 1 18m
kube-system storage-provisioner 1/1 Running 8 19m
7. Run minikube logs and save that output as minikube-logs-failed.txt.
root@minikube:~# minikube logs
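As an extra check beyond the original capture (a sketch; the DaemonSet name kube-proxy is the kubeadm default), you can query the DaemonSet directly to see whether the skip-phases setting survived the restart:

# After step 2 this should report NotFound; after step 6 it comes back,
# showing the extra-config was dropped on restart.
root@minikube:~# kubectl -n kube-system get daemonset kube-proxy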
More details:
rm -rf ~/.minikube/cache
minikube delete
Full output of failed command:
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command:
If anyone is interested in fixing this, let me know, and I would be happy to help them.
My recommendation on how to get started is:
First, check that the new options are being written to $HOME/.minikube/profiles/minikube/config.json. My best guess is that this isn't happening for some reason, perhaps in an attempt to preserve the previous configuration without applying the new one. The field is called ExtraOptions (see the sketch after this list).
Inspect the output of minikube start --alsologtostderr -v=1 for hints. For instance, there is code in the kubeadm bootstrapper that is supposed to detect that the configuration has changed and reset the cluster.
Search the code for any special handling of ExtraOptions - it only has 15 mentions, so it should be easy to see which might be wrong: https://github.com/kubernetes/minikube/search?l=Go&q=ExtraOptions
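A minimal sketch of that first check, assuming jq is installed and the profile is named minikube (the .KubernetesConfig.ExtraOptions path is my reading of the profile schema and may need adjusting):

# Dump the persisted extra options from the profile config; if the new
# flags are missing here, they were never recorded on the restart.
jq '.KubernetesConfig.ExtraOptions' "$HOME/.minikube/profiles/minikube/config.json"

# From a checkout of the minikube repo, list every place the code touches
# ExtraOptions, mirroring the GitHub search linked above.
git grep -n ExtraOptions -- '*.go'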
Hi @tstromberg Thank you for the hints! I quickly checked and as per https://github.com/kubernetes/minikube/commit/bee681559b9e9fd2a587db62ec1b5042f32f02a8#diff-0e864ab4025634664724909a47c34fbcae246ad52307eaaaa58153f0b256a8b4L381, the extra-config is not copied over - is there a specific reason for it?
Also, cc @medyagh.