Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment:
Minikube version (use minikube version): minikube version: v0.27.0
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.26.0.iso
The above can be generated with the following commands:
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver":
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json
What happened:
Started minikube as:
minikube start --extra-config=apiserver.authorization-mode=RBAC --extra-config=apiserver.Authentication.OIDC.IssuerURL=http://192.168.99.1:5556/dex --extra-config=apiserver.Authentication.OIDC.UsernameClaim=email --extra-config=apiserver.Authentication.OIDC.ClientID="praveend_client_id"
Initially tried:
minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Authentication.OIDC.IssuerURL=http://192.168.99.1:5556/dex --extra-config=apiserver.Authentication.OIDC.UsernameClaim=email --extra-config=apiserver.Authentication.OIDC.ClientID="praveend_client_id"
Execution of the above command hangs at "Starting cluster components...".
What you expected to happen:
minikube should either start successfully or fail fast with a clear error message, rather than hanging.
How to reproduce it (as minimally and precisely as possible):
Execute the commands mentioned above.
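A distilled reproduction, assuming the mis-cased key alone is enough to trigger the hang (the kubeless report further down suggests it is):
minikube start --extra-config=apiserver.Authorization.Mode=RBAC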
Output of minikube logs (if applicable):
Seeing a lot of API server connection errors:
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.614039 3216 kubelet.go:297] Watching apiserver
May 29 11:22:29 minikube kubelet[3216]: E0529 11:22:29.623338 3216 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 29 11:22:29 minikube kubelet[3216]: E0529 11:22:29.624097 3216 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 29 11:22:29 minikube kubelet[3216]: E0529 11:22:29.625220 3216 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 29 11:22:29 minikube kubelet[3216]: W0529 11:22:29.626754 3216 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.626936 3216 kubelet.go:556] Hairpin mode set to "hairpin-veth"
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.628151 3216 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.628275 3216 client.go:104] Start docker client with request timeout=2m0s
May 29 11:22:29 minikube kubelet[3216]: W0529 11:22:29.632461 3216 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.633371 3216 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.640534 3216 docker_service.go:249] Docker Info: &{ID:ST4P:FUPV:N2DK:DAQL:B2KY:RX6N:VHER:2NX2:BYOY:LISN:FRNR:JOSR Containers:34 ContainersRunning:0 ContainersPaused:0 ContainersStopped:34 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:34 SystemTime:2018-05-29T11:22:29.63713772Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.64 OperatingSystem:Buildroot 2017.11 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42024fa40 NCPU:2 MemTotal:2097205248 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=virtualbox] ExperimentalBuild:false ServerVersion:17.12.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.642257 3216 docker_service.go:262] Setting cgroupDriver to cgroupfs
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.678564 3216 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
May 29 11:22:29 minikube kubelet[3216]: I0529 11:22:29.696587 3216 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.12.1-ce, apiVersion: 1.35.0
May 29 11:22:29 minikube kubelet[3216]: W0529 11:22:29.696733 3216 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Anything else we need to know:
I am testing authentication using dex for the API server and dashboard (in case someone is curious why I am starting minikube with so many parameters):
./bin/dex serve examples/config-prav-dev.yaml
----Logs----
time="2018-05-29T10:21:52Z" level=info msg="config issuer: http://192.168.99.1:5556/dex"
time="2018-05-29T10:21:52Z" level=info msg="config storage: sqlite3"
time="2018-05-29T10:21:52Z" level=info msg="config static client: example-app"
time="2018-05-29T10:21:52Z" level=info msg="config connector: mock"
time="2018-05-29T10:21:52Z" level=info msg="config connector: local passwords enabled"
time="2018-05-29T10:21:52Z" level=info msg="listening (http/telemetry) on 0.0.0.0:5558"
time="2018-05-29T10:21:52Z" level=info msg="listening (http) on 0.0.0.0:5556"
Minikube Version
0.28.0
OS:
System Version: macOS 10.13.5 (17F77)
Kernel Version: Darwin 17.6.0
VM driver:
"DriverName": "virtualbox",
ISO version
"Boot2DockerURL": "~/.minikube/cache/iso/minikube-v0.28.0.iso",
What Happened
I am experiencing the same issue while trying to run the "start-test-environment.sh" script in my kubeless fork. It runs the command:
minikube start --extra-config=apiserver.Authorization.Mode=RBAC --insecure-registry 0.0.0.0/0
During a normal startup without the extra config, the last lines of the .INFO log file are:
I0617 15:51:57.011842 52719 kubernetes.go:119] error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.109:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: net/http: TLS handshake timeout]
I0617 15:51:57.682237 52719 kubernetes.go:124] Found 1 Pods for label selector k8s-app=kube-proxy
But when it starts and hangs, it looks like kube-proxy isn't coming up, because the log just repeats the following line and eventually times out:
I0617 15:58:27.478452 52829 kubernetes.go:119] error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.107:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.99.107:8443: getsockopt: connection refused]
E0617 16:04:16.490370 52829 start.go:299] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
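To check whether the kube-proxy pod ever comes up, you can query it with the same label selector minikube uses (assuming kubectl is pointed at the minikube context):
kubectl -n kube-system get pods -l k8s-app=kube-proxy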
What you expected to happen
Minikube to start normally.
How to Reproduce it
Run the script in this section:
https://github.com/kubeless/kubeless/blob/master/docs/dev-guide.md#bootstrapping-your-local-dev-environment
More digging, and it looks like some of the pods being started on the minikube VM never get past their pause containers?
I looked at journalctl and found:
I0617 21:29:57.129393 2702 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
W0617 21:29:57.149156 2702 pod_container_deletor.go:77] Container "3d23c0ab3943420e1ef06d74986b5593304f4164814dd26eb311556f544dcc2c" not found in pod's containers
I0617 21:29:57.149350 2702 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
W0617 21:29:57.164371 2702 pod_container_deletor.go:77] Container "d76b24fd26ff51dcef2b031207afe310300eb90e1d36b66fcd0ed652c6032a1f" not found in pod's containers
I0617 21:29:57.215868 2702 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
W0617 21:29:57.228672 2702 pod_container_deletor.go:77] Container "a6b3f735592b64294dfef8ba7ea3bae8b6e7f1db5365f5e8a16cd74fc59026be" not found in pod's containers
I0617 21:29:57.230941 2702 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
W0617 21:29:57.234921 2702 pod_container_deletor.go:77] Container "f99109381a091ba07d0dcfe595c109aa6408c772caf052de113ea472321c206c" not found in pod's containers
I0617 21:29:57.285917 2702 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
W0617 21:29:57.294838 2702 pod_container_deletor.go:77] Container "b72fc405ac79f6cbedd864b8944e6e873f7e32aca5770cba83bdf37524a340d1" not found in pod's containers
And after checking with docker:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
464a2904c474 k8s.gcr.io/etcd-amd64 "etcd --peer-key-fil…" 25 minutes ago Up 25 minutes k8s_etcd_etcd-minikube_kube-system_781e4d3870902862f00fdc9781399590_0
2f27273ec362 k8s.gcr.io/kube-scheduler-amd64 "kube-scheduler --ad…" 26 minutes ago Up 26 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
af8bc0dddfb7 k8s.gcr.io/kube-controller-manager-amd64 "kube-controller-man…" 26 minutes ago Up 26 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_68c71b430ddcd3ae7a2bc7138b01d680_0
7819988a16f4 k8s.gcr.io/kube-addon-manager "/opt/kube-addons.sh" 26 minutes ago Up 26 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
a6b3f735592b k8s.gcr.io/pause-amd64:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_etcd-minikube_kube-system_781e4d3870902862f00fdc9781399590_0
f99109381a09 k8s.gcr.io/pause-amd64:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
b72fc405ac79 k8s.gcr.io/pause-amd64:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
3d23c0ab3943 k8s.gcr.io/pause-amd64:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-controller-manager-minikube_kube-system_68c71b430ddcd3ae7a2bc7138b01d680_0
d76b24fd26ff k8s.gcr.io/pause-amd64:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-apiserver-minikube_kube-system_78ec8e6742d853539d87e347858cc731_0
So, back in journalctl, I believe I can see the Docker messages for those containers:
Jun 17 21:29:56 minikube dockerd[2370]: time="2018-06-17T21:29:56Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/d76b24fd26ff51dcef2b031207afe310300eb90e1d36b66fcd0ed652c6032a1f/shim.sock" debug=false module="containerd/tasks" pid=2795
Jun 17 21:29:56 minikube dockerd[2370]: time="2018-06-17T21:29:56Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/3d23c0ab3943420e1ef06d74986b5593304f4164814dd26eb311556f544dcc2c/shim.sock" debug=false module="containerd/tasks" pid=2796
Jun 17 21:29:56 minikube dockerd[2370]: time="2018-06-17T21:29:56Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/b72fc405ac79f6cbedd864b8944e6e873f7e32aca5770cba83bdf37524a340d1/shim.sock" debug=false module="containerd/tasks" pid=2817
Jun 17 21:29:56 minikube dockerd[2370]: time="2018-06-17T21:29:56Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/a6b3f735592b64294dfef8ba7ea3bae8b6e7f1db5365f5e8a16cd74fc59026be/shim.sock" debug=false module="containerd/tasks" pid=2842
Jun 17 21:29:56 minikube dockerd[2370]: time="2018-06-17T21:29:56Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/f99109381a091ba07d0dcfe595c109aa6408c772caf052de113ea472321c206c/shim.sock" debug=false module="containerd/tasks" pid=2848
Jun 17 21:30:09 minikube dockerd[2370]: time="2018-06-17T21:30:09Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/6ccc2198802f2430e203bfdb84430bc9c2702c11edfe1705b420ab686d2062bd/shim.sock" debug=false module="containerd/tasks" pid=2999
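For reference, these entries can be pulled from inside the VM with something like the following (an assumption that the Docker daemon runs under a systemd unit named docker on this ISO):
minikube ssh 'sudo journalctl -u docker --no-pager | grep docker-containerd-shim'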
I'm just going to keep commenting as I find interesting things since I can reproduce this fairly reliably.
I found the cause after looking into the logs for the kube-controller-manager:
{"log":"Error: unknown flag: --Authorization.Mode\n","stream":"stderr","time":"2018-06-17T22:16:35.134161966Z"}
And I was able to make it work by editing the script to:
minikube start --extra-config=apiserver.authorization-mode=RBAC --insecure-registry 0.0.0.0/0
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Any news on this? :crying_cat_face:
This is just a documentation error and change; setting the option
minikube start [...] --extra-config=apiserver.authorization-mode=RBAC
passes the correct argument to the apiserver; however, I'm now seeing it start with a duplicate entry for the authorization mode.
This is how the apiserver is starting after specifying the authorization mode once (as above):
root 3634 3619 45 11:01 ? 00:00:14 kube-apiserver --oidc-username-claim=email --[...] --authorization-mode=RBAC --oidc-client-id=kubernetes --oidc-groups-claim=user-realm-role [...] --advertise-address=192.168.99.100 --authorization-mode=Node,RBAC [...]
You can verify this by logging into minikube (minikube ssh). If you specified an incorrect option (--extra-config=apiserver.Authorization-..), you should see a stopped container under docker ps -a, and you can review why it stopped with docker logs <name>.
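Concretely, the check described above looks something like this (container IDs and names will differ):
minikube ssh
docker ps -a --filter status=exited | grep kube-apiserver
docker logs <container-id>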
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Closing in preference to the more general issue of revealing errors when invalid flag combinations are supplied to the apiserver: #3532
You can see the flag error by running:
minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})' | egrep -i 'error:|fatal:'
In case anyone gets to this issue looking for the right OIDC settings to pass to the minikube API server, these are:
minikube start \
--extra-config=apiserver.authorization-mode=RBAC \
--extra-config=apiserver.oidc-issuer-url=https://example.com \
--extra-config=apiserver.oidc-username-claim=email \
--extra-config=apiserver.oidc-client-id=kubernetes-local
Basically, there is no need to use capitalized keys or the Authentication. prefix anymore. I'll look into a PR to add this to the docs, as these settings are a bit hard to find.
Edit: the PR has been merged and this has now been added to the docs.
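To confirm the settings took effect, one way is to inspect the running apiserver process inside the VM; the output should resemble the process line quoted in the earlier comment:
minikube ssh 'ps -ef | grep [k]ube-apiserver'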