Minikube: dashboard: Add Node condition check (DiskPressure and pod status checks) before opening

Created on 2 Nov 2019 · 32 comments · Source: kubernetes/minikube

When I run it, it always hangs forever and sometimes throws a 503.

garretsidzaka@$$$$$:~$ sudo minikube dashboard

Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...

^C
garretsidzaka@$$$$$:~$

garretsidzaka@$$$$$$:/usr/bin$ minikube version
minikube version: v1.4.0
commit: 7969c25
Ubuntu VM 18.04
vm-driver=none

co/dashboard co/none-driver help wanted kind/feature priority/important-longterm

Most helpful comment

Has this issue been solved? I got the same problem.

All 32 comments

ping?

@GarretSidzaka
Thank you for sharing your experience! If you don't mind, could you please provide:

  • The exact command-lines used, so that we may replicate the issue
  • The full output of the command that failed
  • The full output of the "minikube logs" command
  • Which operating system version was used

This will help us isolate the problem further. Thank you!

Additionally, I wonder:

Do you use a corporate network, VPN, or proxy?

Bullet one:
sudo minikube start
sudo minikube dashboard

Bullet two:
garretsidzaka@$$$$$:~$ sudo minikube dashboard

Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...

^C
garretsidzaka@$$$$$:~$

Bullet three:
garretsidzaka@cloudstack:/$ sudo minikube logs
*
X Error getting config: stat /home/garretsidzaka/.minikube/profiles/minikube/config.json: no such file or directory
*

Bullet four:
Ubuntu 18.04.3

The answer to your last question is no.

@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8

@GarretSidzaka I am curious: when you said it hangs forever, did you mean the terminal is stuck at this?

medya@~/workspace/minikube (clean_cron) $ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:65504/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

If that is the case, then that is the expected behaviour: minikube will block there and run a web server so that you can access the dashboard in your browser.
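For what it's worth, if you only need the URL (for example on a headless machine), the --url flag should print the proxy URL instead of trying to open a browser, while still blocking to keep the proxy alive:

minikube dashboard --url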

@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-66-generic x86_64)

6 packages can be updated.
0 updates are security updates.

Last login: Mon Nov 4 23:29:43 2019 from 71.209.166.96
garretsidzaka@cloudstack:~$ sudo minikube start --alsologtostderr -v=8
[sudo] password for garretsidzaka:
I1106 00:34:13.321761 183139 notify.go:125] Checking for updates...
I1106 00:34:13.502222 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/last_update_check" with filemode -rw-r--r--

I1106 00:34:13.503829 183139 start.go:236] hostinfo: {"hostname":"cloudstack","uptime":931694,"bootTime":1572068759,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-66-generic","virtualizationSystem":"","virtualizationRole":"","hostid":"73fa3e87-061f-4111-9c29-9a2074fc4bec"}
I1106 00:34:13.504201 183139 start.go:246] virtualization:
! minikube v1.4.0 on Ubuntu 18.04
I1106 00:34:13.504790 183139 profile.go:66] Saving config to /home/garretsidzaka/.minikube/profiles/minikube/config.json ...
I1106 00:34:13.504883 183139 cache_images.go:295] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1106 00:34:13.504927 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1106 00:34:13.504952 183139 cache_images.go:297] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 74.699µs
I1106 00:34:13.505046 183139 cache_images.go:82] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1106 00:34:13.505085 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
I1106 00:34:13.505125 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
I1106 00:34:13.505139 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 completed in 66.699µs
I1106 00:34:13.505171 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
I1106 00:34:13.505210 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
I1106 00:34:13.505287 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
I1106 00:34:13.505300 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 completed in 94.698µs
I1106 00:34:13.505363 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
I1106 00:34:13.505403 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
I1106 00:34:13.505445 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
I1106 00:34:13.505460 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 completed in 60.799µs
I1106 00:34:13.505511 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
I1106 00:34:13.505550 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
I1106 00:34:13.505593 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
I1106 00:34:13.505724 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 completed in 178.797µs
I1106 00:34:13.505797 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
I1106 00:34:13.505751 183139 cache_images.go:295] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1106 00:34:13.505905 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1106 00:34:13.505950 183139 cache_images.go:297] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 204.497µs
I1106 00:34:13.505991 183139 cache_images.go:82] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
I1106 00:34:13.505796 183139 cache_images.go:295] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
I1106 00:34:13.506091 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 exists
I1106 00:34:13.506134 183139 cache_images.go:297] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 completed in 354.195µs
I1106 00:34:13.506175 183139 cache_images.go:82] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 succeeded
I1106 00:34:13.505646 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1106 00:34:13.506260 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1106 00:34:13.506301 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 654.591µs
I1106 00:34:13.505666 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1106 00:34:13.506373 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1106 00:34:13.506390 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 729.289µs
I1106 00:34:13.506406 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1106 00:34:13.505715 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1106 00:34:13.505775 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I1106 00:34:13.506562 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 exists
I1106 00:34:13.506581 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 completed in 813.489µs
I1106 00:34:13.506597 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 succeeded
I1106 00:34:13.505785 183139 cache_images.go:295] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1106 00:34:13.506642 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1106 00:34:13.506656 183139 cache_images.go:297] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 872.988µs
I1106 00:34:13.506662 183139 cache_images.go:82] CacheImage k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1106 00:34:13.505622 183139 cache_images.go:295] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1106 00:34:13.506421 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1106 00:34:13.506690 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1106 00:34:13.506719 183139 cache_images.go:297] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 1.094384ms
I1106 00:34:13.506734 183139 cache_images.go:82] CacheImage k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1106 00:34:13.506449 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1106 00:34:13.506764 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 1.054385ms
I1106 00:34:13.506773 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1106 00:34:13.506785 183139 cache_images.go:89] Successfully cached all images.
I1106 00:34:13.518345 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/profiles/minikube/config.json" with filemode -rw-------
I1106 00:34:13.518649 183139 cluster.go:93] Machine does not exist... provisioning new machine
I1106 00:34:13.518671 183139 cluster.go:94] Provisioning machine with config: {KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.4.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true}

  • Running on localhost (CPUs=4, Memory=8027MB, Disk=74576MB) ...
  • OS release is Ubuntu 18.04.3 LTS
    I1106 00:34:13.532688 183139 profile.go:66] Saving config to /home/garretsidzaka/.minikube/profiles/minikube/config.json ...
    I1106 00:34:13.532744 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/profiles/minikube/config.json.tmp098087967" with filemode -rw-------
    I1106 00:34:13.532996 183139 exec_runner.go:40] Run: sudo systemctl start docker
    I1106 00:34:13.548182 183139 exec_runner.go:51] Run with output: docker version --format '{{.Server.Version}}'
  • Preparing Kubernetes v1.16.0 on Docker 18.09.7 ...
    I1106 00:34:14.479840 183139 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:}
    I1106 00:34:14.480318 183139 settings.go:132] Updating kubeconfig: /home/garretsidzaka/.kube/config
    I1106 00:34:14.493753 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.kube/config" with filemode -rw-------

    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf

      I1106 00:34:14.494246 183139 cache_images.go:95] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 kubernetesui/dashboard:v2.0.0-beta4 k8s.gcr.io/kube-addon-manager:v9.0.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1]

      I1106 00:34:14.494385 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1

      I1106 00:34:14.494399 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0

      I1106 00:34:14.494418 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0

      I1106 00:34:14.494455 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0

      I1106 00:34:14.494468 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 -> /var/lib/minikube/images/kube-scheduler_v1.16.0

      I1106 00:34:14.494485 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2

      I1106 00:34:14.494498 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2

      I1106 00:34:14.494510 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 -> /var/lib/minikube/images/coredns_1.6.2

      I1106 00:34:14.494515 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 -> /var/lib/minikube/images/kube-addon-manager_v9.0.2

      I1106 00:34:14.494534 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 -> /var/lib/minikube/images/storage-provisioner_v1.8.1

      I1106 00:34:14.494473 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4

      I1106 00:34:14.494462 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13

      I1106 00:34:14.514574 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13

      I1106 00:34:14.494487 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13

      I1106 00:34:14.514717 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13

      I1106 00:34:14.494401 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0

      I1106 00:34:14.514816 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 -> /var/lib/minikube/images/kube-apiserver_v1.16.0

      I1106 00:34:14.494433 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 -> /var/lib/minikube/images/kube-proxy_v1.16.0

      I1106 00:34:14.494389 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0

      I1106 00:34:14.515105 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 -> /var/lib/minikube/images/kube-controller-manager_v1.16.0

      I1106 00:34:14.494388 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13

      I1106 00:34:14.515270 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13

      I1106 00:34:14.494447 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1

      I1106 00:34:14.515408 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1

      I1106 00:34:14.494475 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 -> /var/lib/minikube/images/etcd_3.3.15-0

      I1106 00:34:14.514519 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 -> /var/lib/minikube/images/dashboard_v2.0.0-beta4

      I1106 00:34:14.753211 183139 docker.go:97] Loading image: /var/lib/minikube/images/pause_3.1

      I1106 00:34:14.753251 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/pause_3.1

      I1106 00:34:16.322700 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache

      I1106 00:34:16.322739 183139 docker.go:97] Loading image: /var/lib/minikube/images/coredns_1.6.2

      I1106 00:34:16.322786 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/coredns_1.6.2

      I1106 00:34:16.585207 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 from cache

      I1106 00:34:16.585250 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13

      I1106 00:34:16.585276 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13

      I1106 00:34:16.806829 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 from cache

      I1106 00:34:16.806896 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13

      I1106 00:34:16.806915 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13

      I1106 00:34:17.004999 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache

      I1106 00:34:17.005077 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13

      I1106 00:34:17.005095 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13

      I1106 00:34:17.171361 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 from cache

      I1106 00:34:17.171401 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0.2

      I1106 00:34:17.171425 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0.2

      I1106 00:34:17.373411 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 from cache

      I1106 00:34:17.373454 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.0

      I1106 00:34:17.373519 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.0

      I1106 00:34:17.538912 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 from cache

      I1106 00:34:17.538959 183139 docker.go:97] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1

      I1106 00:34:17.538972 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1

      I1106 00:34:17.727837 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache

      I1106 00:34:17.727878 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.0

      I1106 00:34:17.727917 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.16.0

      I1106 00:34:17.928240 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 from cache

      I1106 00:34:17.928280 183139 docker.go:97] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-beta4

      I1106 00:34:17.928296 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/dashboard_v2.0.0-beta4

      I1106 00:34:18.135040 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 from cache

      I1106 00:34:18.135083 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.0

      I1106 00:34:18.135098 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.0

      I1106 00:34:18.364544 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 from cache

      I1106 00:34:18.364583 183139 docker.go:97] Loading image: /var/lib/minikube/images/etcd_3.3.15-0

      I1106 00:34:18.364604 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/etcd_3.3.15-0

      I1106 00:34:18.962958 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 from cache

      I1106 00:34:18.963000 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-apiserver_v1.16.0

      I1106 00:34:18.963008 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.0

      I1106 00:34:19.223237 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 from cache

      I1106 00:34:19.223316 183139 cache_images.go:119] Successfully loaded all cached images.

      I1106 00:34:19.223360 183139 cache_images.go:120] LoadImages end

      I1106 00:34:19.223592 183139 kubeadm.go:610] kubelet v1.16.0 config:

      [Unit]

      Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf

[Install]
I1106 00:34:19.223625 183139 exec_runner.go:40] Run: pgrep kubelet && sudo systemctl stop kubelet
W1106 00:34:19.257686 183139 kubeadm.go:615] unable to stop kubelet: running command: pgrep kubelet && sudo systemctl stop kubelet: exit status 1
I1106 00:34:19.258065 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm
I1106 00:34:19.258089 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubeadm -> /var/lib/minikube/binaries/v1.16.0/kubeadm
I1106 00:34:19.258075 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I1106 00:34:19.258211 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubelet -> /var/lib/minikube/binaries/v1.16.0/kubelet
I1106 00:34:19.862786 183139 exec_runner.go:40] Run: sudo systemctl daemon-reload && sudo systemctl start kubelet
I1106 00:34:20.089159 183139 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:}
I1106 00:34:20.089310 183139 certs.go:79] Setting up /home/garretsidzaka/.minikube for IP: 66.55.156.94
I1106 00:34:20.089412 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/client.crt with IP's: []
I1106 00:34:20.099224 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/client.crt ...
I1106 00:34:20.099264 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.crt" with filemode -rw-r--r--
I1106 00:34:20.099549 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/client.key ...
I1106 00:34:20.099565 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.key" with filemode -rw-------
I1106 00:34:20.099683 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/apiserver.crt with IP's: [66.55.156.94 10.96.0.1 10.0.0.1]
I1106 00:34:20.107698 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/apiserver.crt ...
I1106 00:34:20.107730 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.crt" with filemode -rw-r--r--
I1106 00:34:20.107995 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/apiserver.key ...
I1106 00:34:20.108028 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.key" with filemode -rw-------
I1106 00:34:20.108174 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/proxy-client.crt with IP's: []
I1106 00:34:20.113461 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/proxy-client.crt ...
I1106 00:34:20.114504 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1106 00:34:20.115320 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/proxy-client.key ...
I1106 00:34:20.115358 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.key" with filemode -rw-------
I1106 00:34:20.115979 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1106 00:34:20.116130 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1106 00:34:20.116470 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1106 00:34:20.116543 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1106 00:34:20.116596 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1106 00:34:20.116649 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1106 00:34:20.116676 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1106 00:34:20.116701 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1106 00:34:20.125897 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1106 00:34:20.131832 183139 exec_runner.go:40] Run: which openssl
I1106 00:34:20.133923 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I1106 00:34:20.142240 183139 exec_runner.go:51] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I1106 00:34:20.167041 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/b5213941.0'

  • Pulling images ...
    I1106 00:34:20.173568 183139 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
  • Launching Kubernetes ...
    I1106 00:34:25.036103 183139 kubeadm.go:232] StartCluster: {KubernetesVersion:v1.16.0 NodeIP:66.55.156.94 NodePort:8443 NodeName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:true EnableDefaultCNI:false}
    I1106 00:34:25.036210 183139 exec_runner.go:51] Run with output: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
    I1106 00:34:48.752060 183139 kubeadm.go:273] Configuring cluster permissions ...
    I1106 00:34:48.755735 183139 kapi.go:58] client config for minikube: &rest.Config{Host:"https://66.55.156.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/garretsidzaka/.minikube/client.crt", KeyFile:"/home/garretsidzaka/.minikube/client.key", CAFile:"/home/garretsidzaka/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)}, UserAgent:"", Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x159bb40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
    I1106 00:34:48.806431 183139 util.go:67] duration metric: took 48.179105ms to wait for elevateKubeSystemPrivileges.
    I1106 00:34:48.806536 183139 exec_runner.go:51] Run with output: cat /proc/$(pgrep kube-apiserver)/oom_adj
    I1106 00:34:48.822242 183139 kubeadm.go:299] apiserver oom_adj: -16
    I1106 00:34:48.822378 183139 kubeadm.go:234] StartCluster complete in 23.786197888s
  • Configuring local host environment ...
    *
    ! The 'none' driver provides limited isolation and may reduce system security and reliability.
    ! For more information, see:

    • https://minikube.sigs.k8s.io/docs/reference/drivers/none/

      *

      ! kubectl and minikube configuration will be stored in /home/garretsidzaka

      ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

      *

    • sudo mv /home/garretsidzaka/.kube /home/garretsidzaka/.minikube $HOME

    • sudo chown -R $USER $HOME/.kube $HOME/.minikube

      *

  • This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
  • Waiting for: apiserverI1106 00:34:48.822799 183139 kubeadm.go:454] Waiting for apiserver process ...
    I1106 00:34:48.822811 183139 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    I1106 00:34:48.836640 183139 kubeadm.go:469] Waiting for apiserver to port healthy status ...
    I1106 00:34:48.843098 183139 kubeadm.go:156] https://66.55.156.94:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 06 Nov 2019 00:34:48 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00023abc0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003db500 TLS:0xc0000da9a0}
    I1106 00:34:48.843155 183139 kubeadm.go:472] apiserver status: Running, err:
    I1106 00:34:48.843196 183139 kubeadm.go:451] duration metric: took 20.397306ms to wait for apiserver status ...
    I1106 00:34:48.843910 183139 kapi.go:58] client config for minikube: &rest.Config{Host:"https://66.55.156.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/garretsidzaka/.minikube/client.crt", KeyFile:"/home/garretsidzaka/.minikube/client.key", CAFile:"/home/garretsidzaka/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)}, UserAgent:"", Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x159bb40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
    proxyI1106 00:34:48.855030 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
    I1106 00:34:48.888057 183139 kapi.go:85] Found 0 Pods for label selector k8s-app=kube-proxy
    I1106 00:34:54.893546 183139 kapi.go:85] Found 1 Pods for label selector k8s-app=kube-proxy
    I1106 00:34:54.893719 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:55.427204 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:55.893749 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:56.391819 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:56.948005 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:57.435927 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:57.891800 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:58.396266 183139 kapi.go:107] duration metric: took 9.540858918s to wait for k8s-app=kube-proxy ...
    etcdI1106 00:34:58.396347 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=etcd" ...
    I1106 00:34:58.413918 183139 kapi.go:85] Found 0 Pods for label selector component=etcd
    I1106 00:36:02.416249 183139 kapi.go:85] Found 1 Pods for label selector component=etcd
    I1106 00:36:02.416274 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:02.918124 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:03.416366 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:03.917137 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:04.417727 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:04.917438 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:05.416793 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:05.916535 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:06.416567 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:06.916571 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:07.418177 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:07.919132 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:08.416704 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:08.916335 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:09.416439 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:09.917663 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:10.417931 183139 kapi.go:107] duration metric: took 1m12.021584246s to wait for component=etcd ...
    schedulerI1106 00:36:10.418024 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
    I1106 00:36:10.424973 183139 kapi.go:85] Found 1 Pods for label selector component=kube-scheduler
    I1106 00:36:10.425006 183139 kapi.go:107] duration metric: took 6.9829ms to wait for component=kube-scheduler ...
    controllerI1106 00:36:10.425077 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
    I1106 00:36:10.430103 183139 kapi.go:85] Found 1 Pods for label selector component=kube-controller-manager
    I1106 00:36:10.430133 183139 kapi.go:107] duration metric: took 5.055327ms to wait for component=kube-controller-manager ...
    dnsI1106 00:36:10.430164 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
    I1106 00:36:10.433305 183139 kapi.go:85] Found 2 Pods for label selector k8s-app=kube-dns
    I1106 00:36:10.433334 183139 kapi.go:107] duration metric: took 3.168055ms to wait for k8s-app=kube-dns ...

@GarretSidzaka I am curious: when you said it hangs forever, did you mean the terminal is stuck at this?

medya@~/workspace/minikube (clean_cron) $ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:65504/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

If that is the case, then that is the expected behaviour: minikube will block there and run a web server so that you can access the dashboard in your browser.

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now:

garretsidzaka@cloudstack:~$ sudo minikube dashboard

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now:

The point of the dashboard is actually a UI experience for Kubernetes; if you don't have a browser, you might not actually need the dashboard.

Do you mind sharing the output of curl for the dashboard URL, run in a separate terminal?

By the way, do you happen to use a VPN or proxies?

The output of this command would be helpful for us in debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now:

The point of the dashboard is actually a UI experience for Kubernetes; if you don't have a browser, you might not actually need the dashboard.

Do you mind sharing the output of curl for the dashboard URL, run in a separate terminal?

By the way, do you happen to use a VPN or proxies?

No unusual network. This is a front-end bridged VM, production style. This network port has a static IANA-assigned IP, not NAT. There is no proxy or VPN. And yes, it's very nice to have this kind of research VM.

The output of this command would be helpful for us in debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

Yes, and when I click such a link (after obviously replacing 127.0.0.1 with the actual IP address), it gives a 503.

attached is the log you requested
teraterm.log

Ping :3

Hey @GarretSidzaka -- looks like there are a couple of related issues (#4352 and #4749).

I see you already commented on #4352, and I'm guessing none of those solutions fixed your issue?

#4749 suggests increasing the memory/CPU allocation for minikube. Perhaps you could give that a try? Please let us know the results of any of these experiments!

The output of this command would be helpful for us in debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

@tstromberg

sudo minikube dashboard --alsologtostderr -v=1

I1113 18:16:48.105064   30332 none.go:257] checking for running kubelet ...
I1113 18:16:48.105097   30332 exec_runner.go:42] (ExecRunner) Run:  systemctl is-active --quiet service kubelet
🤔  Verifying dashboard health ...
I1113 18:16:48.136357   30332 service.go:236] Found service: &Service{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kubernetes-dashboard,GenerateName:,Namespace:kubernetes-dashboard,SelfLink:/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard,UID:50ffb80e-1e61-41a3-8ee6-15fec08c9d0c,ResourceVersion:384,Generation:0,CreationTimestamp:2019-11-13 18:08:36 +0000 GMT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,k8s-app: kubernetes-dashboard,kubernetes.io/minikube-addons: dashboard,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ServiceSpec{Ports:[{ TCP 80 {0 9090 } 0}],Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.8.235,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[],},},}
🚀  Launching proxy ...
I1113 18:16:48.136574   30332 dashboard.go:167] Executing: /usr/bin/kubectl [/usr/bin/kubectl --context minikube proxy --port=0]
I1113 18:16:48.136855   30332 dashboard.go:172] Waiting for kubectl to output host:port ...
I1113 18:16:48.294744   30332 dashboard.go:190] proxy stdout: Starting to serve on 127.0.0.1:43299
🤔  Verifying proxy health ...
I1113 18:16:48.304599   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:48 GMT]] Body:0xc00035d840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa800 TLS:<nil>}
I1113 18:16:49.412722   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:49 GMT]] Body:0xc000297100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ed00 TLS:<nil>}
I1113 18:16:51.579128   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:51 GMT]] Body:0xc0002971c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ee00 TLS:<nil>}
I1113 18:16:54.225172   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:54 GMT]] Body:0xc0003cf940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000486300 TLS:<nil>}
I1113 18:16:57.412075   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:57 GMT]] Body:0xc00035d9c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa900 TLS:<nil>}
I1113 18:17:02.101728   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:17:02 GMT]] Body:0xc0003cfa80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aaa00 TLS:<nil>}
I1113 18:17:11.123045   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:

And if I hit the URL which is giving 503, I get:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"http:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

Thanks for the update @sebinsua

Based on what you've shared, I now suspect that part of the issue may be that we are attempting to check the URL before checking whether the pod is actually running. We still need to investigate why the dashboard service isn't healthy, though. Do you mind helping us root-cause this?

Once you see the dashboard hanging at "Verifying proxy health ...", can you get the output of the following and share it with us?

  • kubectl get po -n kubernetes-dashboard --show-labels
  • kubectl describe po -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard
  • kubectl logs -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard

Depending on what you share, I believe that part of the solution may be to insert a new health check that blocks until the pod is in 'Running' state, by calling client.CoreV1().Pods(ns).List() and checking that pod.Status.Phase == core.PodRunning before checking for the port here:

https://github.com/kubernetes/minikube/blob/72016f1012cdd6157a9df74e88b4fbe89ecc1e4f/cmd/minikube/cmd/dashboard.go#L112
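For illustration, a minimal sketch of what that check might look like (not an actual patch; it assumes the pre-1.17 client-go List signature without a context argument, plus the apimachinery wait helper, and the function name is made up):

// Sketch of a pre-flight check for `minikube dashboard`: block until every
// pod matching the selector reports phase Running before probing the URL.
package cmd

import (
	"time"

	core "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRunning(client kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // not scheduled yet, keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != core.PodRunning {
				return false, nil // e.g. stuck Pending behind a disk-pressure taint
			}
		}
		return true, nil
	})
}

Called with ns "kubernetes-dashboard" and selector "k8s-app=kubernetes-dashboard", a check like this would surface a Pending pod instead of looping on 503s.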

What the

output.txt

These results were taken after the bug was replicated, in a separate SSH terminal. At the same time, the proxy message was hanging in the other SSH window.

This is what was eventually output in the main window:
X http://127.0.0.1:42337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503
garretsidzaka@cloudstack:~$

Interesting. Do you mind adding the output of 'kubectl describe node' as well? Thanks!

sudo kubectl describe node
[sudo] password for garretsidzaka:
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 06 Nov 2019 00:34:43 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 66.55.156.94
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 76366628Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8220652Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 70379484249
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8118252Ki
pods: 110
System Info:
Machine ID: b2bbbdb30f0c427595b8a91758ac298c
System UUID: 73FA3E87-061F-4111-9C29-9A2074FC4BEC
Boot ID: 369c5c7e-06d5-46e9-87f5-b597ebadce65
Kernel Version: 4.15.0-66-generic
OS Image: Ubuntu 18.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.7
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5644d7b6d9-hpm6t 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system coredns-5644d7b6d9-m2rpm 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%) 8d
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-proxy-xcw6z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-m2sn9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard kubernetes-dashboard-57f4cb4545-vkwpj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 755m (18%) 0 (0%)
memory 190Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
garretsidzaka@cloudstack:~$

hi :heart_eyes:

Has this issue been solved? I got the same problem.

thoughts on this issue?

I have the same issue here.

minikube version

minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff

cat /etc/os-release

NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Reopening issue. I have given up hope of making minikube work on Ubuntu, but it seems others are still trying.

# check pods
kubectl -n kubernetes-dashboard get pods

# check pod Events, dashboard-metrics-scraper-7b64584c5c-lf82k is a pod name
kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-7b64584c5c-lf82k

If you get the error below:

Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.22:53724->192.168.64.1:53: read: connection refused

the problem may be related to https://github.com/kubernetes/minikube/issues/3036

You can check whether the machine has software listening on port 53.

If yes, please stop that software, then restart the minikube cluster.
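One way to check (assuming a systemd-based Ubuntu host, where systemd-resolved commonly sits on 127.0.0.53:53):

sudo ss -lntup | grep ':53'   # processes listening on TCP/UDP port 53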

@jk2K @GarretSidzaka sorry to hear about the bad experience. Do you mind sharing the minikube version, and also the output of

minikube start --wait=true --alsologtostderr

I'm a newbie.

I have the exact same symptoms: it hangs for a while on the "Verifying proxy health" step and then finally errors out with a 503:

llorllale:~$ minikube dashboard
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
💣  http://127.0.0.1:44627/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503
llorllale:~$ 

There are two pods under the kubernetes-dashboard namespace:

llorllale:~$ kubectl -n kubernetes-dashboard get pods
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-84bfdf55ff-scw95   0/1     Pending   0          25m
kubernetes-dashboard-bc446cc64-5dkxx         0/1     Pending   0          25m
llorllale:~$ 

Describing both of these reveals events with warnings:

llorllale:~$ kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-84bfdf55ff-scw95
  (...removed extra bits for brevity...)
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  <unknown>          default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>          default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
  Warning  FailedScheduling  24m (x8 over 26m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.

Describing my minikube node I see:

llorllale:~$ kubectl describe node minikube
  (...removed extra bits for brevity...)
Taints:             node.kubernetes.io/disk-pressure:NoSchedule

I tried removing the taint:

llorllale:~$ kubectl taint nodes minikube node.kubernetes.io/disk-pressure:NoSchedule-
node/minikube untainted

... but that didn't work: the new pods failed scheduling with the same problem. I'm still left wondering whether this is the issue.
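A plausible explanation (inferred, not confirmed in this thread): the control plane keeps re-applying the node.kubernetes.io/disk-pressure taint for as long as the kubelet reports the DiskPressure condition as True, so deleting the taint by hand won't stick until disk space is actually freed. You can inspect the underlying condition and disk usage directly:

kubectl get node minikube -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
df -h /var/lib/docker   # image storage path for the none driver; adjust for other drivers

Freeing space back below the kubelet's eviction threshold (roughly 85-90% usage by default) is what actually clears the condition and the taint, which would also explain the docker system prune observation below.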

docker system prune may have solved my issue?

I tried minikube dashboard immediately after docker system prune and ran into the same issue. However, a few minutes later, minikube dashboard worked.

I have the same issue, but the two pods under the kubernetes-dashboard namespace are running fine.

Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  25m   default-scheduler      Successfully assigned kubernetes-dashboard/kubernetes-dashboard-696dbcc666-gbgwl to minikube-m03
  Normal  Pulled     25m   kubelet, minikube-m03  Container image "kubernetesui/dashboard:v2.0.0" already present on machine
  Normal  Created    25m   kubelet, minikube-m03  Created container kubernetes-dashboard
  Normal  Started    25m   kubelet, minikube-m03  Started container kubernetes-dashboard
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff-gxdwz to minikube-m04
  Normal  Pulled     26m   kubelet, minikube-m04  Container image "kubernetesui/metrics-scraper:v1.0.2" already present on machine
  Normal  Created    26m   kubelet, minikube-m04  Created container dashboard-metrics-scraper
  Normal  Started    26m   kubelet, minikube-m04  Started container dashboard-metrics-scraper

So I tried port-forward (note that port-forward needs a pod or service name, e.g. kubectl -n kubernetes-dashboard port-forward kubernetes-dashboard-696dbcc666-gbgwl 9090:9090), and I can access the dashboard via http://127.0.0.1:9090/

Changing this to a feature so that users can see why a dashboard deployment is blocked. There are many possibilities outlined here, including DiskPressure.

I support implementing this feature in such a way that for some addons, before "opening" or "enabling", we check the node conditions to give better errors for users.
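As a rough sketch of that idea (illustrative only, not an actual patch; it assumes client-go, and the function name and wiring are made up):

// Sketch: surface unhealthy node conditions (DiskPressure, MemoryPressure,
// PIDPressure) so an addon command can explain up front why its pods may
// never be scheduled.
package cmd

import (
	core "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodePressureErrors(client kubernetes.Interface) ([]string, error) {
	nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var errs []string
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case core.NodeDiskPressure, core.NodeMemoryPressure, core.NodePIDPressure:
				if c.Status == core.ConditionTrue {
					errs = append(errs, n.Name+": "+string(c.Type)+": "+c.Message)
				}
			}
		}
	}
	return errs, nil
}

An empty result would mean the node conditions look healthy; in the DiskPressure case above, a check like this would have reported the cause before the 503 loop ever started.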

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale
