After installing minikube, the apiserver is not coming up (apiserver: Stopped)

minikube v1.14.1 with the VirtualBox driver on Windows 10 Home: the host and kubelet report Running, but kube-apiserver and kube-controller-manager keep crash-looping, so minikube start times out waiting for a healthy API server. Full output from minikube status, minikube start, and minikube logs is below.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured
prabs@LAPTOP-HQ5LK73I ~
$ pwd
/home/prabs
prabs@LAPTOP-HQ5LK73I ~
$ kubectl
kubectl controls the Kubernetes cluster manager.
Find more information at:
https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and
expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by
resources and label selector
Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication
Controller
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory/Storage) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization
Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource using strategic merge patch
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many
resources.
convert Convert config files between different API versions
kustomize Build a kustomization target from a directory or a remote url.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or
zsh)
Other Commands:
alpha Commands for features in alpha
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of
"group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
prabs@LAPTOP-HQ5LK73I ~
$ minikube
minikube provisions and manages local Kubernetes clusters optimized for
development workflows.
Basic Commands:
start Starts a local Kubernetes cluster
status Gets the status of a local Kubernetes cluster
stop Stops a running local Kubernetes cluster
delete Deletes a local Kubernetes cluster
dashboard Access the Kubernetes dashboard running within the minikube
cluster
pause pause Kubernetes
unpause unpause Kubernetes
Images Commands:
docker-env Configure environment to use minikube's Docker daemon
podman-env Configure environment to use minikube's Podman service
cache Add, delete, or push a local image into minikube
Configuration and Management Commands:
addons Enable or disable a minikube addon
config Modify persistent configuration values
profile Get or list the current profiles (clusters)
update-context Update kubeconfig in case of an IP or port change
Networking and Connectivity Commands:
service Returns a URL to connect to a service
tunnel Connect to LoadBalancer services
Advanced Commands:
mount Mounts the specified directory into minikube
ssh Log into the minikube environment (for debugging)
kubectl Run a kubectl binary matching the cluster version
node Add, remove, or list additional nodes
Troubleshooting Commands:
ssh-key Retrieve the ssh identity key path of the specified cluster
ip Retrieves the IP address of the running cluster
logs Returns logs to debug a local Kubernetes cluster
update-check Print current and latest version number
version Print the version of minikube
Other Commands:
completion Generate command completion for a shell
Use "minikube --help" for more information about a given command.
prabs@LAPTOP-HQ5LK73I ~
$ minikube start
- minikube v1.14.1 on Microsoft Windows 10 Home Single Language 10.0.19041 Build 19041
- Using the virtualbox driver based on existing profile
- Starting control plane node minikube in cluster minikube
- virtualbox "minikube" VM is missing, will recreate.
- Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
- Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
- Verifying Kubernetes components...
! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.99.103:8443: connectex: No connection could be made because the target machine actively refused it.]
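The "connection refused" on 192.168.99.103:8443 means nothing is listening on the apiserver port while the container crash-loops. For reference, a sketch of how the failing container can be inspected directly from inside the VM (the container ID placeholder needs to be filled in from the docker ps output):
$ minikube ssh
$ docker ps -a | grep kube-apiserver          # container keeps exiting and restarting
$ docker logs <kube-apiserver container ID>   # should show why it exits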
X Problems detected in kubelet:
- Oct 25 06:13:16 minikube kubelet[4555]: E1025 06:13:16.082132 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:21 minikube kubelet[4555]: E1025 06:13:21.705715 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:24 minikube kubelet[4555]: E1025 06:13:24.364639 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 10s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:13:48 minikube kubelet[4555]: E1025 06:13:48.709706 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
X Problems detected in kubelet:
- Oct 25 06:13:48 minikube kubelet[4555]: E1025 06:13:48.709706 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:57 minikube kubelet[4555]: E1025 06:13:57.049762 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:03 minikube kubelet[4555]: E1025 06:14:03.637975 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:05 minikube kubelet[4555]: E1025 06:14:05.730360 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
X Problems detected in kubelet:
- Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:13:57 minikube kubelet[4555]: E1025 06:13:57.049762 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:03 minikube kubelet[4555]: E1025 06:14:03.637975 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:05 minikube kubelet[4555]: E1025 06:14:05.730360 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:14 minikube kubelet[4555]: E1025 06:14:14.725374 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:14:14 minikube kubelet[4555]: E1025 06:14:14.725374 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:37 minikube kubelet[4555]: E1025 06:14:37.775172 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:38 minikube kubelet[4555]: E1025 06:14:38.829145 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:41 minikube kubelet[4555]: E1025 06:14:41.701903 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:14:38 minikube kubelet[4555]: E1025 06:14:38.829145 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:41 minikube kubelet[4555]: E1025 06:14:41.701903 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:54 minikube kubelet[4555]: E1025 06:14:54.725615 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:57 minikube kubelet[4555]: E1025 06:14:57.735687 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
]
- Enabled addons:
X Problems detected in kubelet:
- Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:14:54 minikube kubelet[4555]: E1025 06:14:54.725615 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:14:57 minikube kubelet[4555]: E1025 06:14:57.735687 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:15:06 minikube kubelet[4555]: E1025 06:15:06.725534 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:15:12 minikube kubelet[4555]: E1025 06:15:12.725982 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:15:06 minikube kubelet[4555]: E1025 06:15:06.725534 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:15:12 minikube kubelet[4555]: E1025 06:15:12.725982 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:15:38 minikube kubelet[4555]: E1025 06:15:38.181491 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:15:41 minikube kubelet[4555]: E1025 06:15:41.702214 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:15:46 minikube kubelet[4555]: E1025 06:15:46.368810 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:15:41 minikube kubelet[4555]: E1025 06:15:41.702214 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:15:46 minikube kubelet[4555]: E1025 06:15:46.368810 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:15:53 minikube kubelet[4555]: E1025 06:15:53.636966 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:15:56 minikube kubelet[4555]: E1025 06:15:56.753041 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:15:53 minikube kubelet[4555]: E1025 06:15:53.636966 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:15:56 minikube kubelet[4555]: E1025 06:15:56.753041 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:11 minikube kubelet[4555]: E1025 06:16:11.727503 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:11 minikube kubelet[4555]: E1025 06:16:11.727503 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:25 minikube kubelet[4555]: E1025 06:16:25.725273 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:25 minikube kubelet[4555]: E1025 06:16:25.725273 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:39 minikube kubelet[4555]: E1025 06:16:39.725255 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:45 minikube kubelet[4555]: E1025 06:16:45.723977 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:39 minikube kubelet[4555]: E1025 06:16:39.725255 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:45 minikube kubelet[4555]: E1025 06:16:45.723977 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:54 minikube kubelet[4555]: E1025 06:16:54.728442 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:59 minikube kubelet[4555]: E1025 06:16:59.731056 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Problems detected in kubelet:
- Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:39 minikube kubelet[4555]: E1025 06:16:39.725255 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:45 minikube kubelet[4555]: E1025 06:16:45.723977 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:16:54 minikube kubelet[4555]: E1025 06:16:54.728442 4555 pod_wo
rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
"StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
b1ef52506bd93c04ce27fa412a22c055)"
- Oct 25 06:16:59 minikube kubelet[4555]: E1025 06:16:59.731056 4555 pod_wo
rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
*
- If the above advice does not help, please let us know:
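Note: start gives up with GUEST_START because the apiserver healthz never became healthy within the 6m0s wait. If the apiserver container logs do not point to anything fixable, the usual last resort would be a clean re-create (not run yet; minikube delete wipes the VM and all cluster state), roughly:
$ minikube delete
$ minikube start --driver=virtualbox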
prabs@LAPTOP-HQ5LK73I ~
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured
prabs@LAPTOP-HQ5LK73I ~
$ minikube logs
- ==> Docker <==
- -- Logs begin at Sun 2020-10-25 06:09:38 UTC, end at Sun 2020-10-25 06:26:06 UTC. --
- Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.020221768Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/66a
583888c5044962da96b721db3188b7c2c9e6873c23aaa160126b0d369f1ee/shim.sock" debug=f
alse pid=3565
- Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.297757565Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec8
b64f4ed5f1d1ce7e81ae2d9f80a7fca90181abd0294e46a55ff04158861ea/shim.sock" debug=f
alse pid=3596
- Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.467490792Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24d
45b930473dab074164da7fd9da8e49c084833e2ba18179100d67a2d059851/shim.sock" debug=f
alse pid=3616
- Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.684569386Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/09e
773dd2f7dc0044b93193fb7c5f8021b1e7e4f8cf4e1c81154747fd17d72ae/shim.sock" debug=f
alse pid=3642
- Oct 25 06:12:02 minikube dockerd[2726]: time="2020-10-25T06:12:02.736296964Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f66
f4eb65b91137193a0348c8f3ec7e060e946c337e04e4e9c3a0f36698fbefe/shim.sock" debug=f
alse pid=3759
- Oct 25 06:12:02 minikube dockerd[2726]: time="2020-10-25T06:12:02.905413703Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee9
9e49f93a5bb07ca1eb8955a5e93ec07589e5f2b31fff936c7319b90aff216/shim.sock" debug=f
alse pid=3770
- Oct 25 06:12:04 minikube dockerd[2726]: time="2020-10-25T06:12:04.570158222Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bd1
4a9b4bd1211b83959d91131fc32478580710c68effdb7ff76c56c232d81cd/shim.sock" debug=f
alse pid=3895
- Oct 25 06:12:10 minikube dockerd[2726]: time="2020-10-25T06:12:10.616190097Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d90
449cdc5368ad3211d3cf09b7bd08e7f165f850f09fa552fbaa35e85772d33/shim.sock" debug=f
alse pid=4079
- Oct 25 06:12:23 minikube dockerd[2726]: time="2020-10-25T06:12:23.360448819Z"
level=info msg="shim reaped" id=f66f4eb65b91137193a0348c8f3ec7e060e946c337e04e4e
9c3a0f36698fbefe
- Oct 25 06:12:23 minikube dockerd[2719]: time="2020-10-25T06:12:23.401378110Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:12:25 minikube dockerd[2726]: time="2020-10-25T06:12:25.729134806Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe2
19357f3760a4d774e04a5d4dd9266f43de704c524a0a7a1cc013d221950d6/shim.sock" debug=f
alse pid=4273
- Oct 25 06:12:25 minikube dockerd[2726]: time="2020-10-25T06:12:25.797715419Z"
level=info msg="shim reaped" id=bd14a9b4bd1211b83959d91131fc32478580710c68effdb7
ff76c56c232d81cd
- Oct 25 06:12:25 minikube dockerd[2719]: time="2020-10-25T06:12:25.810695459Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:12:28 minikube dockerd[2726]: time="2020-10-25T06:12:28.131521661Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20c
d686b405e69a07fae76def6adb84563f0214e2a38692fb7d2b3f4d7b05c43/shim.sock" debug=f
alse pid=4394
- Oct 25 06:12:41 minikube dockerd[2726]: time="2020-10-25T06:12:41.837066776Z"
level=info msg="shim reaped" id=fe219357f3760a4d774e04a5d4dd9266f43de704c524a0a7
a1cc013d221950d6
- Oct 25 06:12:41 minikube dockerd[2719]: time="2020-10-25T06:12:41.845668167Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:12:50 minikube dockerd[2726]: time="2020-10-25T06:12:50.984298125Z"
level=info msg="shim reaped" id=20cd686b405e69a07fae76def6adb84563f0214e2a38692f
b7d2b3f4d7b05c43
- Oct 25 06:12:50 minikube dockerd[2719]: time="2020-10-25T06:12:50.996220647Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:12:58 minikube dockerd[2726]: time="2020-10-25T06:12:58.253293092Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/43f
c764666335600796026d2096707c80663b7804851a8d942bf108ef68dcc6f/shim.sock" debug=f
alse pid=4880
- Oct 25 06:12:58 minikube dockerd[2726]: time="2020-10-25T06:12:58.285637778Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/40b
73082472f7a0363ee9a5b810b14d3656febab1580e2568d12ea9d1664c718/shim.sock" debug=f
alse pid=4885
- Oct 25 06:13:15 minikube dockerd[2726]: time="2020-10-25T06:13:15.034936581Z"
level=info msg="shim reaped" id=43fc764666335600796026d2096707c80663b7804851a8d9
42bf108ef68dcc6f
- Oct 25 06:13:15 minikube dockerd[2719]: time="2020-10-25T06:13:15.068143484Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:13:23 minikube dockerd[2726]: time="2020-10-25T06:13:23.653492942Z"
level=info msg="shim reaped" id=40b73082472f7a0363ee9a5b810b14d3656febab1580e256
8d12ea9d1664c718
- Oct 25 06:13:23 minikube dockerd[2719]: time="2020-10-25T06:13:23.665446470Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:13:32 minikube dockerd[2726]: time="2020-10-25T06:13:32.965214332Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/663
3a5694df78bcd58ecd5d36ad1dc09ac4eb853a66c3bb35603b855003424ac/shim.sock" debug=f
alse pid=5166
- Oct 25 06:13:35 minikube dockerd[2726]: time="2020-10-25T06:13:35.065584112Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b9
5c363880438c4c22645edaaf8cb5465f1195bf30e07642f592b50985e611c/shim.sock" debug=f
alse pid=5206
- Oct 25 06:13:47 minikube dockerd[2726]: time="2020-10-25T06:13:47.918236780Z"
level=info msg="shim reaped" id=6633a5694df78bcd58ecd5d36ad1dc09ac4eb853a66c3bb3
5603b855003424ac
- Oct 25 06:13:47 minikube dockerd[2719]: time="2020-10-25T06:13:47.931432268Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:13:56 minikube dockerd[2726]: time="2020-10-25T06:13:56.072905863Z"
level=info msg="shim reaped" id=9b95c363880438c4c22645edaaf8cb5465f1195bf30e0764
2f592b50985e611c
- Oct 25 06:13:56 minikube dockerd[2719]: time="2020-10-25T06:13:56.084666494Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:14:17 minikube dockerd[2726]: time="2020-10-25T06:14:17.033380643Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/953
6e6a81db94399860e073a9f3d37b165fa6e616be24b642b7aa1312f8d4253/shim.sock" debug=f
alse pid=5604
- Oct 25 06:14:31 minikube dockerd[2726]: time="2020-10-25T06:14:31.228445502Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76b
0e48ce6a09d05922aa184e1bbabe941e76fdded55037b03506acdd80a2294/shim.sock" debug=f
alse pid=5715
- Oct 25 06:14:37 minikube dockerd[2726]: time="2020-10-25T06:14:37.292840239Z"
level=info msg="shim reaped" id=9536e6a81db94399860e073a9f3d37b165fa6e616be24b64
2b7aa1312f8d4253
- Oct 25 06:14:37 minikube dockerd[2719]: time="2020-10-25T06:14:37.303658218Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:14:38 minikube dockerd[2726]: time="2020-10-25T06:14:38.368520285Z"
level=info msg="shim reaped" id=76b0e48ce6a09d05922aa184e1bbabe941e76fdded55037b
03506acdd80a2294
- Oct 25 06:14:38 minikube dockerd[2719]: time="2020-10-25T06:14:38.379326509Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:15:20 minikube dockerd[2726]: time="2020-10-25T06:15:20.174385961Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be4
ceff68755474f27153c4131c085b6b7bd2e773d3a469517b12903571c64b1/shim.sock" debug=f
alse pid=6188
- Oct 25 06:15:28 minikube dockerd[2726]: time="2020-10-25T06:15:28.197777084Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92a
5d6ba34e7d7e867e0454bfff6a9552e18c5c55612af390f7b68bf70ba41a9/shim.sock" debug=f
alse pid=6306
- Oct 25 06:15:37 minikube dockerd[2726]: time="2020-10-25T06:15:37.815812699Z"
level=info msg="shim reaped" id=be4ceff68755474f27153c4131c085b6b7bd2e773d3a4695
17b12903571c64b1
- Oct 25 06:15:37 minikube dockerd[2719]: time="2020-10-25T06:15:37.825983466Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:15:46 minikube dockerd[2726]: time="2020-10-25T06:15:46.253220919Z"
level=info msg="shim reaped" id=92a5d6ba34e7d7e867e0454bfff6a9552e18c5c55612af39
0f7b68bf70ba41a9
- Oct 25 06:15:46 minikube dockerd[2719]: time="2020-10-25T06:15:46.268137344Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:17:08 minikube dockerd[2726]: time="2020-10-25T06:17:08.059569467Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a76
e86f6940d325eba13721cf27732b21d38b027018f685f49daa17cf37db176/shim.sock" debug=f
alse pid=7145
- Oct 25 06:17:14 minikube dockerd[2726]: time="2020-10-25T06:17:14.380077682Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/49a
c1c2769906fbc3b020e1fc93d4b3b3f123f6b0efdad7c2b8862a5af48a731/shim.sock" debug=f
alse pid=7198
- Oct 25 06:17:25 minikube dockerd[2726]: time="2020-10-25T06:17:25.350410289Z"
level=info msg="shim reaped" id=a76e86f6940d325eba13721cf27732b21d38b027018f685f
49daa17cf37db176
- Oct 25 06:17:25 minikube dockerd[2719]: time="2020-10-25T06:17:25.362212905Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:17:31 minikube dockerd[2726]: time="2020-10-25T06:17:31.584644094Z"
level=info msg="shim reaped" id=49ac1c2769906fbc3b020e1fc93d4b3b3f123f6b0efdad7c
2b8862a5af48a731
- Oct 25 06:17:31 minikube dockerd[2719]: time="2020-10-25T06:17:31.596091684Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:20:12 minikube dockerd[2726]: time="2020-10-25T06:20:12.041920456Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85d
07903c6b7f484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2/shim.sock" debug=f
alse pid=7660
- Oct 25 06:20:12 minikube dockerd[2726]: time="2020-10-25T06:20:12.260767117Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/590
17094cef039db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf/shim.sock" debug=f
alse pid=7684
- Oct 25 06:20:28 minikube dockerd[2726]: time="2020-10-25T06:20:28.495390845Z"
level=info msg="shim reaped" id=85d07903c6b7f484cba5be26a994c56f38ad07f08ab99758
dd12db3dc55530d2
- Oct 25 06:20:28 minikube dockerd[2719]: time="2020-10-25T06:20:28.507377594Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:20:36 minikube dockerd[2726]: time="2020-10-25T06:20:36.346696670Z"
level=info msg="shim reaped" id=59017094cef039db98b55fe1ae6b25764ec9d54633357cf4
d8e47aa771e477cf
- Oct 25 06:20:36 minikube dockerd[2719]: time="2020-10-25T06:20:36.357051439Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:25:33 minikube dockerd[2726]: time="2020-10-25T06:25:33.030269288Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec0
29b977f03285e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5/shim.sock" debug=f
alse pid=8257
- Oct 25 06:25:39 minikube dockerd[2726]: time="2020-10-25T06:25:39.442038530Z"
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c81
25a665e2bd3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba/shim.sock" debug=f
alse pid=8306
- Oct 25 06:25:46 minikube dockerd[2726]: time="2020-10-25T06:25:46.196415250Z"
level=info msg="shim reaped" id=ec029b977f03285e6d4b2256243c079e03797a57f639a34c
152a38247aa8c6b5
- Oct 25 06:25:46 minikube dockerd[2719]: time="2020-10-25T06:25:46.219187213Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
- Oct 25 06:25:55 minikube dockerd[2726]: time="2020-10-25T06:25:55.063724220Z"
level=info msg="shim reaped" id=c8125a665e2bd3705eb5a7736ff5804d41ee8b683d8d14c4
5d3941dcb9c2f5ba
- Oct 25 06:25:55 minikube dockerd[2719]: time="2020-10-25T06:25:55.075413861Z"
level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
/delete type="*events.TaskDelete"
*
- ==> container status <==
- CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
- c8125a665e2bd   8603821e1a7a5   28 seconds ago   Exited    kube-controller-manager   8         66a583888c504
- ec029b977f032   607331163122e   34 seconds ago   Exited    kube-apiserver            8         ec8b64f4ed5f1
- d90449cdc5368   0369cf4303ffd   13 minutes ago   Running   etcd                      0         09e773dd2f7dc
- ee99e49f93a5b   2f32d66b884f8   14 minutes ago   Running   kube-scheduler            0         24d45b930473d
*
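So etcd and kube-scheduler stay Running while kube-apiserver and kube-controller-manager have both Exited after 8 attempts. Besides the sections below, the kubelet journal can be followed from inside the VM, assuming kubelet runs as a systemd unit there (the "minikube kubelet[4555]:" lines above suggest it does); a sketch:
$ minikube ssh
$ sudo journalctl -u kubelet --no-pager | tail -n 50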
- ==> describe nodes <==
E1025 11:56:07.366051    4672 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
*
- ==> dmesg <==
- [ +5.006848] hpet1: lost 318 rtc interrupts
- [ +5.005396] hpet1: lost 319 rtc interrupts
- [ +5.009981] hpet1: lost 318 rtc interrupts
- [ +5.009094] hpet1: lost 319 rtc interrupts
- [ +5.005143] hpet1: lost 318 rtc interrupts
- [ +5.013739] hpet1: lost 319 rtc interrupts
- [ +5.003834] hpet1: lost 318 rtc interrupts
- [ +5.008545] hpet1: lost 319 rtc interrupts
- [ +5.012241] hpet1: lost 319 rtc interrupts
- [ +5.005051] hpet1: lost 318 rtc interrupts
- [ +5.007206] hpet1: lost 318 rtc interrupts
- [Oct25 06:22] hpet1: lost 319 rtc interrupts
- [ +5.010479] hpet1: lost 319 rtc interrupts
- [ +5.009737] hpet1: lost 318 rtc interrupts
- [ +5.014631] hpet1: lost 319 rtc interrupts
- [ +5.004600] hpet1: lost 318 rtc interrupts
- [ +5.021970] hpet1: lost 320 rtc interrupts
- [ +5.015169] hpet1: lost 319 rtc interrupts
- [ +5.001754] hpet1: lost 318 rtc interrupts
- [ +5.002750] hpet1: lost 318 rtc interrupts
- [ +5.006306] hpet1: lost 318 rtc interrupts
- [ +4.999647] hpet1: lost 318 rtc interrupts
- [ +5.003289] hpet1: lost 319 rtc interrupts
- [Oct25 06:23] hpet1: lost 318 rtc interrupts
- [ +5.000948] hpet1: lost 318 rtc interrupts
- [ +5.002684] hpet1: lost 318 rtc interrupts
- [ +5.001893] hpet1: lost 318 rtc interrupts
- [ +5.003523] hpet1: lost 318 rtc interrupts
- [ +5.003352] hpet1: lost 319 rtc interrupts
- [ +5.005414] hpet1: lost 319 rtc interrupts
- [ +5.004002] hpet1: lost 318 rtc interrupts
- [ +5.003522] hpet1: lost 318 rtc interrupts
- [ +5.006099] hpet1: lost 319 rtc interrupts
- [ +4.998740] hpet1: lost 318 rtc interrupts
- [ +5.007079] hpet1: lost 318 rtc interrupts
- [Oct25 06:24] hpet1: lost 318 rtc interrupts
- [ +5.000726] hpet1: lost 318 rtc interrupts
- [ +5.001146] hpet1: lost 318 rtc interrupts
- [ +5.004422] hpet1: lost 319 rtc interrupts
- [ +5.000742] hpet1: lost 318 rtc interrupts
- [ +5.009486] hpet1: lost 318 rtc interrupts
- [ +4.997366] hpet1: lost 319 rtc interrupts
- [ +5.003636] hpet1: lost 318 rtc interrupts
- [ +5.002427] hpet1: lost 318 rtc interrupts
- [ +5.003132] hpet1: lost 319 rtc interrupts
- [ +4.999895] hpet1: lost 318 rtc interrupts
- [ +5.001474] hpet1: lost 318 rtc interrupts
- [Oct25 06:25] hpet1: lost 318 rtc interrupts
- [ +4.998311] hpet1: lost 318 rtc interrupts
- [ +5.003937] hpet1: lost 318 rtc interrupts
- [ +5.002298] hpet1: lost 318 rtc interrupts
- [ +5.001572] hpet1: lost 318 rtc interrupts
- [ +5.001586] hpet1: lost 319 rtc interrupts
- [ +5.001776] hpet1: lost 318 rtc interrupts
- [ +5.039605] hpet1: lost 320 rtc interrupts
- [ +4.994218] hpet1: lost 165 rtc interrupts
- [ +5.016257] hpet1: lost 472 rtc interrupts
- [ +5.009939] hpet1: lost 318 rtc interrupts
- [ +5.009795] hpet1: lost 319 rtc interrupts
- [Oct25 06:26] hpet1: lost 319 rtc interrupts
*
- ==> etcd [d90449cdc536] <==
- 2020-10-25 06:16:38.850409 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:16:48.851664 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:16:58.849105 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:09.612398 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:19.285852 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:22.109334 W | etcdserver: read-only range request "key:\"/reg
istry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "r
ange_response_count:2 size:1762" took too long (186.130059ms) to execute
- 2020-10-25 06:17:22.151733 W | etcdserver: read-only range request "key:\"/reg
istry/priorityclasses/system-node-critical\" " with result "range_response_count
:1 size:441" took too long (150.516019ms) to execute
- 2020-10-25 06:17:28.851800 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:38.849380 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:48.853454 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:17:58.852160 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:08.857215 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:18.848655 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:28.850208 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:38.849393 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:48.849654 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:18:58.852085 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:08.847330 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:18.849802 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:28.850522 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:38.849482 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:48.855718 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:19:58.853422 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:08.851258 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:18.884683 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:28.849606 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:38.854284 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:48.849588 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:20:58.850419 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:08.853284 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:18.848270 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:28.850423 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:38.855835 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:48.851098 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:21:58.852336 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:08.852726 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:18.850396 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:28.850769 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:38.854377 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:48.855099 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:22:58.848218 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:08.852174 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:18.851025 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:28.850720 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:38.855102 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:48.851750 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:23:58.847616 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:08.848815 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:18.848489 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:28.851778 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:38.852047 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:48.850050 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:24:58.851330 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:08.856704 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:18.851464 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:28.849598 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:39.029884 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:44.738274 W | etcdserver: read-only range request "key:\"/reg
istry/ranges/serviceips\" " with result "range_response_count:1 size:118" took t
oo long (162.156632ms) to execute
- 2020-10-25 06:25:48.850480 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
- 2020-10-25 06:25:58.848274 I | etcdserver/api/etcdhttp: /health OK (status cod
e 200)
*
- ==> kernel <==
- 06:26:07 up 17 min, 0 users, load average: 0.66, 1.05, 1.07
- Linux minikube 4.19.114 #1 SMP Mon Oct 12 16:32:58 PDT 2020 x86_64 GNU/Linux
- PRETTY_NAME="Buildroot 2020.02.6"
*
- ==> kube-apiserver [ec029b977f03] <==
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:279 +0xbd
- created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextF
orChannel
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:278 +0x8c
*
- goroutine 1891 [select]:
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0
11003800, 0xdf8475800, 0x0, 0xc011003740)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:588 +0x17b
- created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.f
unc1
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:571 +0x8c
*
- goroutine 2101 [chan receive]:
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run
.func1()
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:772 +0x5d
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
0xc00be32760)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00c
05df60, 0x503b6e0, 0xc0056cec00, 0x3ee5901, 0xc001c2fc20)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00be
32760, 0x3b9aca00, 0x0, 0x1, 0xc001c2fc20)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run
(0xc008a1c680)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:771 +0x95
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func
1(0xc0087956b0, 0xc00bcf6b00)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:73 +0x51
- created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group)
.Start
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:71 +0x65
*
- goroutine 1893 [chan receive]:
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0
xc008795650, 0xc011003860)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:628 +0x53
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithC
hannel.func1()
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:56 +0x2e
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func
1(0xc010f93c90, 0xc00bc7a8c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:73 +0x51
- created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group)
.Start
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:71 +0x65
*
- goroutine 1894 [chan receive]:
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(
0xc00aa2d9e0, 0xc008e046c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/co
ntroller.go:127 +0x34
- created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller)
.Run
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/co
ntroller.go:126 +0xa5
*
- goroutine 1895 [select]:
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).watchHandle
r(0xc001396f70, 0xbfdd647a30fccce1, 0x2ac764ca7, 0x71fb2a0, 0x504d020, 0xc00c87d
780, 0xc00c1efb88, 0xc00737df20, 0xc00aa2d9e0, 0x0, ...)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:451 +0x1a5
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatc
h(0xc001396f70, 0xc00aa2d9e0, 0x0, 0x0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:415 +0x657
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:209 +0x38
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
0xc0025856e0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00c
1efee0, 0x503b6c0, 0xc0021a3b80, 0x1, 0xc00aa2d9e0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad
- k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0013
96f70, 0xc00aa2d9e0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:208 +0x196
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithC
hannel.func1()
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:56 +0x2e
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/u
*
- ==> kube-controller-manager [c8125a665e2b] <==
- internal/poll.(*pollDesc).waitRead(...)
- /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
- internal/poll.(*FD).Accept(0xc00117e980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
- /usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
- net.(*netFD).accept(0xc00117e980, 0x203000, 0x203000, 0x45addb8)
- /usr/local/go/src/net/fd_unix.go:172 +0x45
- net.(*TCPListener).accept(0xc000561320, 0xc000312280, 0x50, 0x50)
- /usr/local/go/src/net/tcpsock_posix.go:139 +0x32
- net.(*TCPListener).Accept(0xc000561320, 0x30, 0x4067d20, 0x7f03fd5757d0, 0xc00
006a400)
- /usr/local/go/src/net/tcpsock.go:261 +0x65
- k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.tcpKeepAliveListener.Acce
pt(0x4a5c2c0, 0xc000561320, 0x7f03fd5757d0, 0x0, 0x50, 0x3f484a0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:261 +0x35
- crypto/tls.(*listener).Accept(0xc000322260, 0x4067d20, 0xc0003c0360, 0x3b5f660
, 0x6a20c50)
- /usr/local/go/src/crypto/tls/tls.go:67 +0x37
- net/http.(*Server).Serve(0xc00015efc0, 0x4a45a40, 0xc000322260, 0x0, 0x0)
- /usr/local/go/src/net/http/server.go:2937 +0x266
- k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2(0x4a5c2c0
, 0xc000561320, 0xc00015efc0, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:236 +0xe9
- created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:227 +0xc8
*
- goroutine 131 [sync.Cond.Wait]:
- runtime.goparkunlock(...)
- /usr/local/go/src/runtime/proc.go:312
- sync.runtime_notifyListWait(0xc0005cd910, 0xc000000000)
- /usr/local/go/src/runtime/sema.go:513 +0xf8
- sync.(*Cond).Wait(0xc0005cd900)
- /usr/local/go/src/sync/cond.go:56 +0x9d
- k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc000bae
2a0, 0x0, 0x0, 0x390db00)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue
/queue.go:145 +0x89
- k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*Dyn
amicServingCertificateController).processNextWorkItem(0xc00117f100, 0x203000)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:263 +0x66
- k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*Dyn
amicServingCertificateController).runWorker(0xc00117f100)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:258 +0x2b
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
0xc0002d4260)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000
2d4260, 0x49f9cc0, 0xc0003c00c0, 0x45ac601, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002
d4260, 0x3b9aca00, 0x0, 0x1, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002d4260,
0x3b9aca00, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90 +0x4d
- created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertifi
cates.(*DynamicServingCertificateController).Run
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:247 +0x1b3
*
- goroutine 132 [select]:
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000
2d42b0, 0x49f9cc0, 0xc0003c0090, 0x45ac601, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:167 +0x149
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002
d42b0, 0xdf8475800, 0x0, 0x1, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98
- k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002d42b0,
0xdf8475800, 0xc0000920c0)
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90 +0x4d
- created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertifi
cates.(*DynamicServingCertificateController).Run
- /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:250 +0x22b
*
- goroutine 144 [runnable]:
- net/http.setRequestCancel.func4(0x0, 0xc0009b2120, 0xc000ed6640, 0xc000854558,
0xc000f909c0)
- /usr/local/go/src/net/http/client.go:398 +0xe5
- created by net/http.setRequestCancel
- /usr/local/go/src/net/http/client.go:397 +0x337
*
- ==> kube-scheduler [ee99e49f93a5] <==
- E1025 06:22:40.299605 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
"https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:22:49.362394 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
connect: connection refused
- E1025 06:22:50.085956 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:22:59.211131 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
:8443: connect: connection refused
- E1025 06:22:59.334235 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
&resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:01.170560 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
: failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
connection refused
- E1025 06:23:08.613971 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
nnection refused
- E1025 06:23:10.704068 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
n=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:13.094051 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
al tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:14.560488 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
.168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
: connect: connection refused
- E1025 06:23:16.297333 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
.103:8443: connect: connection refused
- E1025 06:23:18.655411       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:22.144820       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:28.907599       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
"https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:38.691911 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
connect: connection refused
- E1025 06:23:43.833012 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
: failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
connection refused
- E1025 06:23:47.638908 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
:8443: connect: connection refused
- E1025 06:23:48.441555 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
nnection refused
- E1025 06:23:49.303192 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:49.938954 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
n=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:52.586260 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
.168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
: connect: connection refused
- E1025 06:23:52.642910 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
&resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:53.956393 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
al tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:23:55.662306 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
.103:8443: connect: connection refused
- E1025 06:24:02.117231       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:16.592248       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:20.118425       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
connect: connection refused
- E1025 06:24:25.697316 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
"https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:26.866876 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
: failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
connection refused
- E1025 06:24:27.989228 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:28.257978 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
.168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
: connect: connection refused
- E1025 06:24:34.135488       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:34.934225       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
.103:8443: connect: connection refused
- E1025 06:24:38.423071 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
al tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:42.824505 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
nnection refused
- E1025 06:24:44.982234 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
&resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:45.226080 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
n=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:45.755696 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
:8443: connect: connection refused
- E1025 06:24:51.975378       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:24:55.474312       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
connect: connection refused
- E1025 06:24:59.943431 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
"https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:08.776448 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:11.829217       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:12.758816       1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
: failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
connection refused
- E1025 06:25:15.261733 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
al tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:17.404529 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
.103:8443: connect: connection refused
- E1025 06:25:19.274860 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
.168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
: connect: connection refused
- E1025 06:25:23.345698 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
:8443: connect: connection refused
- E1025 06:25:25.760842 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
n=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:26.979709 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
&resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:30.030411 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
"https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:44.713728 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
a1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:k
ube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy"
at the cluster scope
- E1025 06:25:46.216233 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:25:46.462074 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
connect: connection refused
- E1025 06:25:47.254601       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:26:02.757835       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
al tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:26:03.561440       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:26:03.902390       1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
&resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused
- E1025 06:26:04.818910 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
: failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
connection refused
- E1025 06:26:08.338986 1 reflector.go:127] k8s.io/client-go/informers/fac
tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
.103:8443: connect: connection refused
*
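Every kube-scheduler error above is the same symptom, "dial tcp 192.168.99.103:8443: connect: connection refused", and the kube-apiserver section only captured the tail of a goroutine dump, so the reason the apiserver keeps exiting is not visible in this paste. One way to get at it, assuming the Docker runtime these logs show, is to read the crashed apiserver container's log from inside the node; ec029b977f03 is the container ID from the section header above and will differ on another run:
$ minikube ssh
$ docker ps -a | grep kube-apiserver    # find the most recently exited apiserver container
$ docker logs --tail 100 ec029b977f03   # substitute the ID reported by docker ps -a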
- ==> kubelet <==
- -- Logs begin at Sun 2020-10-25 06:09:38 UTC, end at Sun 2020-10-25 06:26:08 U
TC. --
- Oct 25 06:25:30 minikube kubelet[4555]: E1025 06:25:30.433962 4555 reflecto
r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ru
ntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.min
ikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=309"
: dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:32 minikube kubelet[4555]: I1025 06:25:32.724568 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: 85d07903c6b7f
484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2
- Oct 25 06:25:33 minikube kubelet[4555]: E1025 06:25:33.202349 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "htt
ps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces
/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: con
nect: connection refused
- Oct 25 06:25:33 minikube kubelet[4555]: E1025 06:25:33.457264 4555 event.go
:273] Unable to write event: 'Patch "https://control-plane.minikube.internal:844
3/api/v1/namespaces/kube-system/events/kube-controller-manager-minikube.16412787
4bb183aa": dial tcp 192.168.99.103:8443: connect: connection refused' (may retry
after sleeping)
- Oct 25 06:25:38 minikube kubelet[4555]: I1025 06:25:38.727143 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: 59017094cef03
9db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf
- Oct 25 06:25:44 minikube kubelet[4555]: W1025 06:25:44.178623 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": net/http: TL
S handshake timeout
- Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.037570 4555 reflecto
r.go:424] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: watch of *v1.Node ended
with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Unexpected
watch close - watch lasted less than a second and no items received
- Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.726623 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.729178 4555 status_m
anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
ube": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.729452 4555 status_m
anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:47 minikube kubelet[4555]: I1025 06:25:47.391298 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: 85d07903c6b7f
484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2
- Oct 25 06:25:47 minikube kubelet[4555]: I1025 06:25:47.392919 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5
- Oct 25 06:25:47 minikube kubelet[4555]: E1025 06:25:47.394527 4555 pod_work
ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
f52506bd93c04ce27fa412a22c055)"
- Oct 25 06:25:47 minikube kubelet[4555]: W1025 06:25:47.399875 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:50 minikube kubelet[4555]: E1025 06:25:50.719404 4555 reflecto
r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service
: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/
api/v1/services?resourceVersion=215": dial tcp 192.168.99.103:8443: connect: con
nection refused
- Oct 25 06:25:51 minikube kubelet[4555]: I1025 06:25:51.698651 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5
- Oct 25 06:25:51 minikube kubelet[4555]: W1025 06:25:51.701468 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:51 minikube kubelet[4555]: E1025 06:25:51.703557 4555 pod_work
ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
f52506bd93c04ce27fa412a22c055)"
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.521398 4555 reflecto
r.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch
*v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:84
43/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=287": dial
tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.939160 4555 controll
er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.945234 4555 controll
er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.947527 4555 controll
er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.950514 4555 controll
er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.951087 4555 controll
er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:54 minikube kubelet[4555]: I1025 06:25:54.951240 4555 controll
er.go:106] failed to update lease using latest lease, fallback to ensure lease,
err: failed 5 attempts to update node lease
- Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.952183 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "
https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.164520 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "
https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.494321 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?resourceVersion=0&timeout=10s": dial tcp 192.168.99.103:8443: connect: connec
tion refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.496514 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.497428 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498167 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498668 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498993 4555 kubelet_
node_status.go:429] Unable to update node status: update node status exceeds ret
ry count
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.567008 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get "
https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: I1025 06:25:55.609417 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: 59017094cef03
9db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf
- Oct 25 06:25:55 minikube kubelet[4555]: W1025 06:25:55.614550 4555 status_m
anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
ube": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:55 minikube kubelet[4555]: I1025 06:25:55.621599 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: c8125a665e2bd
3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba
- Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.634199 4555 pod_work
ers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller
-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: fai
led to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "ba
ck-off 5m0s restarting failed container=kube-controller-manager pod=kube-control
ler-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:25:56 minikube kubelet[4555]: E1025 06:25:56.370632 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 1.6s, error: Get "h
ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
onnect: connection refused
- Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.721888 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.723041 4555 status_m
anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
ube": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.723386 4555 status_m
anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:25:57 minikube kubelet[4555]: E1025 06:25:57.977207 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 3.2s, error: Get "h
ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
onnect: connection refused
- Oct 25 06:26:01 minikube kubelet[4555]: E1025 06:26:01.185035 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 6.4s, error: Get "h
ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
onnect: connection refused
- Oct 25 06:26:03 minikube kubelet[4555]: I1025 06:26:03.631421 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: c8125a665e2bd
3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba
- Oct 25 06:26:03 minikube kubelet[4555]: W1025 06:26:03.631405 4555 status_m
anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
ube": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:03 minikube kubelet[4555]: E1025 06:26:03.636478 4555 pod_work
ers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller
-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: fai
led to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "ba
ck-off 5m0s restarting failed container=kube-controller-manager pod=kube-control
ler-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
- Oct 25 06:26:03 minikube kubelet[4555]: I1025 06:26:03.722869 4555 topology
_manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5
- Oct 25 06:26:03 minikube kubelet[4555]: E1025 06:26:03.725899 4555 pod_work
ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
f52506bd93c04ce27fa412a22c055)"
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.502419 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?resourceVersion=0&timeout=10s": dial tcp 192.168.99.103:8443: connect: connec
tion refused
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.519703 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.530415 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.531654 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.535727 4555 kubelet_
node_status.go:442] Error updating node status, will retry: error getting node "
minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.546034 4555 kubelet_
node_status.go:429] Unable to update node status: update node status exceeds ret
ry count
- Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.735740 4555 reflecto
r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ru
ntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.min
ikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=309"
: dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.722659 4555 status_m
anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.730870 4555 status_m
anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
(b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
.168.99.103:8443: connect: connection refused
- Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.753352 4555 status_m
anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
ube": dial tcp 192.168.99.103:8443: connect: connection refused
- Oct 25 06:26:07 minikube kubelet[4555]: E1025 06:26:07.593068 4555 controll
er.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "htt
ps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces
/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: con
nect: connection refused
! unable to fetch logs for: describe nodes
prabs@LAPTOP-HQ5LK73I ~
$
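To summarize the dump: etcd answers its /health checks with 200 throughout, but the kubelet reports both kube-apiserver and kube-controller-manager stuck in CrashLoopBackOff ("back-off 5m0s restarting failed container"), which is why every other component fails with "connection refused" against 192.168.99.103:8443. When reporting this, it helps to attach the complete output rather than a truncated paste; a minimal way to capture it, using only the commands already shown plus standard shell redirection:
$ minikube status
$ minikube logs > minikube-logs.txt     # full log bundle, suitable for attaching to the issue
$ kubectl get pods -n kube-system       # will only answer once the apiserver is reachable again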
Most helpful comment
minikube is constantly crashing. Please give some solutions.
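A reset that is often suggested for crash-looping control-plane containers like this, sketched under the assumption that recreating the local cluster is acceptable and that the VirtualBox driver implied by the 192.168.99.x address is in use (the flags below are illustrative defaults, not a confirmed fix for this report):
$ minikube delete
$ minikube start --driver=virtualbox --memory=4096 --cpus=2
If the apiserver still crash-loops after a clean start, attaching the full minikube logs output captured above gives maintainers the part of the apiserver log that is missing from this paste.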