Minikube: gcp-auth: Enabling storage-provisioner: failed calling webhook "gcp-auth-mutate.k8s.io"

Created on 9 Sep 2020  ·  6 comments  ·  Source: kubernetes/minikube


Steps to reproduce the issue:

Run "/Users/michihara/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/minikube" start --profile cloud-run-dev-internal --keep-context true --wait false --vm-driver docker --interactive false --delete-on-failure --addons gcp-auth

This results in a failure. I started seeing it after adding the --addons gcp-auth flag to the command, but even when I re-run the command without --addons gcp-auth, I still hit the same failure.
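For what it's worth, these are the commands I've been using to poke at the webhook while debugging (the `gcp-auth` namespace and `gcp-auth-webhook-cfg` names come from the error output below; this is just a diagnostic sketch, not a fix):

```
# Check whether the gcp-auth webhook pod ever became ready
kubectl --context=cloud-run-dev-internal get pods -n gcp-auth

# Inspect the webhook configuration the apiserver is failing to call
kubectl --context=cloud-run-dev-internal get mutatingwebhookconfiguration gcp-auth-webhook-cfg -o yaml

# Workaround: disable the addon so the apiserver stops calling the unreachable webhook
minikube addons disable gcp-auth --profile cloud-run-dev-internal
```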

Full output of minikube start command used, if not already included:

$ "/Users/michihara/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/minikube" start --profile cloud-run-dev-internal --keep-context true --wait false --vm-driver docker --interactive false --delete-on-failure --addons gcp-auth

😄  [cloud-run-dev-internal] minikube v1.12.3 on Darwin 10.15.6
✨  Using the docker driver based on existing profile
❗  Requested memory allocation (1991MB) is less than the recommended minimum 2000MB. Kubernetes may crash unexpectedly.
❗  Your system has 16384MB memory but Docker has only 1991MB. For a better performance increase to at least 3GB.

    Docker for Desktop  > Settings > Resources > Memory


👍  Starting control plane node cloud-run-dev-internal in cluster cloud-run-dev-internal
🏃  Updating the running docker "cloud-run-dev-internal" container ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
❗  Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
endpoints/k8s.io-minikube-hostpath unchanged

stderr:
Error from server (InternalError): error when applying patch:
{"spec":{"$setElementOrder/containers":[{"name":"storage-provisioner"}],"$setElementOrder/volumes":[{"name":"tmp"}],"containers":[{"$setElementOrder/volumeMounts":[{"mountPath":"/tmp"}],"name":"storage-provisioner"}]}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
for: "/etc/kubernetes/addons/storage-provisioner.yaml": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
]
📌  Your GCP credentials will now be mounted into every pod created in the cloud-run-dev-internal cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
❗  Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟  Enabled addons: default-storageclass, gcp-auth, storage-provisioner
💗  To connect to this cluster, use: kubectl --context=cloud-run-dev-internal
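Side note on the `gcp-auth-skip-secret` hint in the output above: as I understand it from the addon docs, only the presence of the label key matters, the value is arbitrary. A minimal opt-out pod would look something like this (the pod name and image here are hypothetical, just for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-no-creds          # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"  # label key opts this pod out of credential mounting
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```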

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Wed 2020-09-09 17:51:55 UTC, end at Wed 2020-09-09 18:13:09 UTC. --
Sep 09 17:51:55 cloud-run-dev-internal systemd[1]: Starting Docker Application Container Engine...
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.772879021Z" level=info msg="Starting up"
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.777206374Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.777250557Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.777272541Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.777282051Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.779406514Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.779525337Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.779577190Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.779631298Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.801285500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Sep 09 17:51:55 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:55.884613183Z" level=info msg="Loading containers: start."
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.043574047Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.093468213Z" level=info msg="Loading containers: done."
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.117885049Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.119705120Z" level=info msg="Daemon has completed initialization"
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.200297939Z" level=info msg="API listen on /var/run/docker.sock"
Sep 09 17:51:56 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:51:56.200331568Z" level=info msg="API listen on [::]:2376"
Sep 09 17:51:56 cloud-run-dev-internal systemd[1]: Started Docker Application Container Engine.
Sep 09 17:52:10 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:52:10.975115830Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 17:55:16 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:55:16.156738764Z" level=info msg="Layer sha256:c06924dab160bd7732dfb22a7942ae0dfd9cace76002f4ed6032920f8bc6d30c cleaned up"
Sep 09 17:55:44 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:55:44.895530431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 17:55:46 cloud-run-dev-internal dockerd[150]: time="2020-09-09T17:55:46.474441376Z" level=info msg="Layer sha256:c78a8a0bd0ddadc6a956f556740ed58db1b767bf476d38bc2e20e97f1a24c7ba cleaned up"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
615ce34eb29cb       9c3ca9f065bb1       20 minutes ago      Running             storage-provisioner       2                   d88d4a6c6bd7a
1205dbadadd3e       67da37a9a360e       20 minutes ago      Running             coredns                   1                   60c22db70857a
f51984c1c831d       3439b7546f29b       21 minutes ago      Running             kube-proxy                1                   d506e30c6e1a2
5e7b6ab30497c       9c3ca9f065bb1       21 minutes ago      Exited              storage-provisioner       1                   d88d4a6c6bd7a
2317bced5eb31       303ce5db0e90d       21 minutes ago      Running             etcd                      0                   9c48b825bcc04
ae8f363d05d6f       76216c34ed0c7       21 minutes ago      Running             kube-scheduler            1                   9f69596749194
38dd6c56ff8fe       da26705ccb4b5       21 minutes ago      Running             kube-controller-manager   1                   bfe04d2a39a65
c81ce0a29a6e4       7e28efa976bd1       21 minutes ago      Running             kube-apiserver            0                   c2c8711c0b346
f613f49803dc1       67da37a9a360e       57 minutes ago      Exited              coredns                   0                   19061c7908c2e
b1dfee41b79b7       3439b7546f29b       57 minutes ago      Exited              kube-proxy                0                   d0b691508b403
f291ea4801070       da26705ccb4b5       57 minutes ago      Exited              kube-controller-manager   0                   cf48984d07c65
5036fe59be26e       76216c34ed0c7       57 minutes ago      Exited              kube-scheduler            0                   4d44283e30c55

==> coredns [1205dbadadd3] <==
E0909 17:52:11.562523       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E0909 17:52:11.567006       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E0909 17:52:11.567136       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [f613f49803dc] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> describe nodes <==
Name:               cloud-run-dev-internal
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=cloud-run-dev-internal
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=2243b4b97c131e3244c5f014faedca0d846599f5-dirty
                    minikube.k8s.io/name=cloud-run-dev-internal
                    minikube.k8s.io/updated_at=2020_09_09T10_15_43_0700
                    minikube.k8s.io/version=v1.12.3
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 09 Sep 2020 17:15:39 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  cloud-run-dev-internal
  AcquireTime:     <unset>
  RenewTime:       Wed, 09 Sep 2020 18:13:01 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 09 Sep 2020 18:11:23 +0000   Wed, 09 Sep 2020 18:01:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 09 Sep 2020 18:11:23 +0000   Wed, 09 Sep 2020 18:01:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 09 Sep 2020 18:11:23 +0000   Wed, 09 Sep 2020 18:01:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 09 Sep 2020 18:11:23 +0000   Wed, 09 Sep 2020 18:01:21 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    cloud-run-dev-internal
Capacity:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2038904Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2038904Ki
  pods:               110
System Info:
  Machine ID:                 142d1849d89243bcb6e89a1ac9adb450
  System UUID:                4da96e3b-f781-4852-a1f7-d4a3c0e8f8ef
  Boot ID:                    e60a93a6-6d4b-4096-b6c7-ea1adf1f03aa
  Kernel Version:             4.19.76-linuxkit
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.18.3
  Kube-Proxy Version:         v1.18.3
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bff467f8-747k6                          100m (2%)     0 (0%)      70Mi (3%)        170Mi (8%)     57m
  kube-system                 etcd-cloud-run-dev-internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  kube-system                 kube-apiserver-cloud-run-dev-internal             250m (6%)     0 (0%)      0 (0%)           0 (0%)         21m
  kube-system                 kube-controller-manager-cloud-run-dev-internal    200m (5%)     0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                 kube-proxy-q2swm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                 kube-scheduler-cloud-run-dev-internal             100m (2%)     0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  0 (0%)
  memory             70Mi (3%)   170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From                                Message
  ----     ------                   ----               ----                                -------
  Normal   Starting                 57m                kubelet, cloud-run-dev-internal     Starting kubelet.
  Normal   NodeHasSufficientMemory  57m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    57m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     57m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientPID
  Normal   NodeNotReady             57m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  57m                kubelet, cloud-run-dev-internal     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            57m                kube-proxy, cloud-run-dev-internal  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 57m                kube-proxy, cloud-run-dev-internal  Starting kube-proxy.
  Normal   NodeReady                57m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeReady
  Normal   Starting                 24m                kubelet, cloud-run-dev-internal     Starting kubelet.
  Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientPID
  Normal   NodeNotReady             24m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  24m                kubelet, cloud-run-dev-internal     Updated Node Allocatable limit across pods
  Normal   NodeReady                24m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeReady
  Normal   Starting                 21m                kubelet, cloud-run-dev-internal     Starting kubelet.
  Normal   NodeAllocatableEnforced  21m                kubelet, cloud-run-dev-internal     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientPID
  Warning  readOnlySysFS            20m                kube-proxy, cloud-run-dev-internal  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 20m                kube-proxy, cloud-run-dev-internal  Starting kube-proxy.
  Normal   Starting                 12m                kubelet, cloud-run-dev-internal     Starting kubelet.
  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeHasSufficientPID
  Normal   NodeNotReady             11m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  11m                kubelet, cloud-run-dev-internal     Updated Node Allocatable limit across pods
  Normal   NodeReady                11m                kubelet, cloud-run-dev-internal     Node cloud-run-dev-internal status is now: NodeReady

==> dmesg <==
[Sep 9 16:27] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[  +0.000773] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[  +0.001594] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A
[  +0.000779] virtio-pci 0000:00:02.0: PCI INT A: no GSI
[  +0.002486] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[  +0.000940] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[  +0.050474] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.602174] i8042: Can't read CTR while initializing i8042
[  +0.000687] i8042: probe of i8042 failed with error -5
[  +0.009008] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000902] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.156977] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.020397] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +3.056695] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.076983] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Sep 9 17:44] hrtimer: interrupt took 6045936 ns

==> etcd [2317bced5eb3] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-09-09 17:52:04.184399 I | etcdmain: etcd Version: 3.4.3
2020-09-09 17:52:04.184590 I | etcdmain: Git SHA: 3cf2f69b5
2020-09-09 17:52:04.184593 I | etcdmain: Go Version: go1.12.12
2020-09-09 17:52:04.184595 I | etcdmain: Go OS/Arch: linux/amd64
2020-09-09 17:52:04.184601 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-09-09 17:52:04.184645 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-09-09 17:52:04.184701 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-09-09 17:52:04.225996 I | embed: name = cloud-run-dev-internal
2020-09-09 17:52:04.226013 I | embed: data dir = /var/lib/minikube/etcd
2020-09-09 17:52:04.226018 I | embed: member dir = /var/lib/minikube/etcd/member
2020-09-09 17:52:04.226020 I | embed: heartbeat = 100ms
2020-09-09 17:52:04.226030 I | embed: election = 1000ms
2020-09-09 17:52:04.226033 I | embed: snapshot count = 10000
2020-09-09 17:52:04.226138 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-09-09 17:52:04.226148 I | embed: initial advertise peer URLs = https://172.17.0.2:2380
2020-09-09 17:52:04.226152 I | embed: initial cluster = 
2020-09-09 17:52:04.251624 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 741
raft2020/09/09 17:52:04 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/09/09 17:52:04 INFO: b273bc7741bcb020 became follower at term 2
raft2020/09/09 17:52:04 INFO: newRaft b273bc7741bcb020 [peers: [], term: 2, commit: 741, applied: 0, lastindex: 741, lastterm: 2]
2020-09-09 17:52:04.268058 W | auth: simple token is not cryptographically signed
2020-09-09 17:52:04.269630 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-09-09 17:52:04.271715 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-09-09 17:52:04.271957 I | embed: listening for metrics on http://127.0.0.1:2381
2020-09-09 17:52:04.273047 I | embed: listening for peers on 172.17.0.2:2380
raft2020/09/09 17:52:04 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-09-09 17:52:04.276369 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-09-09 17:52:04.276588 N | etcdserver/membership: set the initial cluster version to 3.4
2020-09-09 17:52:04.276821 I | etcdserver/api: enabled capabilities for version 3.4
raft2020/09/09 17:52:05 INFO: b273bc7741bcb020 is starting a new election at term 2
raft2020/09/09 17:52:05 INFO: b273bc7741bcb020 became candidate at term 3
raft2020/09/09 17:52:05 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 3
raft2020/09/09 17:52:05 INFO: b273bc7741bcb020 became leader at term 3
raft2020/09/09 17:52:05 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 3
2020-09-09 17:52:05.255202 I | etcdserver: published {Name:cloud-run-dev-internal ClientURLs:[https://172.17.0.2:2379]} to cluster 86482fea2286a1d2
2020-09-09 17:52:05.255371 I | embed: ready to serve client requests
2020-09-09 17:52:05.255584 I | embed: ready to serve client requests
2020-09-09 17:52:05.256857 I | embed: serving client requests on 127.0.0.1:2379
2020-09-09 17:52:05.257110 I | embed: serving client requests on 172.17.0.2:2379
2020-09-09 18:02:05.305464 I | mvcc: store.index: compact 1033
2020-09-09 18:02:05.367382 I | mvcc: finished scheduled compaction at 1033 (took 59.59735ms)
2020-09-09 18:07:05.319739 I | mvcc: store.index: compact 1152
2020-09-09 18:07:05.320829 I | mvcc: finished scheduled compaction at 1152 (took 719.647µs)
2020-09-09 18:12:05.337078 I | mvcc: store.index: compact 1367
2020-09-09 18:12:05.340275 I | mvcc: finished scheduled compaction at 1367 (took 2.246291ms)

==> kernel <==
 18:13:14 up  1:45,  0 users,  load average: 2.20, 1.00, 0.75
Linux cloud-run-dev-internal 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [c81ce0a29a6e] <==
W0909 18:08:29.463989       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:29.464063       1 trace.go:116] Trace[2142357700]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-09-09 18:08:27.398329322 +0000 UTC m=+983.574628828) (total time: 2.065723438s):
Trace[2142357700]: [2.065723438s] [2.065665857s] END
I0909 18:08:29.464258       1 trace.go:116] Trace[1828453244]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-09 18:08:27.398206355 +0000 UTC m=+983.574505860) (total time: 2.066035668s):
Trace[1828453244]: [1.045092648s] [1.044146961s] About to apply patch
Trace[1828453244]: [2.066035668s] [1.019265069s] END
I0909 18:08:45.272825       1 trace.go:116] Trace[668199649]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:2a458682-0330-4847-b5cd-4411cfb4aa3e (started: 2020-09-09 18:08:44.263190246 +0000 UTC m=+1000.439489750) (total time: 1.009439118s):
Trace[668199649]: [1.009439118s] [1.009439118s] END
W0909 18:08:45.272939       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:46.296969       1 trace.go:116] Trace[676419189]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:079ad47b-a22f-4792-946e-9257013a523e (started: 2020-09-09 18:08:45.278048219 +0000 UTC m=+1001.454347764) (total time: 1.018892372s):
Trace[676419189]: [1.018892372s] [1.018892372s] END
W0909 18:08:46.297102       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:46.297325       1 trace.go:116] Trace[495050102]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-09-09 18:08:44.262214982 +0000 UTC m=+1000.438514488) (total time: 2.035059782s):
Trace[495050102]: [2.035059782s] [2.03500058s] END
I0909 18:08:46.297715       1 trace.go:116] Trace[1177668966]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-09 18:08:44.262118113 +0000 UTC m=+1000.438417617) (total time: 2.035500134s):
Trace[1177668966]: [1.014285231s] [1.013276836s] About to apply patch
Trace[1177668966]: [2.035500134s] [1.019710989s] END
I0909 18:08:52.440421       1 trace.go:116] Trace[361183072]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:d24fa6a2-367a-46be-b0ca-d957dd45ad14 (started: 2020-09-09 18:08:51.429821973 +0000 UTC m=+1007.606121491) (total time: 1.010522686s):
Trace[361183072]: [1.010522686s] [1.010522686s] END
W0909 18:08:52.440551       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:52.441113       1 trace.go:116] Trace[1475368784]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:job-controller,client:172.17.0.2 (started: 2020-09-09 18:08:51.424612843 +0000 UTC m=+1007.600912377) (total time: 1.016447228s):
Trace[1475368784]: [1.016447228s] [1.014073417s] END
I0909 18:08:52.505182       1 trace.go:116] Trace[13783482]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:d21a9f5f-9669-4d38-b946-cac1598f213f (started: 2020-09-09 18:08:51.488158843 +0000 UTC m=+1007.664458439) (total time: 1.016945384s):
Trace[13783482]: [1.016945384s] [1.016945384s] END
W0909 18:08:52.505794       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:52.506834       1 trace.go:116] Trace[1924212045]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:job-controller,client:172.17.0.2 (started: 2020-09-09 18:08:51.486679617 +0000 UTC m=+1007.662979155) (total time: 1.020123071s):
Trace[1924212045]: [1.020123071s] [1.019812278s] END
I0909 18:09:15.864901       1 trace.go:116] Trace[449425093]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:3eeff7cd-4dd6-4dda-bf68-c79a6197cb5f (started: 2020-09-09 18:09:14.857592606 +0000 UTC m=+1031.033265282) (total time: 1.007240356s):
Trace[449425093]: [1.007240356s] [1.007240356s] END
W0909 18:09:15.865588       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:09:16.888939       1 trace.go:116] Trace[1462484422]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:eb2d4c43-01b2-4e65-ab8f-b6ff0e43a176 (started: 2020-09-09 18:09:15.869370271 +0000 UTC m=+1032.045042957) (total time: 1.019443421s):
Trace[1462484422]: [1.019443421s] [1.019443421s] END
W0909 18:09:16.889620       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:09:16.889951       1 trace.go:116] Trace[451289055]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-09-09 18:09:14.856623607 +0000 UTC m=+1031.032296285) (total time: 2.03330375s):
Trace[451289055]: [2.03330375s] [2.033237649s] END
I0909 18:09:16.890713       1 trace.go:116] Trace[977801523]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-09 18:09:14.85651676 +0000 UTC m=+1031.032189437) (total time: 2.03416869s):
Trace[977801523]: [1.011493391s] [1.010516538s] About to apply patch
Trace[977801523]: [2.03416869s] [1.021438912s] END
I0909 18:09:41.273214       1 trace.go:116] Trace[1478241924]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:a87b8719-ad4f-4c6a-a332-7057894b60aa (started: 2020-09-09 18:09:40.233781333 +0000 UTC m=+1056.408839661) (total time: 1.03939917s):
Trace[1478241924]: [1.03939917s] [1.03939917s] END
W0909 18:09:41.273279       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:09:42.297651       1 trace.go:116] Trace[378230803]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:afede3e4-f7ee-471d-9e75-0a9552cfc2ed (started: 2020-09-09 18:09:41.275341649 +0000 UTC m=+1057.450399979) (total time: 1.022248432s):
Trace[378230803]: [1.022248432s] [1.022248432s] END
W0909 18:09:42.297800       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:09:42.298210       1 trace.go:116] Trace[171299863]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-09-09 18:09:40.232610993 +0000 UTC m=+1056.407669355) (total time: 2.065577426s):
Trace[171299863]: [2.065577426s] [2.065416269s] END
I0909 18:09:42.298688       1 trace.go:116] Trace[525198367]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-09 18:09:40.232274162 +0000 UTC m=+1056.407332506) (total time: 2.066388043s):
Trace[525198367]: [1.042059487s] [1.040631707s] About to apply patch
Trace[525198367]: [2.066388043s] [1.023390402s] END
I0909 18:10:26.330685       1 trace.go:116] Trace[1247413657]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:04cfac2a-6491-4fac-9d98-6531d40a85f5 (started: 2020-09-09 18:10:25.297306283 +0000 UTC m=+1101.471034122) (total time: 1.033331248s):
Trace[1247413657]: [1.033331248s] [1.033331248s] END
W0909 18:10:26.330895       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:10:27.355139       1 trace.go:116] Trace[765956509]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:ce2e80b5-3494-47e1-b0e0-d164c544b464 (started: 2020-09-09 18:10:26.333673245 +0000 UTC m=+1102.507401094) (total time: 1.021375978s):
Trace[765956509]: [1.021375978s] [1.021375978s] END
W0909 18:10:27.355490       1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:10:27.355720       1 trace.go:116] Trace[257801549]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-09-09 18:10:25.296268632 +0000 UTC m=+1101.469996507) (total time: 2.059426209s):
Trace[257801549]: [2.059426209s] [2.059346226s] END
I0909 18:10:27.356446       1 trace.go:116] Trace[1193929116]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-09 18:10:25.294797711 +0000 UTC m=+1101.468525553) (total time: 2.06161345s):
Trace[1193929116]: [1.037773719s] [1.035366906s] About to apply patch
Trace[1193929116]: [2.06161345s] [1.022815953s] END

==> kube-controller-manager [38dd6c56ff8f] <==
E0909 17:54:49.402356       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:54:49.402732       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-create", UID:"c2eb0699-13fc-40e1-a40e-9407d6446727", APIVersion:"batch/v1", ResourceVersion:"708", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:09.560089       1 replica_set.go:535] sync "gcp-auth/gcp-auth-6c87ddc68d" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:09.560389       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"gcp-auth", Name:"gcp-auth-6c87ddc68d", UID:"ce89008a-7c49-4c8d-a553-92954ea42db6", APIVersion:"apps/v1", ResourceVersion:"760", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:48.015069       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"gh-app", UID:"dcc7b856-113a-4580-ae22-f1231da85c49", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set gh-app-769cc5b7bb to 1
I0909 17:55:49.052518       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"956", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:49.057449       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:50.073052       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:50.073123       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:51.096908       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:51.097066       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:52.121572       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:52.121686       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:53.145141       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:53.145340       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:54.171577       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:54.171595       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:55.193566       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:55.193946       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:56.217037       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:56.217526       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:57.241439       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:57.241779       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:58.265994       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:58.266377       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:55:59.546967       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:55:59.547040       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:56:05.690707       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:56:05.690940       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:56:16.953537       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:56:16.953572       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:56:38.457529       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:56:38.457781       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:57:20.442949       1 replica_set.go:535] sync "default/gh-app-769cc5b7bb" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:57:20.443024       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-769cc5b7bb", UID:"5c28a191-b7c4-430b-9626-d947fa36d855", APIVersion:"apps/v1", ResourceVersion:"965", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:57:30.362999       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:57:30.363015       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:57:30.363051       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch", UID:"67aa6479-dfd8-44c6-9409-e48b77e0436a", APIVersion:"batch/v1", ResourceVersion:"711", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:57:30.427141       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 17:57:30.427209       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 17:57:30.427418       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-create", UID:"c2eb0699-13fc-40e1-a40e-9407d6446727", APIVersion:"batch/v1", ResourceVersion:"708", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:01:05.899660       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cloud-run-dev-internal", UID:"33d04963-153f-4f6c-af22-2244b80a1cd1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node cloud-run-dev-internal status is now: NodeNotReady
I0909 18:01:06.055163       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
E0909 18:01:07.024413       1 replica_set.go:535] sync "gcp-auth/gcp-auth-6c87ddc68d" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:01:07.024500       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"gcp-auth", Name:"gcp-auth-6c87ddc68d", UID:"ce89008a-7c49-4c8d-a553-92954ea42db6", APIVersion:"apps/v1", ResourceVersion:"760", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:01:26.058674       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
E0909 18:02:51.411174       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:02:51.411328       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:02:51.411702       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch", UID:"67aa6479-dfd8-44c6-9409-e48b77e0436a", APIVersion:"batch/v1", ResourceVersion:"711", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:02:51.474255       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:02:51.474313       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:02:51.474560       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-create", UID:"c2eb0699-13fc-40e1-a40e-9407d6446727", APIVersion:"batch/v1", ResourceVersion:"708", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:06:35.736494       1 replica_set.go:535] sync "gcp-auth/gcp-auth-6c87ddc68d" failed with Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:06:35.736557       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"gcp-auth", Name:"gcp-auth-6c87ddc68d", UID:"ce89008a-7c49-4c8d-a553-92954ea42db6", APIVersion:"apps/v1", ResourceVersion:"760", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:08:52.442532       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:08:52.442589       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:52.442946       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch", UID:"67aa6479-dfd8-44c6-9409-e48b77e0436a", APIVersion:"batch/v1", ResourceVersion:"711", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:08:52.508011       1 job_controller.go:793] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
E0909 18:08:52.508048       1 job_controller.go:398] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
I0909 18:08:52.508207       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"gcp-auth", Name:"gcp-auth-certs-create", UID:"c2eb0699-13fc-40e1-a40e-9407d6446727", APIVersion:"batch/v1", ResourceVersion:"708", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
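The traces above all reduce to one symptom: every call to the `gcp-auth-mutate.k8s.io` webhook at `gcp-auth.gcp-auth.svc:443` is refused, and because the webhook fails closed it blocks pod writes cluster-wide, including the `storage-provisioner` patch. A minimal diagnostic sketch follows; the configuration name `gcp-auth-webhook-cfg` and profile name are taken from the logs, the `--context` flag assumes the kubeconfig context matches the profile name (plausible since `--keep-context true` was used), and whether disabling the addon is the right recovery depends on the addon's state:

```shell
#!/usr/bin/env bash
# Diagnostic sketch for a gcp-auth webhook that is failing closed.
# Assumes kubectl and minikube are on PATH and that a kubeconfig context
# named after the profile exists; names come from the log output above.

PROFILE=cloud-run-dev-internal

# 1. Is the webhook's backing pod actually running in the gcp-auth namespace?
kubectl --context "$PROFILE" get pods -n gcp-auth

# 2. Which mutating webhook is intercepting pod writes?
kubectl --context "$PROFILE" get mutatingwebhookconfigurations

# 3. If the addon is wedged, disabling it removes the webhook so the
#    cluster (and storage-provisioner) can make progress again.
minikube addons disable gcp-auth -p "$PROFILE"
```

If the `gcp-auth` pods never come up (as the repeated `connection refused` suggests), step 3 unblocks the cluster; the addon can then be re-enabled once the underlying cause is fixed.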

==> kube-controller-manager [f291ea480107] <==
I0909 17:15:48.275888       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0909 17:15:48.275894       1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
I0909 17:15:48.527877       1 controllermanager.go:533] Started "endpointslice"
I0909 17:15:48.527930       1 endpointslice_controller.go:213] Starting endpoint slice controller
I0909 17:15:48.528104       1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0909 17:15:48.775766       1 controllermanager.go:533] Started "deployment"
I0909 17:15:48.775826       1 deployment_controller.go:153] Starting deployment controller
I0909 17:15:48.775833       1 shared_informer.go:223] Waiting for caches to sync for deployment
W0909 17:15:48.775847       1 controllermanager.go:525] Skipping "nodeipam"
I0909 17:15:48.778543       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
W0909 17:15:48.780248       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="cloud-run-dev-internal" does not exist
I0909 17:15:48.828953       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
I0909 17:15:48.870582       1 shared_informer.go:230] Caches are synced for TTL 
I0909 17:15:48.877540       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0909 17:15:48.877908       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0909 17:15:48.896731       1 shared_informer.go:230] Caches are synced for service account 
I0909 17:15:48.912316       1 shared_informer.go:230] Caches are synced for namespace 
I0909 17:15:49.076098       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
E0909 17:15:49.096312       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0909 17:15:49.376270       1 shared_informer.go:230] Caches are synced for deployment 
I0909 17:15:49.377397       1 shared_informer.go:230] Caches are synced for PVC protection 
I0909 17:15:49.378490       1 shared_informer.go:230] Caches are synced for resource quota 
I0909 17:15:49.378559       1 shared_informer.go:230] Caches are synced for daemon sets 
I0909 17:15:49.379298       1 shared_informer.go:230] Caches are synced for garbage collector 
I0909 17:15:49.379472       1 shared_informer.go:230] Caches are synced for persistent volume 
I0909 17:15:49.386641       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"280ceaa7-ef9a-423e-b3dd-6039ff7747ef", APIVersion:"apps/v1", ResourceVersion:"236", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
I0909 17:15:49.391120       1 shared_informer.go:230] Caches are synced for GC 
I0909 17:15:49.393295       1 shared_informer.go:230] Caches are synced for taint 
I0909 17:15:49.393654       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0909 17:15:49.393808       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
W0909 17:15:49.393936       1 node_lifecycle_controller.go:1048] Missing timestamp for Node cloud-run-dev-internal. Assuming now as a timestamp.
I0909 17:15:49.394033       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0909 17:15:49.394552       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cloud-run-dev-internal", UID:"33d04963-153f-4f6c-af22-2244b80a1cd1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node cloud-run-dev-internal event: Registered Node cloud-run-dev-internal in Controller
I0909 17:15:49.397545       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e75de29b-6b23-4240-9861-5301acb41619", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q2swm
I0909 17:15:49.402901       1 shared_informer.go:230] Caches are synced for disruption 
I0909 17:15:49.403029       1 disruption.go:339] Sending events to api server.
I0909 17:15:49.411991       1 shared_informer.go:230] Caches are synced for expand 
E0909 17:15:49.426611       1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"e75de29b-6b23-4240-9861-5301acb41619", ResourceVersion:"213", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735268543, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000b35880), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000b358a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000b358c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000972400), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b358e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b35900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000b35940)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000118b40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000bd22b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006a1420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000414a78)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000bd2308)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0909 17:15:49.427225       1 shared_informer.go:230] Caches are synced for HPA 
I0909 17:15:49.427263       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0909 17:15:49.433481       1 shared_informer.go:230] Caches are synced for job 
I0909 17:15:49.435191       1 shared_informer.go:230] Caches are synced for ReplicationController 
I0909 17:15:49.435228       1 shared_informer.go:230] Caches are synced for stateful set 
I0909 17:15:49.435243       1 shared_informer.go:230] Caches are synced for endpoint 
I0909 17:15:49.435252       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0909 17:15:49.442438       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a449b255-e94c-4a4a-968c-cd9f3b9020ae", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-747k6
I0909 17:15:49.467420       1 shared_informer.go:230] Caches are synced for garbage collector 
I0909 17:15:49.467464       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0909 17:15:49.467802       1 shared_informer.go:230] Caches are synced for attach detach 
I0909 17:15:49.476906       1 shared_informer.go:230] Caches are synced for PV protection 
I0909 17:15:49.528174       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0909 17:15:49.528216       1 shared_informer.go:230] Caches are synced for resource quota 
I0909 17:15:54.394945       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0909 17:17:16.579114       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"untitled2", UID:"ca49fe18-5293-48f5-b33d-85e2d70c365b", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set untitled2-7d5b9859cd to 1
I0909 17:17:16.600588       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"untitled2-7d5b9859cd", UID:"e5f10741-b401-4873-95d2-b65ece132ea8", APIVersion:"apps/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: untitled2-7d5b9859cd-6gqhg
I0909 17:48:33.835371       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cloud-run-dev-internal", UID:"33d04963-153f-4f6c-af22-2244b80a1cd1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node cloud-run-dev-internal status is now: NodeNotReady
I0909 17:48:34.018603       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0909 17:48:49.020173       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0909 17:49:47.112125       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"gh-app", UID:"4666c984-0644-486e-85bd-15d49013890a", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set gh-app-5f4bd4884c to 1
I0909 17:49:47.156458       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"gh-app-5f4bd4884c", UID:"08eee633-fc85-480c-a8ef-3abdaa86b174", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: gh-app-5f4bd4884c-rm7fw

==> kube-proxy [b1dfee41b79b] <==
W0909 17:15:50.250310       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0909 17:15:50.261612       1 node.go:136] Successfully retrieved node IP: 172.17.0.3
I0909 17:15:50.261679       1 server_others.go:186] Using iptables Proxier.
W0909 17:15:50.261687       1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0909 17:15:50.261692       1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0909 17:15:50.262437       1 server.go:583] Version: v1.18.3
I0909 17:15:50.263235       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0909 17:15:50.263310       1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0909 17:15:50.264083       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0909 17:15:50.264232       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0909 17:15:50.264329       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0909 17:15:50.265553       1 config.go:315] Starting service config controller
I0909 17:15:50.265578       1 shared_informer.go:223] Waiting for caches to sync for service config
I0909 17:15:50.265744       1 config.go:133] Starting endpoints config controller
I0909 17:15:50.266636       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0909 17:15:50.366691       1 shared_informer.go:230] Caches are synced for service config 
I0909 17:15:50.367543       1 shared_informer.go:230] Caches are synced for endpoints config 

==> kube-proxy [f51984c1c831] <==
W0909 17:52:11.342005       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0909 17:52:11.380267       1 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0909 17:52:11.380350       1 server_others.go:186] Using iptables Proxier.
W0909 17:52:11.380792       1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0909 17:52:11.380798       1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0909 17:52:11.381130       1 server.go:583] Version: v1.18.3
I0909 17:52:11.383594       1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0909 17:52:11.384075       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0909 17:52:11.384233       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0909 17:52:11.384413       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0909 17:52:11.390302       1 config.go:315] Starting service config controller
I0909 17:52:11.390344       1 shared_informer.go:223] Waiting for caches to sync for service config
I0909 17:52:11.429839       1 config.go:133] Starting endpoints config controller
I0909 17:52:11.430039       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0909 17:52:11.525224       1 shared_informer.go:230] Caches are synced for service config 
I0909 17:52:11.532293       1 shared_informer.go:230] Caches are synced for endpoints config 

==> kube-scheduler [5036fe59be26] <==
I0909 17:15:35.790610       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:15:35.790777       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:15:36.801168       1 serving.go:313] Generated self-signed cert in-memory
W0909 17:15:39.814147       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0909 17:15:39.814191       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0909 17:15:39.814200       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0909 17:15:39.814204       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0909 17:15:39.873087       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:15:39.873127       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0909 17:15:39.878521       1 authorization.go:47] Authorization is disabled
W0909 17:15:39.878757       1 authentication.go:40] Authentication is disabled
I0909 17:15:39.879035       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0909 17:15:39.888831       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0909 17:15:39.891180       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0909 17:15:39.891218       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0909 17:15:39.891247       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0909 17:15:39.894786       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0909 17:15:39.896489       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0909 17:15:39.896977       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0909 17:15:39.897155       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0909 17:15:39.897229       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0909 17:15:39.897264       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0909 17:15:39.897396       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0909 17:15:39.897687       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0909 17:15:39.897997       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0909 17:15:40.752609       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0909 17:15:40.894546       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0909 17:15:41.025684       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0909 17:15:41.391928       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kube-scheduler [ae8f363d05d6] <==
I0909 17:52:04.160857       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:52:04.161037       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:52:04.746012       1 serving.go:313] Generated self-signed cert in-memory
I0909 17:52:07.965120       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0909 17:52:07.965153       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0909 17:52:07.968902       1 authorization.go:47] Authorization is disabled
W0909 17:52:07.968915       1 authentication.go:40] Authentication is disabled
I0909 17:52:07.968924       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0909 17:52:07.970483       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0909 17:52:07.970501       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0909 17:52:07.970533       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0909 17:52:07.970537       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0909 17:52:07.971028       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0909 17:52:07.973727       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0909 17:52:08.071035       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0909 17:52:08.071040       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Wed 2020-09-09 17:51:55 UTC, end at Wed 2020-09-09 18:13:18 UTC. --
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: E0909 18:01:11.052118    4705 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.071975    4705 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-747k6 through plugin: invalid network status for
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.073569    4705 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-747k6 through plugin: invalid network status for
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.074253    4705 clientconn.go:106] parsed scheme: "unix"
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.074306    4705 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.074859    4705 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.074897    4705 clientconn.go:933] ClientConn switching balancer to "pick_first"
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.097240    4705 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: E0909 18:01:11.152339    4705 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.156623    4705 kubelet_node_status.go:70] Attempting to register node cloud-run-dev-internal
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.176593    4705 kubelet_node_status.go:112] Node cloud-run-dev-internal was previously registered
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.178098    4705 kubelet_node_status.go:73] Successfully registered node cloud-run-dev-internal
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.207593    4705 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-09-09 18:01:11.207575012 +0000 UTC m=+6.129152873 LastTransitionTime:2020-09-09 18:01:11.207575012 +0000 UTC m=+6.129152873 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.340298    4705 cpu_manager.go:184] [cpumanager] starting with none policy
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.340427    4705 cpu_manager.go:185] [cpumanager] reconciling every 10s
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.340478    4705 state_mem.go:36] [cpumanager] initializing new in-memory state store
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.341070    4705 state_mem.go:88] [cpumanager] updated default cpuset: ""
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.341191    4705 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.341239    4705 policy_none.go:43] [cpumanager] none policy: Start
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.345657    4705 plugin_manager.go:114] Starting Kubelet Plugin Manager
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.353355    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.354846    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.355828    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.358511    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.359957    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.360871    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.365146    4705 topology_manager.go:233] [topologymanager] Topology Admit Handler
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.367449    4705 pod_container_deletor.go:77] Container "19061c7908c2e421e083a24997c7e120aaff0e45bb595dd1f8e576434306dad9" not found in pod's containers
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.367718    4705 pod_container_deletor.go:77] Container "d0b691508b40318a578b63d727072acd0a7d26a8cc6b0de47385761c93868b2b" not found in pod's containers
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.367827    4705 pod_container_deletor.go:77] Container "4d44283e30c55ed757cf89e8d65c5231c5291a2fac353ec1415557d4e9d8dc31" not found in pod's containers
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: W0909 18:01:11.367856    4705 pod_container_deletor.go:77] Container "cf48984d07c65ed013ba5126f9cf1370ab0205ebd4f2a8041cdb8d27cb0a775c" not found in pod's containers
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410250    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-ca-certs") pod "kube-apiserver-cloud-run-dev-internal" (UID: "26697bfbbac02de77868d7e47b99d36f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410320    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-etc-ca-certificates") pod "kube-apiserver-cloud-run-dev-internal" (UID: "26697bfbbac02de77868d7e47b99d36f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410342    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-usr-local-share-ca-certificates") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410360    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-usr-share-ca-certificates") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410374    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-k7pl8" (UniqueName: "kubernetes.io/secret/400d26a6-7b46-47b3-a9e1-c2f009d2d245-coredns-token-k7pl8") pod "coredns-66bff467f8-747k6" (UID: "400d26a6-7b46-47b3-a9e1-c2f009d2d245")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410386    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-ds5tg" (UniqueName: "kubernetes.io/secret/b3b10e76-70be-4e5f-b28c-9ff0ab37fde1-kube-proxy-token-ds5tg") pod "kube-proxy-q2swm" (UID: "b3b10e76-70be-4e5f-b28c-9ff0ab37fde1")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410398    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcddbd0cc8c89e2cbf4de5d3cca8769f-kubeconfig") pod "kube-scheduler-cloud-run-dev-internal" (UID: "dcddbd0cc8c89e2cbf4de5d3cca8769f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410414    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/47504f2a53fa15537d86b10984adcc2d-etcd-certs") pod "etcd-cloud-run-dev-internal" (UID: "47504f2a53fa15537d86b10984adcc2d")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410426    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-k8s-certs") pod "kube-apiserver-cloud-run-dev-internal" (UID: "26697bfbbac02de77868d7e47b99d36f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410440    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-usr-local-share-ca-certificates") pod "kube-apiserver-cloud-run-dev-internal" (UID: "26697bfbbac02de77868d7e47b99d36f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410451    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-ca-certs") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410465    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-kubeconfig") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410499    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/071cc4ef-6ee5-4668-871c-4131f421dfe8-tmp") pod "storage-provisioner" (UID: "071cc4ef-6ee5-4668-871c-4131f421dfe8")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410517    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b3b10e76-70be-4e5f-b28c-9ff0ab37fde1-kube-proxy") pod "kube-proxy-q2swm" (UID: "b3b10e76-70be-4e5f-b28c-9ff0ab37fde1")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410529    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b3b10e76-70be-4e5f-b28c-9ff0ab37fde1-lib-modules") pod "kube-proxy-q2swm" (UID: "b3b10e76-70be-4e5f-b28c-9ff0ab37fde1")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410544    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-etc-ca-certificates") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410560    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-flexvolume-dir") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410573    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-k8s-certs") pod "kube-controller-manager-cloud-run-dev-internal" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410590    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/47504f2a53fa15537d86b10984adcc2d-etcd-data") pod "etcd-cloud-run-dev-internal" (UID: "47504f2a53fa15537d86b10984adcc2d")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410622    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cqd67" (UniqueName: "kubernetes.io/secret/071cc4ef-6ee5-4668-871c-4131f421dfe8-storage-provisioner-token-cqd67") pod "storage-provisioner" (UID: "071cc4ef-6ee5-4668-871c-4131f421dfe8")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410645    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/400d26a6-7b46-47b3-a9e1-c2f009d2d245-config-volume") pod "coredns-66bff467f8-747k6" (UID: "400d26a6-7b46-47b3-a9e1-c2f009d2d245")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410673    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b3b10e76-70be-4e5f-b28c-9ff0ab37fde1-xtables-lock") pod "kube-proxy-q2swm" (UID: "b3b10e76-70be-4e5f-b28c-9ff0ab37fde1")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410689    4705 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-usr-share-ca-certificates") pod "kube-apiserver-cloud-run-dev-internal" (UID: "26697bfbbac02de77868d7e47b99d36f")
Sep 09 18:01:11 cloud-run-dev-internal kubelet[4705]: I0909 18:01:11.410698    4705 reconciler.go:157] Reconciler: start to sync state
Sep 09 18:01:12 cloud-run-dev-internal kubelet[4705]: I0909 18:01:12.384031    4705 request.go:621] Throttling request took 1.017729712s, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0
Sep 09 18:01:12 cloud-run-dev-internal kubelet[4705]: E0909 18:01:12.399022    4705 kubelet.go:1663] Failed creating a mirror pod for "kube-scheduler-cloud-run-dev-internal_kube-system(dcddbd0cc8c89e2cbf4de5d3cca8769f)": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
Sep 09 18:01:12 cloud-run-dev-internal kubelet[4705]: E0909 18:01:12.399216    4705 kubelet.go:1663] Failed creating a mirror pod for "etcd-cloud-run-dev-internal_kube-system(47504f2a53fa15537d86b10984adcc2d)": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
Sep 09 18:01:12 cloud-run-dev-internal kubelet[4705]: E0909 18:01:12.399880    4705 kubelet.go:1663] Failed creating a mirror pod for "kube-controller-manager-cloud-run-dev-internal_kube-system(8a9925b92c1bf68a9656aa86994b3aca)": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused
Sep 09 18:01:12 cloud-run-dev-internal kubelet[4705]: E0909 18:01:12.400539    4705 kubelet.go:1663] Failed creating a mirror pod for "kube-apiserver-cloud-run-dev-internal_kube-system(26697bfbbac02de77868d7e47b99d36f)": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.99.244.159:443: connect: connection refused

==> storage-provisioner [5e7b6ab30497] <==
F0909 17:52:10.657737       1 main.go:39] error getting server version: Get https://10.96.0.1:443/version?timeout=32s: x509: certificate signed by unknown authority

==> storage-provisioner [615ce34eb29c] <==
I0909 17:52:27.931793       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I0909 17:52:45.336826       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0909 17:52:45.337285       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_490513b9-4a4b-4b1c-91ee-af6150d3ac8d!
I0909 17:52:45.337363       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1ec5dc5-1823-4205-8d48-5a9533d41342", APIVersion:"v1", ResourceVersion:"812", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cloud-run-dev-internal_490513b9-4a4b-4b1c-91ee-af6150d3ac8d became leader
I0909 17:52:45.438264       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_490513b9-4a4b-4b1c-91ee-af6150d3ac8d!
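
The `failed calling webhook "gcp-auth-mutate.k8s.io"` errors in the kubelet log above indicate the webhook was registered with the API server before its backing service was serving. A hedged diagnostic sketch (the exact MutatingWebhookConfiguration name and the `gcp-auth` deployment/namespace names may differ by minikube version, so list them first rather than assuming):

```shell
# List mutating webhook configurations to find the gcp-auth entry
kubectl get mutatingwebhookconfigurations

# Check whether the gcp-auth addon pods are actually running and serving
kubectl -n gcp-auth get pods
kubectl -n gcp-auth rollout status deployment/gcp-auth --timeout=60s

# As a last resort, remove the stuck webhook so core addons can start again;
# replace NAME with the configuration name found above
kubectl delete mutatingwebhookconfiguration NAME
```

These commands require a live cluster; the delete is destructive and only appropriate if the addon is being re-enabled afterwards.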

Labels: area/addons area/provider/gcp kind/bug priority/important-soon uembedded

All 6 comments

cc @sharifelgamal

If you're looking for another output (it looks to be slightly different), I was able to repro it too.

➜  bin ./minikube start --addons gcp-auth
😄  minikube v1.12.3 on Darwin 10.15.6
✨  Using the docker driver based on existing profile
🎉  minikube 1.13.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.13.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

❗  Requested memory allocation (1990MB) is less than the recommended minimum 2000MB. Kubernetes may crash unexpectedly.
❗  Your system has 16384MB memory but Docker has only 2996MB. For a better performance increase to at least 3GB.

    Docker for Desktop  > Settings > Resources > Memory


👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
❗  Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟  Enabled addons: default-storageclass, gcp-auth, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

❗  /usr/local/bin/kubectl is version 1.16.6-beta.0, which may be incompatible with Kubernetes 1.18.3.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version
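
The output above mentions opting a pod out of credential mounting with the `gcp-auth-skip-secret` label key. As an illustration only (a config fragment; the pod name is hypothetical and the exact label semantics depend on the addon version), such a pod spec might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-pod              # hypothetical name for illustration
  labels:
    gcp-auth-skip-secret: "true"  # tells the gcp-auth webhook to skip this pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```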

I think the problem might be happening because gcp-auth is enabled before storage-provisioner and the other default addon.
Maybe we should make sure all addons other than the default addons are enabled and started only once those two are enabled.
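
Until such an ordering fix lands, a workaround consistent with this suggestion (a sketch, not an official recommendation) is to start the cluster without the flag and enable the addon only after the default addons are up:

```shell
# Start minikube with only the default addons (storage-provisioner,
# default-storageclass); do not pass --addons gcp-auth here.
minikube start --profile cloud-run-dev-internal --vm-driver docker

# Enable gcp-auth afterwards, so its mutating webhook is not registered
# before the core kube-system pods have been created.
minikube addons enable gcp-auth -p cloud-run-dev-internal
```

This requires Docker and a working minikube install; the profile name matches the one used in this issue.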

Do you think this is something we can prioritize? The IDEs are looking to leverage this functionality for upcoming features.

@medyagh - agreed. This seems to be a short race condition where a webhook is defined, but not yet running. Ideally, we should remove the race condition altogether, but it would be less terrible if we enabled it last or waited for this addon to finish deploying before enabling others.

@matthewmichihara - It's on the list for the next release.

Thanks for the fix! When should I expect this to show up in gcloud?
