K3s: Pi cluster not responding after reboot

Created on 7 Dec 2020 · 19 Comments · Source: k3s-io/k3s

Environmental Info:
K3s Version: v1.19.4+k3s1

Node(s) CPU architecture, OS, and Version:
All three Pis are running HypriotOS version 1.12, but this issue has also been reproduced on the latest release of Raspbian Lite.
Master: Linux pi3black 4.19.75-v7+ #1270 SMP Tue Sep 24 18:45:11 BST 2019 armv7l GNU/Linux
Node 1: Linux pi4red 4.19.75-v7l+ #1270 SMP Tue Sep 24 18:51:41 BST 2019 armv7l GNU/Linux
Node 2: Linux pi4blue 4.19.75-v7l+ #1270 SMP Tue Sep 24 18:51:41 BST 2019 armv7l GNU/Linux

Cluster Configuration:
The cluster consists of a Raspberry Pi 3B as master and two Raspberry Pi 4B (4 GB) as workers, each booting from a 32 GB microSD card.

Describe the bug:
After setting up the cluster with K3s as described in the official documentation, it works well until a reboot is performed. In this specific case all three Pis were shut down following the recommended procedure before disconnecting power. Once the cluster is powered up again, the master node becomes Ready but both workers remain NotReady.

The only way to get the workers back up is to uninstall and reinstall k3s on them.
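
For reference, "uninstalling" here means running the script the k3s installer drops on each agent (assuming a default installation path) and then re-joining with the same agent command sketched under Steps To Reproduce below:

sudo /usr/local/bin/k3s-agent-uninstall.sh   # on each NotReady worker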

Two more things have been observed:

  • The master node seems to reboot several times over an extended period, as shown in the logs below.
  • For the first 20 to 30 minutes the Pi's green LED (indicating SD card read/write activity) stays lit continuously and, although the master is reachable, any request for logs or SSH connection to it takes a long time.

Steps To Reproduce:

  • Installed K3s:
    The cluster was set up following this guide: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s, with no changes to the suggested steps and no extra configuration, apart from the hostnames and the chosen OS (as mentioned above, it has also been tested on Raspbian with the same results). The install commands are sketched below.
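
A rough sketch of the install commands from that guide (here <master-ip> and <node-token> are placeholders; the token can be read from /var/lib/rancher/k3s/server/node-token on the master):

# on the master
curl -sfL https://get.k3s.io | sh -
# on each worker
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<node-token> sh -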

Expected behavior:
Worker nodes come back up after a reboot without having to uninstall and reinstall k3s on them.

Actual behavior:
After a reboot the worker nodes remain NotReady.

Additional context / logs:
Logs from journalctl -u k3s on the master (the worker nodes return no logs)

Dec 07 18:17:53 pi3black systemd[1]: Starting Lightweight Kubernetes...
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.553343964Z" level=info msg="Starting k3s v1
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.558621906Z" level=info msg="Cluster bootstr
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.849256112Z" level=info msg="Configuring sql
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.849490432Z" level=info msg="Configuring dat
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.851068377Z" level=info msg="Database tables
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.889979150Z" level=info msg="Kine listening 
Dec 07 18:18:03 pi3black k3s[358]: time="2020-12-07T18:18:03.891270746Z" level=info msg="Running kube-ap
Dec 07 18:18:03 pi3black k3s[358]: I1207 18:18:03.899815     358 server.go:652] external host was not sp
Dec 07 18:18:03 pi3black k3s[358]: I1207 18:18:03.922329     358 server.go:177] Version: v1.19.4+k3s1
Dec 07 18:18:04 pi3black k3s[358]: I1207 18:18:04.200762     358 plugins.go:158] Loaded 12 mutating admi
Dec 07 18:18:04 pi3black k3s[358]: I1207 18:18:04.201271     358 plugins.go:161] Loaded 10 validating ad
Dec 07 18:18:04 pi3black k3s[358]: I1207 18:18:04.212908     358 plugins.go:158] Loaded 12 mutating admi
Dec 07 18:18:04 pi3black k3s[358]: I1207 18:18:04.213040     358 plugins.go:161] Loaded 10 validating ad
Dec 07 18:18:04 pi3black k3s[358]: I1207 18:18:04.567379     358 master.go:271] Using reconciler: lease
Dec 07 18:18:05 pi3black k3s[358]: I1207 18:18:05.208941     358 trace.go:205] Trace[998722919]: "List e
Dec 07 18:18:05 pi3black k3s[358]: Trace[998722919]: [947.849559ms] [947.849559ms] END
Dec 07 18:18:05 pi3black k3s[358]: I1207 18:18:05.242085     358 trace.go:205] Trace[499964422]: "List e
Dec 07 18:18:05 pi3black k3s[358]: Trace[499964422]: [944.983559ms] [944.983559ms] END
Dec 07 18:18:05 pi3black k3s[358]: I1207 18:18:05.396305     358 trace.go:205] Trace[1318525765]: "List 
Dec 07 18:18:05 pi3black k3s[358]: Trace[1318525765]: [582.089546ms] [582.089546ms] END
Dec 07 18:18:07 pi3black k3s[358]: W1207 18:18:07.354863     358 genericapiserver.go:412] Skipping API b

Output from kubectl describe node pi3black

Name:               pi3black
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=pi3black
                    k3s.io/internal-ip=192.168.1.148
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=pi3black
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:5b:39:f6:d2:07"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.148
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: RCK5KK43QJVI3DZORFHFQOWFMNEM6XGEJNSFOMGJYIVHABPZMHBQ====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 01 Dec 2020 00:45:59 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi3black
  AcquireTime:     <unset>
  RenewTime:       Mon, 07 Dec 2020 20:09:10 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 07 Dec 2020 19:43:56 +0100   Mon, 07 Dec 2020 19:43:56 +0100   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Mon, 07 Dec 2020 20:07:14 +0100   Mon, 07 Dec 2020 20:07:14 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 07 Dec 2020 20:07:14 +0100   Mon, 07 Dec 2020 20:07:14 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 07 Dec 2020 20:07:14 +0100   Mon, 07 Dec 2020 20:07:14 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 07 Dec 2020 20:07:14 +0100   Mon, 07 Dec 2020 20:07:14 +0100   KubeletReady                 kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.1.148
  Hostname:    pi3black
Capacity:
  cpu:                4
  ephemeral-storage:  28751636Ki
  memory:             999036Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  27969591479
  memory:             999036Ki
  pods:               110
System Info:
  Machine ID:                 92f2e380e2838c11fa712a9f5fc57e1c
  System UUID:                92f2e380e2838c11fa712a9f5fc57e1c
  Boot ID:                    11aa699c-698b-4488-a756-e3f3e2e0600a
  Kernel Version:             4.19.75-v7+
  OS Image:                   Raspbian GNU/Linux 10 (buster)
  Operating System:           linux
  Architecture:               arm
  Container Runtime Version:  containerd://1.4.1-k3s1
  Kubelet Version:            v1.19.4+k3s1
  Kube-Proxy Version:         v1.19.4+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://pi3black
Non-terminated Pods:          (11 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  traefik                     traefik-77fdb5c487-kfjgw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d17h
  monitoring                  prometheus-operator-67755f959-l4vk5       100m (2%)     200m (5%)   100Mi (10%)      200Mi (20%)    10h
  kube-system                 coredns-66c464876b-hspnh                  100m (2%)     0 (0%)      70Mi (7%)        170Mi (17%)    6d19h
  monitoring                  prometheus-adapter-585b57857b-t7t9r       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10h
  traefik                     svclb-traefik-fx2bh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d18h
  monitoring                  node-exporter-sncps                       112m (2%)     270m (6%)   200Mi (20%)      220Mi (22%)    6d18h
  kube-system                 metrics-server-7b4f8b595-96phd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d19h
  monitoring                  kube-state-metrics-6cb6df5d4-tl8hg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d17h
  monitoring                  arm-exporter-w2mmz                        60m (1%)      120m (3%)   70Mi (7%)        140Mi (14%)    6d18h
  kube-system                 local-path-provisioner-7ff9579c6-7ngcr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d19h
  monitoring                  grafana-7cccfc9b5f-ndznx                  100m (2%)     200m (5%)   100Mi (10%)      200Mi (20%)    6d17h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                472m (11%)   790m (19%)
  memory             540Mi (55%)  930Mi (95%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                    From        Message
  ----     ------                   ----                   ----        -------
  Warning  InvalidDiskCapacity      6d17h                  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  6d17h                  kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  6d17h (x2 over 6d17h)  kubelet     Node pi3black status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6d17h (x2 over 6d17h)  kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     6d17h (x2 over 6d17h)  kubelet     Node pi3black status is now: NodeHasSufficientPID
  Warning  Rebooted                 6d17h                  kubelet     Node pi3black has been rebooted, boot id: 51af9949-c8b7-40e8-9b0f-ac2f97352e98
  Normal   Starting                 6d17h                  kube-proxy  Starting kube-proxy.
  Warning  InvalidDiskCapacity      10h                    kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  10h                    kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  10h (x2 over 10h)      kubelet     Node pi3black status is now: NodeHasSufficientMemory
  Normal   NodeHasSufficientPID     10h (x2 over 10h)      kubelet     Node pi3black status is now: NodeHasSufficientPID
  Normal   NodeHasNoDiskPressure    10h (x2 over 10h)      kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   Starting                 10h                    kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 10h                    kubelet     Node pi3black has been rebooted, boot id: 6c45a63e-d319-49c9-9560-f861d6b76ac0
  Normal   Starting                 9h                     kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      9h                     kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  9h                     kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 9h                     kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 9h                     kubelet     Node pi3black has been rebooted, boot id: 70c5ddfc-bea7-4f1e-bcdb-fa0e9ba4c42a
  Normal   NodeHasSufficientMemory  9h (x3 over 9h)        kubelet     Node pi3black status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    9h (x3 over 9h)        kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9h (x3 over 9h)        kubelet     Node pi3black status is now: NodeHasSufficientPID
  Normal   NodeReady                9h                     kubelet     Node pi3black status is now: NodeReady
  Normal   Starting                 9h                     kube-proxy  Starting kube-proxy.
  Warning  InvalidDiskCapacity      9h                     kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 9h                     kubelet     Starting kubelet.
  Warning  Rebooted                 9h                     kubelet     Node pi3black has been rebooted, boot id: 3b71c7e7-92c7-436f-8d2b-751bd54e991b
  Normal   NodeAllocatableEnforced  9h                     kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  9h (x2 over 9h)        kubelet     Node pi3black status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    9h (x2 over 9h)        kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9h (x2 over 9h)        kubelet     Node pi3black status is now: NodeHasSufficientPID
  Normal   NodeNotReady             9h                     kubelet     Node pi3black status is now: NodeNotReady
  Normal   NodeReady                9h                     kubelet     Node pi3black status is now: NodeReady
  Warning  InvalidDiskCapacity      50m                    kubelet     invalid capacity 0 on image filesystem
  Warning  InvalidDiskCapacity      49m                    kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  49m                    kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    49m (x2 over 49m)      kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     49m (x2 over 49m)      kubelet     Node pi3black status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  49m (x2 over 49m)      kubelet     Node pi3black status is now: NodeHasSufficientMemory
  Normal   Starting                 49m                    kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 49m                    kubelet     Node pi3black has been rebooted, boot id: ce845a49-c0c8-4bde-8baa-37f72c29f0f0
  Normal   NodeAllocatableEnforced  25m                    kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 25m                    kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 25m                    kubelet     Node pi3black has been rebooted, boot id: 11aa699c-698b-4488-a756-e3f3e2e0600a
  Normal   NodeHasSufficientPID     13m (x6 over 50m)      kubelet     Node pi3black status is now: NodeHasSufficientPID
  Normal   NodeReady                13m (x4 over 21m)      kubelet     Node pi3black status is now: NodeReady
  Normal   NodeHasNoDiskPressure    12m (x7 over 50m)      kubelet     Node pi3black status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  2m6s (x12 over 50m)    kubelet     Node pi3black status is now: NodeHasSufficientMemory

Output from kubectl describe node pi4red (the other agent shows the same events)

Name:               pi4red
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=pi4red
                    k3s.io/internal-ip=192.168.1.147
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=pi4red
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"42:e7:ce:3d:4c:a5"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.147
                    k3s.io/node-args: ["agent"]
                    k3s.io/node-config-hash: SQEHIF5GT7BFJ4ANSFICCXHSRBGLTAIIPZM3CE6NFNP4H3V2APIA====
                    k3s.io/node-env:
                      {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e","K3S_TOKEN":"********","K3S_U...
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 01 Dec 2020 00:51:56 +0100
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4red
  AcquireTime:     <unset>
  RenewTime:       Tue, 01 Dec 2020 02:29:01 +0100
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----                 ------    -----------------                 ------------------                ------              -------
  NetworkUnavailable   False     Tue, 01 Dec 2020 02:20:52 +0100   Tue, 01 Dec 2020 02:20:52 +0100   FlannelIsUp         Flannel is running on this node
  MemoryPressure       Unknown   Tue, 01 Dec 2020 02:26:08 +0100   Tue, 01 Dec 2020 02:29:42 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure         Unknown   Tue, 01 Dec 2020 02:26:08 +0100   Tue, 01 Dec 2020 02:29:42 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure          Unknown   Tue, 01 Dec 2020 02:26:08 +0100   Tue, 01 Dec 2020 02:29:42 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready                Unknown   Tue, 01 Dec 2020 02:26:08 +0100   Tue, 01 Dec 2020 02:29:42 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  192.168.1.147
  Hostname:    pi4red
Capacity:
  cpu:                4
  ephemeral-storage:  29278068Ki
  memory:             4051024Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  28481704529
  memory:             4051024Ki
  pods:               110
System Info:
  Machine ID:                 0c614fc95172029ca90987fe5fc57ee3
  System UUID:                0c614fc95172029ca90987fe5fc57ee3
  Boot ID:                    a38c1493-acc5-487d-81d2-71deeced9cc9
  Kernel Version:             4.19.75-v7l+
  OS Image:                   Raspbian GNU/Linux 10 (buster)
  Operating System:           linux
  Architecture:               arm
  Container Runtime Version:  containerd://1.4.1-k3s1
  Kubelet Version:            v1.19.4+k3s1
  Kube-Proxy Version:         v1.19.4+k3s1
PodCIDR:                      10.42.2.0/24
PodCIDRs:                     10.42.2.0/24
ProviderID:                   k3s://pi4red
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                  ------------  ----------  ---------------  -------------  ---
  traefik                     svclb-traefik-7jdkk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d18h
  monitoring                  arm-exporter-jwl4m                    60m (1%)      120m (3%)   70Mi (1%)        140Mi (3%)     6d18h
  monitoring                  node-exporter-f52rn                   112m (2%)     270m (6%)   200Mi (5%)       220Mi (5%)     6d18h
  monitoring                  grafana-7cccfc9b5f-7njv9              100m (2%)     200m (5%)   100Mi (2%)       200Mi (5%)     6d18h
  traefik                     traefik-77fdb5c487-phblc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d18h
  monitoring                  kube-state-metrics-6cb6df5d4-kkxcd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d18h
  monitoring                  alertmanager-main-0                   100m (2%)     100m (2%)   225Mi (5%)       25Mi (0%)      6d18h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                372m (9%)    690m (17%)
  memory             595Mi (15%)  585Mi (14%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type     Reason                   Age    From        Message
  ----     ------                   ----   ----        -------
  Normal   Starting                 6d17h  kube-proxy  Starting kube-proxy.
  Normal   Starting                 6d17h  kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      6d17h  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  6d17h  kubelet     Node pi4red status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6d17h  kubelet     Node pi4red status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     6d17h  kubelet     Node pi4red status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  6d17h  kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 6d17h  kubelet     Node pi4red has been rebooted, boot id: a38c1493-acc5-487d-81d2-71deeced9cc9

Output from kubectl get nodes -w over the first 30 minutes after boot

pi3black   Ready      master   6d18h   v1.19.4+k3s1
pi4red     NotReady   <none>   6d18h   v1.19.4+k3s1
pi4blue    NotReady   <none>   6d18h   v1.19.4+k3s1
pi4blue    NotReady   <none>   6d18h   v1.19.4+k3s1
pi4blue    NotReady   <none>   6d18h   v1.19.4+k3s1
pi3black   NotReady   master   6d19h   v1.19.4+k3s1
pi3black   NotReady   master   6d19h   v1.19.4+k3s1
pi4red     NotReady   <none>   6d18h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1
pi4red     NotReady   <none>   6d19h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1
pi4red     NotReady   <none>   6d19h   v1.19.4+k3s1
pi3black   NotReady   master   6d19h   v1.19.4+k3s1
pi3black   NotReady   master   6d19h   v1.19.4+k3s1
pi3black   NotReady   master   6d19h   v1.19.4+k3s1
pi4red     NotReady   <none>   6d19h   v1.19.4+k3s1
pi4blue    NotReady   <none>   6d19h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1
pi3black   Ready      master   6d19h   v1.19.4+k3s1

All 19 comments

I would probably taint the server to ensure that it doesn't end up with any pods running on it. Either that or make one of the 4Bs the master. The 3s are capable of running k3s plus a small number of pods, but due to slow SD card IO will frequently struggle to keep up with datastore operations while also pulling and running images at the same time. If you do want to run it on the 3, besides tainting it, you should also ensure you're using a high-speed SD card, or external (USB) storage.
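
For example, one way to keep regular pods off the server (the taint key here is arbitrary; DaemonSets with matching tolerations can still run):

kubectl taint nodes pi3black node-role.kubernetes.io/master=true:NoSchedule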

I'm seeing very similar behaviour, but my problems start at installation time on the master node:

pi@rpi3:~ $ free -m
              total        used        free      shared  buff/cache   available
Mem:            962         100         484          12         377         793
Swap:            99           0          99
pi@rpi3:~ $ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.19.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.4+k3s1/sha256sum-arm64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.4+k3s1/k3s-arm64
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s



Stuck here for more than 15 minutes now.

There is high disk usage and then it runs out of memory. If I disconnect it from power and boot it again, it runs OK for less than a minute, then stops responding again. It has to be a new regression; I was able to create a cluster 3-4 months ago on the same RPi and high-speed SD card.

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.17.14+k3s3 sh -
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.18.12+k3s2 sh -

Both start successfully, so it looks like a v1.19 regression.

@agilob can you provide logs? journalctl --no-pager -u k3s
Are you running arm64 on a Pi3? As far as I know they don't actually benefit much from arm64 because they have so little memory.
Also, it looks like you have swap enabled. Best practice for Kubernetes is to run without swap, especially on systems where swap is on slow storage like a SD card.
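
On Raspbian, swap is usually managed by dphys-swapfile, so disabling it persistently should look something like this (a sketch; adjust for your distro):

sudo dphys-swapfile swapoff            # turn off the active swap file
sudo dphys-swapfile uninstall          # remove the swap file
sudo systemctl disable dphys-swapfile  # keep it from coming back at boot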

Here's that same release running on my Pi3b, freshly installed just now:

root@pi03:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.19.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.4+k3s1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.4+k3s1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

root@pi03:~# kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
pi03.lan.khaus   Ready    master   17s   v1.19.4+k3s1   10.0.1.25     <none>        Ubuntu 20.10   5.8.0-1007-raspi   containerd://1.4.1-k3s1

root@pi03:~# free
              total        used        free      shared  buff/cache   available
Mem:         873248      471092       10860       12304      391296      374748
Swap:             0           0           0

root@pi03:~# uname -a
Linux pi03.lan.khaus 5.8.0-1007-raspi #10-Ubuntu SMP PREEMPT Thu Nov 5 18:01:40 UTC 2020 armv7l armv7l armv7l GNU/Linux

@agilob can you provide logs? journalctl --no-pager -u k3s

Well, not really; it takes 3-5 seconds after the k3s server is started for the system to become unresponsive, so there's not much time to execute the command.
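
One way around that, assuming systemd's default journald setup, is to make the journal persistent and then read the previous boot's logs after power-cycling:

# make journald keep logs across reboots (Storage=auto persists once this directory exists)
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald
# ...after the machine locks up and is power-cycled:
journalctl -b -1 --no-pager -u k3s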

Are you running arm64 on a Pi3?

Yes, 64-bit Raspbian.

As far as I know they don't actually benefit much from arm64 because they have so little memory.

I run it just because I can, not for performance or anything.

Also, it looks like you have swap enabled. Best practice for Kubernetes is to run without swap, especially on systems where swap is on slow storage like a SD card.

freshly installed just now:

I also tried on 64-bit Ubuntu with a much slower SD card, with the same effect.

Nevertheless, I installed k3s just for testing. All my attempts to use k3s at home (going back 2 years, with 0 successes) fail tragically with random TLS errors. k3s never starts correctly, crashes often, and even when it starts and runs, agent nodes have problems (re)connecting due to many TLS errors like #1884 and others, so I've already uninstalled k3s from all my devices.

Dec 08 18:33:54 rpi3 k3s[3072]: E1208 18:33:54.750766    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:54 rpi3 k3s[3072]: E1208 18:33:54.851642    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:54 rpi3 k3s[3072]: E1208 18:33:54.952856    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.053569    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.154264    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: time="2020-12-08T18:33:55.192865300Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.255534    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.356598    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.456957    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.559702    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.662447    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.762740    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.863125    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: E1208 18:33:55.964129    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:55 rpi3 k3s[3072]: time="2020-12-08T18:33:55.969066142Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.065040    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.165457    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.229118    3072 node.go:125] Failed to retrieve node info: nodes "rpi3" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.266531    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.368506    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.469936    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: time="2020-12-08T18:33:56.529980809Z" level=info msg="Waiting for node rpi3: nodes \"rpi3\" not found"
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.571054    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.673607    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.776471    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.878741    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:56 rpi3 k3s[3072]: E1208 18:33:56.980964    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.086352    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.188085    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.290636    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.391590    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.492055    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.592316    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.695001    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.796136    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.898382    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:57 rpi3 k3s[3072]: E1208 18:33:57.999090    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.100080    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.200910    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.302300    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.402672    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.503007    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: time="2020-12-08T18:33:58.567570945Z" level=info msg="Waiting for node rpi3: nodes \"rpi3\" not found"
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.604557    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.705283    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.806172    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:58 rpi3 k3s[3072]: E1208 18:33:58.907480    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.008020    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.113023    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.228752    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.330816    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.434107    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.537034    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.637700    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.738114    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.838735    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:33:59 rpi3 k3s[3072]: E1208 18:33:59.939357    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.041896    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.145699    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: time="2020-12-08T18:34:00.203731067Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.246757    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.348715    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.449649    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.550054    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: time="2020-12-08T18:34:00.604340165Z" level=info msg="Waiting for node rpi3: nodes \"rpi3\" not found"
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.652709    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.753144    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.855938    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:00 rpi3 k3s[3072]: E1208 18:34:00.956796    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.057000    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.161374    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.262092    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.362918    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.463555    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.564748    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.665035    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.766035    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.867003    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:01 rpi3 k3s[3072]: E1208 18:34:01.968229    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.069559    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.170040    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.271173    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.372425    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.475208    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.577051    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: time="2020-12-08T18:34:02.641751483Z" level=info msg="Waiting for node rpi3: nodes \"rpi3\" not found"
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.678934    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.781235    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.884516    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.917120    3072 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.917113    3072 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.920205    3072 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.926280    3072 secure_serving.go:197] Serving securely on 127.0.0.1:6444
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.926439    3072 tlsconfig.go:240] Starting DynamicServingCertificateController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.926645    3072 apiservice_controller.go:97] Starting APIServiceRegistrationController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.926728    3072 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.927054    3072 autoregister_controller.go:141] Starting autoregister controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.927121    3072 cache.go:32] Waiting for caches to sync for autoregister controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.931916    3072 customresource_discovery_controller.go:209] Starting DiscoveryController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.932264    3072 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.932365    3072 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.932741    3072 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.933190    3072 crdregistration_controller.go:111] Starting crd-autoregister controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.933262    3072 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.933438    3072 controller.go:86] Starting OpenAPI controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.933581    3072 naming_controller.go:291] Starting NamingConditionController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.933749    3072 establishing_controller.go:76] Starting EstablishingController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.934036    3072 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.934201    3072 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.934340    3072 crd_finalizer.go:266] Starting CRDFinalizer
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.939106    3072 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.939389    3072 available_controller.go:457] Starting AvailableConditionController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.939470    3072 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.939626    3072 controller.go:83] Starting OpenAPI AggregationController
Dec 08 18:34:02 rpi3 k3s[3072]: I1208 18:34:02.939401    3072 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Dec 08 18:34:02 rpi3 k3s[3072]: E1208 18:34:02.984866    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.091823    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.211051    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.292313    3072 trace.go:205] Trace[885658523]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm64) kubernetes/2532c10,client:127.0.0.1 (08-Dec-2020 18:33:53.285) (total time: 10006ms):
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[885658523]: [10.006453622s] [10.006453622s] END
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.323645    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: time="2020-12-08T18:34:03.386241102Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.445568    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.493942    3072 controller.go:228] failed to get node "rpi3" when trying to set owner ref to the node lease: nodes "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.510708    3072 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rpi3.164ed17649e590c1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"rpi3", UID:"rpi3", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"rpi3"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec0fa1bb3ed4c1, ext:33294385998, loc:(*time.Location)(0x65be780)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec0fa1bb3ed4c1, ext:33294385998, loc:(*time.Location)(0x65be780)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rpi3.164ed17649e590c1" is forbidden: not yet ready to handle request' (will not retry!)
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.527121    3072 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.527464    3072 cache.go:39] Caches are synced for autoregister controller
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.532656    3072 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.534442    3072 shared_informer.go:247] Caches are synced for crd-autoregister
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.540657    3072 cache.go:39] Caches are synced for AvailableConditionController controller
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.548786    3072 kubelet.go:2183] node "rpi3" not found
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.620688    3072 trace.go:205] Trace[249778785]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.19.4+k3s1 (linux/arm64) kubernetes/2532c10,client:127.0.0.1 (08-Dec-2020 18:33:54.155) (total time: 9465ms):
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[249778785]: ---"Object stored in database" 9463ms (18:34:00.619)
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[249778785]: [9.465070253s] [9.465070253s] END
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.629508    3072 kubelet_node_status.go:73] Successfully registered node rpi3
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.635735    3072 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
Dec 08 18:34:03 rpi3 k3s[3072]: E1208 18:34:03.657323    3072 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.246, ResourceVersion: 0, AdditionalErrorMsg:
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.920374    3072 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.924393    3072 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Dec 08 18:34:04 rpi3 k3s[3072]: I1208 18:34:04.179528    3072 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
Dec 08 18:34:04 rpi3 k3s[3072]: I1208 18:34:04.351756    3072 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
Dec 08 18:34:04 rpi3 k3s[3072]: I1208 18:34:04.351911    3072 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
Dec 08 18:34:04 rpi3 k3s[3072]: time="2020-12-08T18:34:04.431836422Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
Dec 08 18:34:04 rpi3 k3s[3072]: time="2020-12-08T18:34:04.676330952Z" level=info msg="Waiting for node rpi3 CIDR not assigned yet"
Dec 08 18:34:05 rpi3 k3s[3072]: time="2020-12-08T18:34:05.214830897Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Dec 08 18:34:05 rpi3 k3s[3072]: time="2020-12-08T18:34:05.473473287Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
Dec 08 18:34:06 rpi3 k3s[3072]: time="2020-12-08T18:34:06.508794519Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
Dec 08 18:34:06 rpi3 k3s[3072]: time="2020-12-08T18:34:06.701369744Z" level=info msg="Waiting for node rpi3 CIDR not assigned yet"

@brandond I have followed your advice and reconfigured the cluster with one of the Pi 4s as master. In the new configuration one Pi 4 is the master and the other is the worker node (the Pi 3B is unresponsive; I need to troubleshoot what happened to it).

I am getting the same issue as before: if I shut down the devices following the appropriate procedure and then reboot, the master comes up running but the worker goes into NotReady status and never recovers. Immediately after powering up the cluster I ran kubectl get nodes and both nodes appeared to be running, but a minute later the worker's status changed and it has been unresponsive since.

Below is the output from kubectl describe for both nodes. The master shows normal events, but on the worker no activity is registered whatsoever.

Otherwise everything seems to work normally. The pods assigned to the worker have been relaunched on the master.

PS: To give more input regarding the hardware, both Pis use high-speed SD cards.
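
As a rough sanity check of the cards' sustained write speed (bypassing the page cache; the path and sizes are just illustrative):

# write 256 MB directly to the SD card and report throughput
dd if=/dev/zero of=~/ddtest bs=1M count=256 oflag=direct
rm ~/ddtest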

Logs from pi4blue

Name:               pi4blue
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=pi4blue
                    k3s.io/internal-ip=192.168.1.142
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=pi4blue
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:ba:e4:25:87:a9"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.142
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: RCK5KK43QJVI3DZORFHFQOWFMNEM6XGEJNSFOMGJYIVHABPZMHBQ====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 08 Dec 2020 17:22:07 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi4blue
  AcquireTime:     <unset>
  RenewTime:       Tue, 08 Dec 2020 19:47:54 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 08 Dec 2020 19:41:41 +0100   Tue, 08 Dec 2020 19:41:41 +0100   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 08 Dec 2020 19:46:48 +0100   Tue, 08 Dec 2020 17:22:03 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 08 Dec 2020 19:46:48 +0100   Tue, 08 Dec 2020 17:22:03 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 08 Dec 2020 19:46:48 +0100   Tue, 08 Dec 2020 17:22:03 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 08 Dec 2020 19:46:48 +0100   Tue, 08 Dec 2020 17:22:17 +0100   KubeletReady                 kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.1.142
  Hostname:    pi4blue
Capacity:
  cpu:                4
  ephemeral-storage:  29278068Ki
  memory:             4051024Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  28481704529
  memory:             4051024Ki
  pods:               110
System Info:
  Machine ID:                 9003dfb694cb6cb8f8a5b1a95fc57f34
  System UUID:                9003dfb694cb6cb8f8a5b1a95fc57f34
  Boot ID:                    b86ebec4-cc7b-44b3-933e-435c8b1a133b
  Kernel Version:             4.19.75-v7l+
  OS Image:                   Raspbian GNU/Linux 10 (buster)
  Operating System:           linux
  Architecture:               arm
  Container Runtime Version:  containerd://1.4.1-k3s1
  Kubelet Version:            v1.19.4+k3s1
  Kube-Proxy Version:         v1.19.4+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://pi4blue
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  monitoring                  prometheus-adapter-585b57857b-s4t9n       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117m
  monitoring                  arm-exporter-6cpf8                        60m (1%)      120m (3%)   70Mi (1%)        140Mi (3%)     118m
  kube-system                 local-path-provisioner-7ff9579c6-q96pm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         145m
  monitoring                  node-exporter-xw9l8                       112m (2%)     270m (6%)   200Mi (5%)       220Mi (5%)     117m
  kube-system                 metrics-server-7b4f8b595-7wvkr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         145m
  kube-system                 svclb-traefik-5gxj8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         144m
  kube-system                 coredns-66c464876b-zw8rc                  100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     145m
  kube-system                 traefik-5dd496474-62jf8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         144m
  monitoring                  prometheus-k8s-0                          200m (5%)     200m (5%)   450Mi (11%)      50Mi (1%)      117m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                472m (11%)   590m (14%)
  memory             790Mi (19%)  580Mi (14%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type     Reason                   Age    From                 Message
  ----     ------                   ----   ----                 -------
  Normal   Starting                 6m21s  kubelet, pi4blue     Starting kubelet.
  Warning  InvalidDiskCapacity      6m21s  kubelet, pi4blue     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  6m20s  kubelet, pi4blue     Node pi4blue status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6m20s  kubelet, pi4blue     Node pi4blue status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     6m20s  kubelet, pi4blue     Node pi4blue status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  6m20s  kubelet, pi4blue     Updated Node Allocatable limit across pods
  Warning  Rebooted                 6m14s  kubelet, pi4blue     Node pi4blue has been rebooted, boot id: b86ebec4-cc7b-44b3-933e-435c8b1a133b
  Normal   Starting                 6m13s  kube-proxy, pi4blue  Starting kube-proxy.

Logs from pi4red

Name:               pi4red
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=pi4red
                    k3s.io/internal-ip=192.168.1.143
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=pi4red
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ae:96:67:9b:4b:92"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.143
                    k3s.io/node-args: ["agent"]
                    k3s.io/node-config-hash: PYSY2K536A6SKSWOUREBPNXZ4NS5NEVH5ZDOE6NXMV5ULKFARC4A====
                    k3s.io/node-env:
                      {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e","K3S_TOKEN":"********","K3S_U...
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 08 Dec 2020 17:26:54 +0100
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4red
  AcquireTime:     <unset>
  RenewTime:       Tue, 08 Dec 2020 18:28:07 +0100
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----                 ------    -----------------                 ------------------                ------              -------
  NetworkUnavailable   False     Tue, 08 Dec 2020 17:26:57 +0100   Tue, 08 Dec 2020 17:26:57 +0100   FlannelIsUp         Flannel is running on this node
  MemoryPressure       Unknown   Tue, 08 Dec 2020 18:26:32 +0100   Tue, 08 Dec 2020 19:42:52 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure         Unknown   Tue, 08 Dec 2020 18:26:32 +0100   Tue, 08 Dec 2020 19:42:52 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure          Unknown   Tue, 08 Dec 2020 18:26:32 +0100   Tue, 08 Dec 2020 19:42:52 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready                Unknown   Tue, 08 Dec 2020 18:26:32 +0100   Tue, 08 Dec 2020 19:42:52 +0100   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  192.168.1.143
  Hostname:    pi4red
Capacity:
  cpu:                4
  ephemeral-storage:  29278068Ki
  memory:             4051024Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  28481704529
  memory:             4051024Ki
  pods:               110
System Info:
  Machine ID:                 0c614fc95172029ca90987fe5fc57ee3
  System UUID:                0c614fc95172029ca90987fe5fc57ee3
  Boot ID:                    cc7e14dc-bffa-4d03-a837-6754d86c3e01
  Kernel Version:             4.19.75-v7l+
  OS Image:                   Raspbian GNU/Linux 10 (buster)
  Operating System:           linux
  Architecture:               arm
  Container Runtime Version:  containerd://1.4.1-k3s1
  Kubelet Version:            v1.19.4+k3s1
  Kube-Proxy Version:         v1.19.4+k3s1
PodCIDR:                      10.42.2.0/24
PodCIDRs:                     10.42.2.0/24
ProviderID:                   k3s://pi4red
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                   ------------  ----------  ---------------  -------------  ---
  monitoring                  arm-exporter-k5hxx                     60m (1%)      120m (3%)   70Mi (1%)        140Mi (3%)     119m
  monitoring                  node-exporter-hcwdr                    112m (2%)     270m (6%)   200Mi (5%)       220Mi (5%)     119m
  kube-system                 svclb-traefik-qql2l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         142m
  monitoring                  grafana-7cccfc9b5f-8dk6n               100m (2%)     200m (5%)   100Mi (2%)       200Mi (5%)     119m
  monitoring                  kube-state-metrics-6cb6df5d4-qvptw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119m
  monitoring                  alertmanager-main-0                    100m (2%)     100m (2%)   225Mi (5%)       25Mi (0%)      119m
  monitoring                  prometheus-operator-67755f959-m4xgr    100m (2%)     200m (5%)   100Mi (2%)       200Mi (5%)     119m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                472m (11%)   890m (22%)
  memory             695Mi (17%)  785Mi (19%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:              <none>

@agilob your response times are way too long. I suspect your SD card isn't quite up to the task - different vendors and models make way more of a difference than you might expect. You NEVER want to see times over a couple of seconds; at 10 seconds, internal Kubernetes components time out, causing fatal errors that terminate the process. Here are times from your log:

Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.292313    3072 trace.go:205] Trace[885658523]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm64) kubernetes/2532c10,client:127.0.0.1 (08-Dec-2020 18:33:53.285) (total time: 10006ms):
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[885658523]: [10.006453622s] [10.006453622s] END
--
Dec 08 18:34:03 rpi3 k3s[3072]: I1208 18:34:03.620688    3072 trace.go:205] Trace[249778785]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.19.4+k3s1 (linux/arm64) kubernetes/2532c10,client:127.0.0.1 (08-Dec-2020 18:33:54.155) (total time: 9465ms):
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[249778785]: ---"Object stored in database" 9463ms (18:34:00.619)
Dec 08 18:34:03 rpi3 k3s[3072]: Trace[249778785]: [9.465070253s] [9.465070253s] END
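
To scan a journal for slow writes yourself, something like the following should surface any trace that took a second or more; the grep pattern is just a sketch matched against the format above:

journalctl -u k3s --no-pager | grep -E 'total time: [0-9]{4,}ms'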

Here are worst-case times from my node:

root@pi03:~# journalctl -u k3s | grep -C 1 'Object stored'
Dec 08 09:48:18 pi03.lan.khaus k3s[2161]: Trace[1030085022]: ---"About to apply patch" 849ms (09:48:00.562)
Dec 08 09:48:18 pi03.lan.khaus k3s[2161]: Trace[1030085022]: ---"Object stored in database" 257ms (09:48:00.827)
Dec 08 09:48:18 pi03.lan.khaus k3s[2161]: Trace[1030085022]: [1.14311659s] [1.14311659s] END
--
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: I1208 09:48:19.529050    2161 trace.go:205] Trace[1425473484]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/traefik-74wwv,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (08-Dec-2020 09:48:18.960) (total time: 568ms):
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: Trace[1425473484]: ---"Object stored in database" 567ms (09:48:00.528)
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: Trace[1425473484]: [568.118123ms] [568.118123ms] END
--
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: I1208 09:48:19.616776    2161 trace.go:205] Trace[1254621603]: "Patch" url:/apis/apps/v1/namespaces/kube-system/daemonsets/svclb-traefik,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (08-Dec-2020 09:48:19.037) (total time: 578ms):
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: Trace[1254621603]: ---"Object stored in database" 555ms (09:48:00.604)
Dec 08 09:48:19 pi03.lan.khaus k3s[2161]: Trace[1254621603]: [578.514837ms] [578.514837ms] END
--
Dec 08 09:48:20 pi03.lan.khaus k3s[2161]: I1208 09:48:20.111368    2161 trace.go:205] Trace[950740528]: "Update" url:/api/v1/namespaces/kube-system/endpoints/rancher.io-local-path,user-agent:local-path-provisioner/v0.0.0 (linux/arm) kubernetes/$Format,client:10.42.0.5 (08-Dec-2020 09:48:19.022) (total time: 1088ms):
Dec 08 09:48:20 pi03.lan.khaus k3s[2161]: Trace[950740528]: ---"Object stored in database" 1087ms (09:48:00.110)
Dec 08 09:48:20 pi03.lan.khaus k3s[2161]: Trace[950740528]: [1.088999454s] [1.088999454s] END
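
If you want to rule the card in or out directly, a synthetic sync-write test is one way to check (fio must be installed, e.g. apt install fio; the file path and sizes here are arbitrary). This roughly mimics the small fsync-heavy writes the k3s datastore makes:

# 4k random writes with an fsync after each one, for 30 seconds
sudo fio --name=sd-sync-test --filename=/tmp/fio-test --size=64M --rw=randwrite --bs=4k --fsync=1 --runtime=30 --time_based

If the reported sync latencies are routinely in the hundreds of milliseconds, the card will struggle to keep up with the datastore.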

@Miguelerja the describe nodes output includes events but not logs. Can you get the actual k3s service logs from both nodes?

@brandond Sorry, I am pretty new at this. Here you go.

Logs from the master (journalctl -u k3s). The worker doesn't return any logs.

-- Logs begin at Thu 2019-02-14 10:11:59 UTC. --
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.218808     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.218937     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.219043     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.219147     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: E1208 19:21:36.219192     967 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.219749     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.219872     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.219998     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: W1208 19:21:36.220103     967 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 08 19:21:36 pi4blue k3s[967]: E1208 19:21:36.220147     967 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0


You need to add --no-pager to remove the stupid line cropping: journalctl -u k3s --no-pager -f

-f does follow, so you get the last lines as they come in (it keeps reading the log)

@brandond thanks for the info, I'm going to try a faster card now. This time k3s-server started, but the rpi ran out of memory on sudo kubectl get nodes; it's completely stuck and I had to unplug it from power.
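
If it wedges again, it can help to watch memory from a second SSH session while reproducing, and to check afterwards whether the kernel OOM killer fired (the previous-boot form only works if persistent journaling is enabled on the Pi):

watch -n 2 free -h
sudo dmesg | grep -i oom                  # current boot
sudo journalctl -k -b -1 | grep -i oom    # previous boot, needs persistent journal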

Thanks @agilob! I edited the last post.

There's still not really anything there on the server - can you do journalctl --no-pager -u k3s and grab the last day or so of logs? Attach them to your comment instead of pasting them inline.

On the agent you need to do journalctl --no-pager -u k3s-agent since agents have a different unit name.
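
For example, to capture roughly the last day into files you can attach (the file names are just examples):

journalctl --no-pager -u k3s --since "1 day ago" > k3s-server.log          # on the server
journalctl --no-pager -u k3s-agent --since "1 day ago" > k3s-agent.log    # on each agent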

@brandond I think this is it. For the master I have trimmed a large portion of the logs. Also, to get these logs I had to power the cluster back on, and I had no issues on this run; both nodes were running fine from the start...


Master

Starting Lightweight Kubernetes...
time="2020-12-08T20:00:41.459567374Z" level=info msg="Starting k3s v1.19.4+k3s1 (2532c10f)"
time="2020-12-08T20:00:41.465271146Z" level=info msg="Cluster bootstrap already complete"
time="2020-12-08T20:00:41.626080500Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2020-12-08T20:00:41.626319513Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2020-12-08T20:00:41.626927093Z" level=info msg="Database tables and indexes are up to date"
time="2020-12-08T20:00:41.654837355Z" level=info msg="Kine listening on unix://kine.sock"
time="2020-12-08T20:00:41.655609543Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I1208 20:00:41.663871     383 server.go:652] external host was not specified, using 192.168.1.142
I1208 20:00:41.671031     383 server.go:177] Version: v1.19.4+k3s1
I1208 20:00:41.832164     383 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1208 20:00:41.832381     383 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1208 20:00:41.844766     383 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1208 20:00:41.845191     383 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1208 20:00:42.083048     383 master.go:271] Using reconciler: lease
I1208 20:00:42.711343     383 trace.go:205] Trace[842076244]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (08-Dec-2020 20:00:41.916) (total time: 794ms):
Trace[842076244]: [794.311299ms] [794.311299ms] END
I1208 20:00:42.767088     383 trace.go:205] Trace[541992832]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (08-Dec-2020 20:00:41.910) (total time: 856ms):
Trace[541992832]: [856.030349ms] [856.030349ms] END
I1208 20:00:42.810764     383 trace.go:205] Trace[201699218]: "List etcd3" key:/configmaps,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (08-Dec-2020 20:00:42.245) (total time: 565ms):
Trace[201699218]: [565.589986ms] [565.589986ms] END
W1208 20:00:43.720808     383 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
time="2020-12-08T20:00:43.772279082Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:43 http: TLS handshake error from 192.168.1.143:56478: remote error: tls: bad certificate"
W1208 20:00:43.781152     383 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1208 20:00:43.849868     383 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1208 20:00:43.925990     383 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1208 20:00:43.940230     383 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1208 20:00:44.001034     383 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1208 20:00:44.082749     383 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W1208 20:00:44.082800     383 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I1208 20:00:44.158926     383 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1208 20:00:44.159002     383 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2020-12-08T20:00:44.220461603Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
I1208 20:00:44.221513     383 registry.go:173] Registering SelectorSpread plugin
I1208 20:00:44.221596     383 registry.go:173] Registering SelectorSpread plugin
time="2020-12-08T20:00:44.223230509Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2020-12-08T20:00:44.224873623Z" level=info msg="Waiting for API server to become available"
time="2020-12-08T20:00:44.231508784Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2020-12-08T20:00:44.232303193Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.142:6443 -t ${NODE_TOKEN}"
time="2020-12-08T20:00:44.238250701Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2020-12-08T20:00:44.238344866Z" level=info msg="Run: k3s kubectl"
time="2020-12-08T20:00:44.239004352Z" level=info msg="Module overlay was already loaded"
time="2020-12-08T20:00:44.239078017Z" level=info msg="Module nf_conntrack was already loaded"
time="2020-12-08T20:00:44.239119646Z" level=info msg="Module br_netfilter was already loaded"
time="2020-12-08T20:00:44.239158257Z" level=info msg="Module iptable_nat was already loaded"
time="2020-12-08T20:00:44.478099046Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:44 http: TLS handshake error from 127.0.0.1:36566: remote error: tls: bad certificate"
time="2020-12-08T20:00:44.531092831Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:44 http: TLS handshake error from 127.0.0.1:36574: remote error: tls: bad certificate"
time="2020-12-08T20:00:44.640570439Z" level=info msg="certificate CN=pi4blue signed by CN=k3s-server-ca@1607444507: notBefore=2020-12-08 16:21:47 +0000 UTC notAfter=2021-12-08 20:00:44 +0000 UTC"
time="2020-12-08T20:00:44.666837123Z" level=info msg="certificate CN=system:node:pi4blue,O=system:nodes signed by CN=k3s-client-ca@1607444507: notBefore=2020-12-08 16:21:47 +0000 UTC notAfter=2021-12-08 20:00:44 +0000 UTC"
time="2020-12-08T20:00:44.717752747Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2020-12-08T20:00:44.718270699Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2020-12-08T20:00:45.729418834Z" level=info msg="Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet"
time="2020-12-08T20:00:45.800686503Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:45 http: TLS handshake error from 192.168.1.143:56486: remote error: tls: bad certificate"
time="2020-12-08T20:00:45.831412244Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:45 http: TLS handshake error from 192.168.1.143:56494: remote error: tls: bad certificate"
time="2020-12-08T20:00:46.736625641Z" level=info msg="Containerd is now running"
time="2020-12-08T20:00:46.804881965Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2020-12-08T20:00:46.842689950Z" level=info msg="Handling backend connection request [pi4blue]"
time="2020-12-08T20:00:46.844590578Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2020-12-08T20:00:46.845167362Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pi4blue --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2020-12-08T20:00:46.847654607Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=pi4blue --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1208 20:00:46.874251     383 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
E1208 20:00:46.941446     383 node.go:125] Failed to retrieve node info: nodes "pi4blue" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
time="2020-12-08T20:00:46.946675875Z" level=info msg="Node CIDR assigned for: pi4blue"
I1208 20:00:46.947157     383 flannel.go:92] Determining IP address of default interface
I1208 20:00:46.962681     383 flannel.go:105] Using interface with name eth0 and address 192.168.1.142
time="2020-12-08T20:00:46.967559373Z" level=info msg="labels have already set on node: pi4blue"
I1208 20:00:46.982184     383 kube.go:300] Starting kube subnet manager
I1208 20:00:46.982310     383 kube.go:117] Waiting 10m0s for node controller to sync
I1208 20:00:46.985333     383 server.go:407] Version: v1.19.4+k3s1
I1208 20:00:47.166550     383 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
W1208 20:00:47.377084     383 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
W1208 20:00:47.379573     383 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W1208 20:00:47.380539     383 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
W1208 20:00:47.380985     383 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
W1208 20:00:47.381337     383 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
W1208 20:00:47.381670     383 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
W1208 20:00:47.382322     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
W1208 20:00:47.382680     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
W1208 20:00:47.382985     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
W1208 20:00:47.383299     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
E1208 20:00:47.383585     383 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
W1208 20:00:47.384105     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
W1208 20:00:47.384427     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
W1208 20:00:47.384723     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
W1208 20:00:47.385009     383 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
E1208 20:00:47.385266     383 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
I1208 20:00:47.447619     383 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I1208 20:00:47.448524     383 container_manager_linux.go:289] container manager verified user specified cgroup-root exists: []
I1208 20:00:47.448579     383 container_manager_linux.go:294] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I1208 20:00:47.448923     383 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I1208 20:00:47.448958     383 container_manager_linux.go:324] [topologymanager] Initializing Topology Manager with none policy
I1208 20:00:47.448978     383 container_manager_linux.go:329] Creating device plugin manager: true
W1208 20:00:47.450321     383 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
W1208 20:00:47.450704     383 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
I1208 20:00:47.454916     383 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I1208 20:00:47.455027     383 kubelet.go:273] Watching apiserver
I1208 20:00:47.505578     383 kuberuntime_manager.go:214] Container runtime containerd initialized, version: v1.4.1-k3s1, apiVersion: v1alpha2
I1208 20:00:47.512804     383 server.go:1148] Started kubelet
I1208 20:00:47.530007     383 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I1208 20:00:47.547403     383 server.go:152] Starting to listen on 0.0.0.0:10250
I1208 20:00:47.552273     383 server.go:424] Adding debug handlers to kubelet server.
I1208 20:00:47.555727     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 727275372759ff8d065346fe24adb4d5b4a7c0d7af4f32f397f941112b3ae9b4
I1208 20:00:47.556501     383 volume_manager.go:265] Starting Kubelet Volume Manager
I1208 20:00:47.556697     383 desired_state_of_world_populator.go:139] Desired state populator starts to run
E1208 20:00:47.549678     383 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E1208 20:00:47.596646     383 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I1208 20:00:47.674569     383 kuberuntime_manager.go:992] updating runtime config through cri with podcidr 10.42.0.0/24
W1208 20:00:47.685721     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/\""
I1208 20:00:47.707740     383 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I1208 20:00:47.711966     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5df56753c95197f4667afd97c992d8fbe214870a4ee4e6dd662b1a3757ec9deb
I1208 20:00:47.770088     383 kubelet_node_status.go:70] Attempting to register node pi4blue
I1208 20:00:47.787740     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6a8d42cb4b558e914a540c77a74e2e8e0b2ae9750ab480ded73deb3ddb8d6c9b
time="2020-12-08T20:00:47.855728712Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:47 http: TLS handshake error from 192.168.1.143:56498: remote error: tls: bad certificate"
time="2020-12-08T20:00:47.879283322Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:47 http: TLS handshake error from 192.168.1.143:56502: remote error: tls: bad certificate"
I1208 20:00:47.885833     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3c10612bd39c031110b0e52ec25c87b1fb7c1229dd71ab821924fe0b0fdb25b0
I1208 20:00:47.907383     383 cpu_manager.go:184] [cpumanager] starting with none policy
I1208 20:00:47.907465     383 cpu_manager.go:185] [cpumanager] reconciling every 10s
I1208 20:00:47.907536     383 state_mem.go:36] [cpumanager] initializing new in-memory state store
I1208 20:00:47.921483     383 state_mem.go:88] [cpumanager] updated default cpuset: ""
I1208 20:00:47.922053     383 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
I1208 20:00:47.922564     383 policy_none.go:43] [cpumanager] none policy: Start
W1208 20:00:47.928584     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods\""
I1208 20:00:47.929008     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c7ea1a277e47970e00ad6100d5d90843be6dfdc167163e861c9c4fce3a575b4e
W1208 20:00:47.935402     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable\""
W1208 20:00:47.939028     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort\""
I1208 20:00:47.959903     383 plugin_manager.go:114] Starting Kubelet Plugin Manager
W1208 20:00:47.963558     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/systemd/system.slice\""
I1208 20:00:47.968732     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 88a4bb391bc59b36f42e5959e1d681bf369d2957fa6d3bfbc8d25a158a32de1b
I1208 20:00:47.986462     383 kube.go:124] Node controller sync successful
I1208 20:00:47.987520     383 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I1208 20:00:48.022929     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d61056a7aab3de373be14c417022e240fc476e553fbf2cea659107d6b4fe57f6
E1208 20:00:48.058870     383 node.go:125] Failed to retrieve node info: nodes "pi4blue" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1208 20:00:48.089590     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: dc9a2432bb63e5b3f4f6df0cd2f22061c4fb738e63008ab802b7506ff763589e
I1208 20:00:48.145875     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1ddfdb5c93bd3cd9a72458d205c7203fe2321c441010561e51af5201a4ac2ca5
I1208 20:00:48.174415     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 32e8a6023c5c90222d0f7428752393afaca98ab917927ad4a17d18ad1f05a5e4
I1208 20:00:48.202821     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d9de322e4ef5c8826752c36860423d4d86ce5b0f27891ddc058cc5a4dbadaba
I1208 20:00:48.245911     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f946aa031d2c073ff74ce8933a8b1c267ec10d6a80684bdafa8e3ba23cec8561
I1208 20:00:48.293616     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4eeea2198f9bc4761d590c1067f146140a692b3e34173e7b1bfc31dc944dc61e
I1208 20:00:48.309208     383 status_manager.go:158] Starting to sync pod status with apiserver
I1208 20:00:48.319420     383 kubelet.go:1741] Starting kubelet main sync loop.
E1208 20:00:48.320549     383 kubelet.go:1765] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
I1208 20:00:48.325353     383 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8bd1ef0e7a026ec2441b49cb2f91ca0dab95e519f23a827d263a62a8bea4d04c
I1208 20:00:48.422122     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
I1208 20:00:48.473880     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.483629     383 reflector.go:127] object-"kube-system"/"metrics-server-token-6gjp9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6gjp9" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
E1208 20:00:48.511090     383 reflector.go:127] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:pi4blue" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
E1208 20:00:48.518854     383 reflector.go:127] object-"kube-system"/"coredns-token-sxq67": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sxq67" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
I1208 20:00:48.529485     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.543930     383 reflector.go:127] object-"kube-system"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:pi4blue" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
E1208 20:00:48.544604     383 reflector.go:127] object-"kube-system"/"local-path-provisioner-service-account-token-wkmr2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "local-path-provisioner-service-account-token-wkmr2" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
I1208 20:00:48.556994     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/d0863078-94a1-40fb-9f50-44170f25dd25-tmp-dir") pod "metrics-server-7b4f8b595-7wvkr" (UID: "d0863078-94a1-40fb-9f50-44170f25dd25")
I1208 20:00:48.557100     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-6gjp9" (UniqueName: "kubernetes.io/secret/d0863078-94a1-40fb-9f50-44170f25dd25-metrics-server-token-6gjp9") pod "metrics-server-7b4f8b595-7wvkr" (UID: "d0863078-94a1-40fb-9f50-44170f25dd25")
I1208 20:00:48.557185     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a01d2cc3-caac-4863-b936-63352516cf0b-config-volume") pod "coredns-66c464876b-zw8rc" (UID: "a01d2cc3-caac-4863-b936-63352516cf0b")
I1208 20:00:48.557268     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sxq67" (UniqueName: "kubernetes.io/secret/a01d2cc3-caac-4863-b936-63352516cf0b-coredns-token-sxq67") pod "coredns-66c464876b-zw8rc" (UID: "a01d2cc3-caac-4863-b936-63352516cf0b")
I1208 20:00:48.560699     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
W1208 20:00:48.566282     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod1e7d62e4-a24f-415b-85ae-0e89586e3fcc\""
I1208 20:00:48.570105     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.577018     383 reflector.go:127] object-"kube-system"/"traefik": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "traefik" is forbidden: User "system:node:pi4blue" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
E1208 20:00:48.583657     383 reflector.go:127] object-"kube-system"/"traefik-default-cert": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "traefik-default-cert" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
I1208 20:00:48.589564     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
W1208 20:00:48.596170     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/podf480099c-44c2-4ada-903c-8b045ea3108e\""
I1208 20:00:48.609012     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.616643     383 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable podb75f953b-fbde-471b-8a25-1358093ceade]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podb75f953b-fbde-471b-8a25-1358093ceade/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podb75f953b-fbde-471b-8a25-1358093ceade/cpu.cfs_period_us: permission denied
W1208 20:00:48.617637     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod1ffdff58-431c-4926-a3f9-a6c0791ce502\""
W1208 20:00:48.619604     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podb75f953b-fbde-471b-8a25-1358093ceade\""
I1208 20:00:48.631876     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
W1208 20:00:48.632094     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/podd0863078-94a1-40fb-9f50-44170f25dd25\""
E1208 20:00:48.640499     383 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e/cpu.cfs_period_us: permission denied
W1208 20:00:48.642708     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e\""
W1208 20:00:48.644659     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/poda01d2cc3-caac-4863-b936-63352516cf0b\""
I1208 20:00:48.646387     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
W1208 20:00:48.652656     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod57c0e206-b7b6-4adc-a553-3bdf14888b23\""
I1208 20:00:48.657687     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/1ffdff58-431c-4926-a3f9-a6c0791ce502-config") pod "traefik-5dd496474-62jf8" (UID: "1ffdff58-431c-4926-a3f9-a6c0791ce502")
I1208 20:00:48.658321     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl" (UniqueName: "kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-ssl") pod "traefik-5dd496474-62jf8" (UID: "1ffdff58-431c-4926-a3f9-a6c0791ce502")
I1208 20:00:48.658405     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-v9pqp" (UniqueName: "kubernetes.io/secret/f480099c-44c2-4ada-903c-8b045ea3108e-default-token-v9pqp") pod "svclb-traefik-5gxj8" (UID: "f480099c-44c2-4ada-903c-8b045ea3108e")
I1208 20:00:48.658550     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-config-volume") pod "local-path-provisioner-7ff9579c6-q96pm" (UID: "1e7d62e4-a24f-415b-85ae-0e89586e3fcc")
I1208 20:00:48.658636     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-wkmr2" (UniqueName: "kubernetes.io/secret/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-local-path-provisioner-service-account-token-wkmr2") pod "local-path-provisioner-7ff9579c6-q96pm" (UID: "1e7d62e4-a24f-415b-85ae-0e89586e3fcc")
I1208 20:00:48.658730     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "traefik-token-sl8r9" (UniqueName: "kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-traefik-token-sl8r9") pod "traefik-5dd496474-62jf8" (UID: "1ffdff58-431c-4926-a3f9-a6c0791ce502")
I1208 20:00:48.658792     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "arm-exporter-token-9txj8" (UniqueName: "kubernetes.io/secret/b75f953b-fbde-471b-8a25-1358093ceade-arm-exporter-token-9txj8") pod "arm-exporter-6cpf8" (UID: "b75f953b-fbde-471b-8a25-1358093ceade")
I1208 20:00:48.677332     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.680251     383 reflector.go:127] object-"kube-system"/"traefik-token-sl8r9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "traefik-token-sl8r9" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
W1208 20:00:48.681668     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77\""
I1208 20:00:48.727946     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
E1208 20:00:48.735632     383 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable podb7d55518-1bbc-44aa-809b-68614c48868a]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podb7d55518-1bbc-44aa-809b-68614c48868a/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podb7d55518-1bbc-44aa-809b-68614c48868a/cpu.cfs_period_us: permission denied
W1208 20:00:48.737362     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podb7d55518-1bbc-44aa-809b-68614c48868a\""
I1208 20:00:48.759263     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-node-rsrc-use" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-rsrc-use") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.760249     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-exporter-token-svqfn" (UniqueName: "kubernetes.io/secret/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-node-exporter-token-svqfn") pod "node-exporter-xw9l8" (UID: "f6c8a21f-6d4f-4b51-987b-9e3a5176e07e")
I1208 20:00:48.760932     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-cluster-total" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-cluster-total") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.761718     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-persistentvolumesusage" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-persistentvolumesusage") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.762442     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-namespace-by-workload" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-workload") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.763196     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-statefulset" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-statefulset") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.763958     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-config") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
I1208 20:00:48.764642     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-apiserver" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-apiserver") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.765241     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
I1208 20:00:48.765991     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-k8s-db" (UniqueName: "kubernetes.io/empty-dir/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-db") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
I1208 20:00:48.766720     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-cluster" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-cluster") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.767344     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-workload" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workload") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.768022     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-prometheus-dashboard" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-dashboard") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.768586     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-proxy" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-proxy") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.769524     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-traefik-dashboard" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-traefik-dashboard") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.770211     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-workload-total" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-workload-total") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.771714     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboards" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboards") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.772313     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-prometheus" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.772903     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-out" (UniqueName: "kubernetes.io/empty-dir/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-config-out") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
I1208 20:00:48.773414     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-datasources" (UniqueName: "kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-datasources") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.773574     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-proc") pod "node-exporter-xw9l8" (UID: "f6c8a21f-6d4f-4b51-987b-9e3a5176e07e")
I1208 20:00:48.774340     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "volume-serving-cert" (UniqueName: "kubernetes.io/empty-dir/57c0e206-b7b6-4adc-a553-3bdf14888b23-volume-serving-cert") pod "prometheus-adapter-585b57857b-s4t9n" (UID: "57c0e206-b7b6-4adc-a553-3bdf14888b23")
I1208 20:00:48.774463     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-kubelet" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubelet") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.775656     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-coredns-dashboard" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-coredns-dashboard") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.776237     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-node-cluster-rsrc-use" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-cluster-rsrc-use") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.776761     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-nodes" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-nodes") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.777340     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-config" (UniqueName: "kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-config") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.777824     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-adapter-token-v6b7x" (UniqueName: "kubernetes.io/secret/57c0e206-b7b6-4adc-a553-3bdf14888b23-prometheus-adapter-token-v6b7x") pod "prometheus-adapter-585b57857b-s4t9n" (UID: "57c0e206-b7b6-4adc-a553-3bdf14888b23")
I1208 20:00:48.778635     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-kubernetes-cluster-dashboard" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubernetes-cluster-dashboard") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.779203     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-namespace" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-namespace") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.779937     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-workloads-namespace" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workloads-namespace") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.780609     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmpfs" (UniqueName: "kubernetes.io/empty-dir/57c0e206-b7b6-4adc-a553-3bdf14888b23-tmpfs") pod "prometheus-adapter-585b57857b-s4t9n" (UID: "57c0e206-b7b6-4adc-a553-3bdf14888b23")
I1208 20:00:48.781156     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/57c0e206-b7b6-4adc-a553-3bdf14888b23-config") pod "prometheus-adapter-585b57857b-s4t9n" (UID: "57c0e206-b7b6-4adc-a553-3bdf14888b23")
I1208 20:00:48.782261     383 topology_manager.go:233] [topologymanager] Topology Admit Handler
I1208 20:00:48.783251     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-namespace-by-pod" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-pod") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.783796     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-token-h4pjq" (UniqueName: "kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-token-h4pjq") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.784248     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-node" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-node") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.784696     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-pod-total" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-pod-total") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.785235     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-root") pod "node-exporter-xw9l8" (UID: "f6c8a21f-6d4f-4b51-987b-9e3a5176e07e")
I1208 20:00:48.785349     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tls-assets" (UniqueName: "kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-tls-assets") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
I1208 20:00:48.785498     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-prometheus-remote-write" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-remote-write") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.785579     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-scheduler" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-scheduler") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.785717     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-sys") pod "node-exporter-xw9l8" (UID: "f6c8a21f-6d4f-4b51-987b-9e3a5176e07e")
I1208 20:00:48.785876     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-storage" (UniqueName: "kubernetes.io/empty-dir/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-storage") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.785980     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-controller-manager" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-controller-manager") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.786047     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-dashboard-k8s-resources-pod" (UniqueName: "kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-pod") pod "grafana-7cccfc9b5f-l2crq" (UID: "b7d55518-1bbc-44aa-809b-68614c48868a")
I1208 20:00:48.786159     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-k8s-token-zhgc4" (UniqueName: "kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-token-zhgc4") pod "prometheus-k8s-0" (UID: "259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77")
W1208 20:00:48.789377     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod072a9c2f-9d13-47c1-8d77-d2fe3544912c\""
W1208 20:00:48.807012     383 pod_container_deletor.go:79] Container "61be21f1495aa96e00b2fb6cdfbbeb252930e94d0ef3f2c2cfdd395d9f047a6e" not found in pod's containers
W1208 20:00:48.807166     383 pod_container_deletor.go:79] Container "b066fa7d34f080a77222b78c2513d4dbd5a08475e069e556dbf9e368fb133d82" not found in pod's containers
W1208 20:00:48.807476     383 pod_container_deletor.go:79] Container "965f55daa28ec0483c00cb2e97a4be3f5bd15bddc6b2477c56699b79f9ff28df" not found in pod's containers
W1208 20:00:48.808018     383 pod_container_deletor.go:79] Container "917e83c710a0c112de62b2a94f671fe36286a30077fdd2551f79f3629e2e11da" not found in pod's containers
W1208 20:00:48.808132     383 pod_container_deletor.go:79] Container "8bd1ef0e7a026ec2441b49cb2f91ca0dab95e519f23a827d263a62a8bea4d04c" not found in pod's containers
W1208 20:00:48.808546     383 pod_container_deletor.go:79] Container "18765ec520bd54f1660b0937751e57329fab74950ec771bdb7cbae649f0dc657" not found in pod's containers
W1208 20:00:48.808657     383 pod_container_deletor.go:79] Container "a18624d0e657d059bd8252dd0f3cd4ffebbcca4b9b4dfb043f68390f8848cce5" not found in pod's containers
W1208 20:00:48.809119     383 pod_container_deletor.go:79] Container "d6d1acce9c598aa8ec871a15389ad5c1e1e710075b87b83ae0946d81f8ede003" not found in pod's containers
W1208 20:00:48.809249     383 pod_container_deletor.go:79] Container "1f592cc6a5a765b4332bc70940aa71d554020fd84cbf5a00e09751832f70d888" not found in pod's containers
W1208 20:00:48.809491     383 pod_container_deletor.go:79] Container "ac2eab9abd3c8e8871341b73f03744568781a776e96a95f4eb48588619b5ba34" not found in pod's containers
W1208 20:00:48.809596     383 pod_container_deletor.go:79] Container "bef5f72f92606aa3ceae737c24171aa7d239788591df98a29fea1aaa35d02f99" not found in pod's containers
W1208 20:00:48.809961     383 pod_container_deletor.go:79] Container "aca1c2670dd1ccdcba8f722eff9940bcc448864581676c59291c25e49ba7451f" not found in pod's containers
W1208 20:00:48.810068     383 pod_container_deletor.go:79] Container "c2a9d29f061b27cb6140dce4d5f7c64e39e61b82ff56162f5cab146d8f7c1740" not found in pod's containers
W1208 20:00:48.810615     383 pod_container_deletor.go:79] Container "64499662481901a4b9d8721ffa447ad15a6dfc084ff9ac709f5fdd595b2c742b" not found in pod's containers
W1208 20:00:48.810745     383 pod_container_deletor.go:79] Container "649f054bb8b41346cd4eb17801e2d813abddd730f28f030a2fca2b1850338e77" not found in pod's containers
W1208 20:00:48.811144     383 pod_container_deletor.go:79] Container "df0b6773ba0ae667b12e1fc124382e6598799053b4024677a6be147859384659" not found in pod's containers
W1208 20:00:48.811339     383 pod_container_deletor.go:79] Container "3893ee82d87708447f2fa7a3115281c25babb0c118a7d06300c3c42a6ce47b4f" not found in pod's containers
W1208 20:00:48.811504     383 pod_container_deletor.go:79] Container "f0c2dc32ae6678882a8c09627a5b946db0382895475ce0b7e4176d7563fd727e" not found in pod's containers
W1208 20:00:48.811711     383 pod_container_deletor.go:79] Container "62314d87dd79328c6d73aff0261796a06f27cf2b48124df569246bfb23fa6aa5" not found in pod's containers
W1208 20:00:48.811949     383 pod_container_deletor.go:79] Container "31016e7256394d7ebf7fe40b0e76239c174c25da358f07bbb4c974eddf3beda5" not found in pod's containers
W1208 20:00:48.812103     383 pod_container_deletor.go:79] Container "4850a9fd897cbf0b03b161a3abe3dca614f42af7dca90750a292e945b7480aa9" not found in pod's containers
W1208 20:00:48.812272     383 pod_container_deletor.go:79] Container "10dc3a1361a59f9432d3544f2beeed65c028f68883d0eee1588cdecc382802b2" not found in pod's containers
W1208 20:00:48.812434     383 pod_container_deletor.go:79] Container "e0d77608cee1103a365b0952d11ec90c452db4735d76c9b2e7a53ce2b4169a57" not found in pod's containers
W1208 20:00:48.821990     383 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pode27ac682-b971-4edd-a9c0-81554fb77008\""
I1208 20:00:48.888994     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-state-metrics-token-9fhbs" (UniqueName: "kubernetes.io/secret/072a9c2f-9d13-47c1-8d77-d2fe3544912c-kube-state-metrics-token-9fhbs") pod "kube-state-metrics-6cb6df5d4-mlcng" (UID: "072a9c2f-9d13-47c1-8d77-d2fe3544912c")
I1208 20:00:48.889208     383 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-operator-token-rqxtx" (UniqueName: "kubernetes.io/secret/e27ac682-b971-4edd-a9c0-81554fb77008-prometheus-operator-token-rqxtx") pod "prometheus-operator-67755f959-vvmqz" (UID: "e27ac682-b971-4edd-a9c0-81554fb77008")
I1208 20:00:48.993304     383 reconciler.go:157] Reconciler: start to sync state
E1208 20:00:49.079078     383 reflector.go:127] object-"kube-system"/"default-token-v9pqp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v9pqp" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4blue' and this object
E1208 20:00:49.279079     383 reflector.go:127] object-"monitoring"/"arm-exporter-token-9txj8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "arm-exporter-token-9txj8" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4blue' and this object
E1208 20:00:49.481460     383 reflector.go:127] object-"monitoring"/"node-exporter-token-svqfn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-exporter-token-svqfn" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4blue' and this object
E1208 20:00:49.662081     383 secret.go:195] Couldn't get secret kube-system/metrics-server-token-6gjp9: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.662196     383 secret.go:195] Couldn't get secret kube-system/coredns-token-sxq67: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.662319     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d0863078-94a1-40fb-9f50-44170f25dd25-metrics-server-token-6gjp9 podName:d0863078-94a1-40fb-9f50-44170f25dd25 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.162241291 +0000 UTC m=+37.519693189 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"metrics-server-token-6gjp9\" (UniqueName: \"kubernetes.io/secret/d0863078-94a1-40fb-9f50-44170f25dd25-metrics-server-token-6gjp9\") pod \"metrics-server-7b4f8b595-7wvkr\" (UID: \"d0863078-94a1-40fb-9f50-44170f25dd25\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.662397     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/a01d2cc3-caac-4863-b936-63352516cf0b-coredns-token-sxq67 podName:a01d2cc3-caac-4863-b936-63352516cf0b nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.162346585 +0000 UTC m=+37.519798465 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-sxq67\" (UniqueName: \"kubernetes.io/secret/a01d2cc3-caac-4863-b936-63352516cf0b-coredns-token-sxq67\") pod \"coredns-66c464876b-zw8rc\" (UID: \"a01d2cc3-caac-4863-b936-63352516cf0b\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.663530     383 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.663742     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/a01d2cc3-caac-4863-b936-63352516cf0b-config-volume podName:a01d2cc3-caac-4863-b936-63352516cf0b nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.163665077 +0000 UTC m=+37.521117012 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01d2cc3-caac-4863-b936-63352516cf0b-config-volume\") pod \"coredns-66c464876b-zw8rc\" (UID: \"a01d2cc3-caac-4863-b936-63352516cf0b\") : failed to sync configmap cache: timed out waiting for the condition"
I1208 20:00:49.664697     383 request.go:645] Throttling request took 1.029728476s, request: GET:https://127.0.0.1:6443/api/v1/namespaces/monitoring/secrets?fieldSelector=metadata.name%3Dprometheus-adapter-token-v6b7x&limit=500&resourceVersion=0
E1208 20:00:49.678180     383 reflector.go:127] object-"monitoring"/"prometheus-adapter-token-v6b7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "prometheus-adapter-token-v6b7x" is forbidden: User "system:node:pi4blue" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4blue' and this object
E1208 20:00:49.770155     383 secret.go:195] Couldn't get secret monitoring/arm-exporter-token-9txj8: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.770232     383 configmap.go:200] Couldn't get configMap kube-system/local-path-config: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.770379     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/b75f953b-fbde-471b-8a25-1358093ceade-arm-exporter-token-9txj8 podName:b75f953b-fbde-471b-8a25-1358093ceade nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.270309873 +0000 UTC m=+37.627761771 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"arm-exporter-token-9txj8\" (UniqueName: \"kubernetes.io/secret/b75f953b-fbde-471b-8a25-1358093ceade-arm-exporter-token-9txj8\") pod \"arm-exporter-6cpf8\" (UID: \"b75f953b-fbde-471b-8a25-1358093ceade\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.770461     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-config-volume podName:1e7d62e4-a24f-415b-85ae-0e89586e3fcc nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.270409408 +0000 UTC m=+37.627861288 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-config-volume\") pod \"local-path-provisioner-7ff9579c6-q96pm\" (UID: \"1e7d62e4-a24f-415b-85ae-0e89586e3fcc\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.775677     383 secret.go:195] Couldn't get secret kube-system/local-path-provisioner-service-account-token-wkmr2: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.775912     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-local-path-provisioner-service-account-token-wkmr2 podName:1e7d62e4-a24f-415b-85ae-0e89586e3fcc nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.275816871 +0000 UTC m=+37.633268788 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"local-path-provisioner-service-account-token-wkmr2\" (UniqueName: \"kubernetes.io/secret/1e7d62e4-a24f-415b-85ae-0e89586e3fcc-local-path-provisioner-service-account-token-wkmr2\") pod \"local-path-provisioner-7ff9579c6-q96pm\" (UID: \"1e7d62e4-a24f-415b-85ae-0e89586e3fcc\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.779242     383 configmap.go:200] Couldn't get configMap kube-system/traefik: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.779561     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/1ffdff58-431c-4926-a3f9-a6c0791ce502-config podName:1ffdff58-431c-4926-a3f9-a6c0791ce502 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.279387076 +0000 UTC m=+37.636839030 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffdff58-431c-4926-a3f9-a6c0791ce502-config\") pod \"traefik-5dd496474-62jf8\" (UID: \"1ffdff58-431c-4926-a3f9-a6c0791ce502\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.787029     383 secret.go:195] Couldn't get secret kube-system/traefik-default-cert: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.787201     383 secret.go:195] Couldn't get secret kube-system/default-token-v9pqp: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.787244     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-ssl podName:1ffdff58-431c-4926-a3f9-a6c0791ce502 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.287172917 +0000 UTC m=+37.644624852 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"ssl\" (UniqueName: \"kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-ssl\") pod \"traefik-5dd496474-62jf8\" (UID: \"1ffdff58-431c-4926-a3f9-a6c0791ce502\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.787481     383 secret.go:195] Couldn't get secret kube-system/traefik-token-sl8r9: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.787499     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f480099c-44c2-4ada-903c-8b045ea3108e-default-token-v9pqp podName:f480099c-44c2-4ada-903c-8b045ea3108e nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.287431059 +0000 UTC m=+37.644882957 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-v9pqp\" (UniqueName: \"kubernetes.io/secret/f480099c-44c2-4ada-903c-8b045ea3108e-default-token-v9pqp\") pod \"svclb-traefik-5gxj8\" (UID: \"f480099c-44c2-4ada-903c-8b045ea3108e\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.787645     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-traefik-token-sl8r9 podName:1ffdff58-431c-4926-a3f9-a6c0791ce502 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.287589352 +0000 UTC m=+37.645041325 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"traefik-token-sl8r9\" (UniqueName: \"kubernetes.io/secret/1ffdff58-431c-4926-a3f9-a6c0791ce502-traefik-token-sl8r9\") pod \"traefik-5dd496474-62jf8\" (UID: \"1ffdff58-431c-4926-a3f9-a6c0791ce502\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.878846     383 reflector.go:127] object-"monitoring"/"adapter-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "adapter-config" is forbidden: User "system:node:pi4blue" cannot list resource "configmaps" in API group "" in the namespace "monitoring": no relationship found between node 'pi4blue' and this object
E1208 20:00:49.892179     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-prometheus: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.892405     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.39232017 +0000 UTC m=+37.749772068 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-prometheus\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.892665     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-prometheus-dashboard: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.892953     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-dashboard podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.392776457 +0000 UTC m=+37.750228336 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-prometheus-dashboard\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-dashboard\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.893045     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-proxy: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.893283     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-proxy podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.393216651 +0000 UTC m=+37.750668549 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-proxy\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-proxy\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.893453     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-traefik-dashboard: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.893897     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-traefik-dashboard podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.393594792 +0000 UTC m=+37.751046690 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-traefik-dashboard\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-traefik-dashboard\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.893989     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-workload-total: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.894227     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-workload-total podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.394154854 +0000 UTC m=+37.751606937 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-workload-total\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-workload-total\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.894435     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboards: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.894715     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboards podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.394646696 +0000 UTC m=+37.752098575 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboards\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboards\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.895178     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-prometheus-remote-write: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.895723     383 secret.go:195] Couldn't get secret monitoring/node-exporter-token-svqfn: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.895878     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-persistentvolumesusage: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.895980     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-namespace-by-workload: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896074     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-statefulset: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896179     383 secret.go:195] Couldn't get secret monitoring/prometheus-k8s: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.896268     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-apiserver: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896361     383 configmap.go:200] Couldn't get configMap monitoring/prometheus-k8s-rulefiles-0: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896526     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-controller-manager: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896648     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-pod: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.896767     383 secret.go:195] Couldn't get secret monitoring/prometheus-k8s-token-zhgc4: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.896879     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-cluster-total: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897108     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-scheduler: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897266     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-cluster: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897378     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-node-rsrc-use: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897480     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-workloads-namespace: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897588     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-kubelet: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.897885     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-nodes: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.898296     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-kubernetes-cluster-dashboard: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.898564     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-node: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.898918     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-workload: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.899147     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-coredns-dashboard: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.899360     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-node-cluster-rsrc-use: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.899666     383 secret.go:195] Couldn't get secret monitoring/grafana-config: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.899923     383 secret.go:195] Couldn't get secret monitoring/prometheus-adapter-token-v6b7x: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.900187     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-k8s-resources-namespace: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.901197     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-pod-total: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.902334     383 configmap.go:200] Couldn't get configMap monitoring/adapter-config: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.902669     383 configmap.go:200] Couldn't get configMap monitoring/grafana-dashboard-namespace-by-pod: failed to sync configmap cache: timed out waiting for the condition
E1208 20:00:49.903017     383 secret.go:195] Couldn't get secret monitoring/grafana-token-h4pjq: failed to sync secret cache: timed out waiting for the condition
time="2020-12-08T20:00:49.903237650Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:49 http: TLS handshake error from 192.168.1.143:56506: remote error: tls: bad certificate"
E1208 20:00:49.903574     383 secret.go:195] Couldn't get secret monitoring/prometheus-k8s-tls-assets: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.904346     383 secret.go:195] Couldn't get secret monitoring/grafana-datasources: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.904790     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-remote-write podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.395410291 +0000 UTC m=+37.752862208 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-prometheus-remote-write\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-prometheus-remote-write\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.905043     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-node-exporter-token-svqfn podName:f6c8a21f-6d4f-4b51-987b-9e3a5176e07e nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.404912449 +0000 UTC m=+37.762364347 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-svqfn\" (UniqueName: \"kubernetes.io/secret/f6c8a21f-6d4f-4b51-987b-9e3a5176e07e-node-exporter-token-svqfn\") pod \"node-exporter-xw9l8\" (UID: \"f6c8a21f-6d4f-4b51-987b-9e3a5176e07e\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.905252     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-persistentvolumesusage podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.405088723 +0000 UTC m=+37.762540584 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-persistentvolumesusage\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-persistentvolumesusage\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.905360     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-workload podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.405303015 +0000 UTC m=+37.762754931 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-namespace-by-workload\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-workload\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.905582     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-statefulset podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.405472715 +0000 UTC m=+37.762924613 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-statefulset\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-statefulset\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.905816     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-config podName:259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.405753154 +0000 UTC m=+37.763205052 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config\" (UniqueName: \"kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-config\") pod \"prometheus-k8s-0\" (UID: \"259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.905992     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-apiserver podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.405930187 +0000 UTC m=+37.763382159 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-apiserver\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-apiserver\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.906100     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-rulefiles-0 podName:259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.4060175 +0000 UTC m=+37.763469380 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.906386     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-controller-manager podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.406254088 +0000 UTC m=+37.763705986 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-controller-manager\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-controller-manager\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.906581     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-pod podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.40642314 +0000 UTC m=+37.763875001 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-pod\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-pod\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.906675     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-token-zhgc4 podName:259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.406620099 +0000 UTC m=+37.764072015 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"prometheus-k8s-token-zhgc4\" (UniqueName: \"kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-prometheus-k8s-token-zhgc4\") pod \"prometheus-k8s-0\" (UID: \"259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.906748     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-cluster-total podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.406698857 +0000 UTC m=+37.764150736 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-cluster-total\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-cluster-total\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.907016     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-scheduler podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.40688989 +0000 UTC m=+37.764341788 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-scheduler\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-scheduler\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.907109     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-cluster podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.407052238 +0000 UTC m=+37.764504099 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-cluster\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-cluster\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.907183     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-rsrc-use podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.407133088 +0000 UTC m=+37.764584968 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-node-rsrc-use\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-rsrc-use\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.913269     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workloads-namespace podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.407207457 +0000 UTC m=+37.764659337 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-workloads-namespace\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workloads-namespace\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.913457     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubelet podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.413338017 +0000 UTC m=+37.770789896 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-kubelet\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubelet\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.913727     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-nodes podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.413496625 +0000 UTC m=+37.770948523 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-nodes\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-nodes\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.913978     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubernetes-cluster-dashboard podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.41378373 +0000 UTC m=+37.771235609 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-kubernetes-cluster-dashboard\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-kubernetes-cluster-dashboard\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.914097     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-node podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414035669 +0000 UTC m=+37.771487549 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-node\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-node\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.914249     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workload podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414191425 +0000 UTC m=+37.771643323 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-workload\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-workload\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.914361     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-coredns-dashboard podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414278072 +0000 UTC m=+37.771729951 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-coredns-dashboard\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-coredns-dashboard\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.914572     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-cluster-rsrc-use podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414469716 +0000 UTC m=+37.771921632 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-node-cluster-rsrc-use\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-node-cluster-rsrc-use\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.914718     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-config podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414615157 +0000 UTC m=+37.772067037 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-config\" (UniqueName: \"kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-config\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.914972     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/57c0e206-b7b6-4adc-a553-3bdf14888b23-prometheus-adapter-token-v6b7x podName:57c0e206-b7b6-4adc-a553-3bdf14888b23 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.414883689 +0000 UTC m=+37.772335716 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"prometheus-adapter-token-v6b7x\" (UniqueName: \"kubernetes.io/secret/57c0e206-b7b6-4adc-a553-3bdf14888b23-prometheus-adapter-token-v6b7x\") pod \"prometheus-adapter-585b57857b-s4t9n\" (UID: \"57c0e206-b7b6-4adc-a553-3bdf14888b23\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.915108     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-namespace podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415007853 +0000 UTC m=+37.772459714 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-k8s-resources-namespace\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-k8s-resources-namespace\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.915194     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-pod-total podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415141813 +0000 UTC m=+37.772593674 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-pod-total\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-pod-total\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.915273     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/57c0e206-b7b6-4adc-a553-3bdf14888b23-config podName:57c0e206-b7b6-4adc-a553-3bdf14888b23 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415222571 +0000 UTC m=+37.772674432 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c0e206-b7b6-4adc-a553-3bdf14888b23-config\") pod \"prometheus-adapter-585b57857b-s4t9n\" (UID: \"57c0e206-b7b6-4adc-a553-3bdf14888b23\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.915383     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-pod podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415302439 +0000 UTC m=+37.772754300 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-dashboard-namespace-by-pod\" (UniqueName: \"kubernetes.io/configmap/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-dashboard-namespace-by-pod\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync configmap cache: timed out waiting for the condition"
E1208 20:00:49.915490     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-token-h4pjq podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415436677 +0000 UTC m=+37.772888557 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-token-h4pjq\" (UniqueName: \"kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-token-h4pjq\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.915566     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-tls-assets podName:259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415516157 +0000 UTC m=+37.772968018 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"tls-assets\" (UniqueName: \"kubernetes.io/secret/259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.915642     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-datasources podName:b7d55518-1bbc-44aa-809b-68614c48868a nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.415593415 +0000 UTC m=+37.773045294 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"grafana-datasources\" (UniqueName: \"kubernetes.io/secret/b7d55518-1bbc-44aa-809b-68614c48868a-grafana-datasources\") pod \"grafana-7cccfc9b5f-l2crq\" (UID: \"b7d55518-1bbc-44aa-809b-68614c48868a\") : failed to sync secret cache: timed out waiting for the condition"
time="2020-12-08T20:00:49.919203360Z" level=info msg="Cluster-Http-Server 2020/12/08 20:00:49 http: TLS handshake error from 192.168.1.143:56510: remote error: tls: bad certificate"
E1208 20:00:49.992762     383 secret.go:195] Couldn't get secret monitoring/kube-state-metrics-token-9fhbs: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.993029     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/072a9c2f-9d13-47c1-8d77-d2fe3544912c-kube-state-metrics-token-9fhbs podName:072a9c2f-9d13-47c1-8d77-d2fe3544912c nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.492953756 +0000 UTC m=+37.850405654 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-state-metrics-token-9fhbs\" (UniqueName: \"kubernetes.io/secret/072a9c2f-9d13-47c1-8d77-d2fe3544912c-kube-state-metrics-token-9fhbs\") pod \"kube-state-metrics-6cb6df5d4-mlcng\" (UID: \"072a9c2f-9d13-47c1-8d77-d2fe3544912c\") : failed to sync secret cache: timed out waiting for the condition"
E1208 20:00:49.993144     383 secret.go:195] Couldn't get secret monitoring/prometheus-operator-token-rqxtx: failed to sync secret cache: timed out waiting for the condition
E1208 20:00:49.993284     383 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/e27ac682-b971-4edd-a9c0-81554fb77008-prometheus-operator-token-rqxtx podName:e27ac682-b971-4edd-a9c0-81554fb77008 nodeName:}" failed. No retries permitted until 2020-12-08 20:00:50.493227639 +0000 UTC m=+37.850679556 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"prometheus-operator-token-rqxtx\" (UniqueName: \"kubernetes.io/secret/e27ac682-b971-4edd-a9c0-81554fb77008-prometheus-operator-token-rqxtx\") pod \"prometheus-operator-67755f959-vvmqz\" (UID: \"e27ac682-b971-4edd-a9c0-81554fb77008\") : failed to sync secret cache: timed out waiting for the condition"
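
Two patterns stand out in the log above. Every MountVolume.SetUp failure reduces to the same "failed to sync secret cache / configmap cache: timed out waiting for the condition", i.e. the kubelet waiting on an API server that is still not serving reliably. On top of that, the reflector errors show the node identity system:node:pi4blue being rejected by the node authorizer ("no relationship found between node 'pi4blue' and this object"), and the Cluster-Http-Server reports "tls: bad certificate" handshakes from 192.168.1.143. Since the Pis have no RTC, a large clock jump after a cold boot could plausibly leave the agent certificates outside their validity window; a quick way to rule that out (paths assumed from a default k3s install, matching the kubelet flags further down):

# run on every node and compare; any large skew is suspect
date -u

# on a worker, check the validity window of the kubelet serving certificate
sudo openssl x509 -noout -dates -in /var/lib/rancher/k3s/agent/serving-kubelet.crt

If the clocks agree and the certificate dates are sane, that points back at the node registration/authorization path rather than TLS itself.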


Worker node logs (pi4red):

Dec 08 20:36:04 pi4red k3s[375]: time="2020-12-08T20:36:04.138859449Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Dec 08 20:36:04 pi4red k3s[375]: time="2020-12-08T20:36:04.139029743Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Dec 08 20:36:05 pi4red k3s[375]: time="2020-12-08T20:36:05.173287900Z" level=info msg="Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet"
Dec 08 20:36:06 pi4red k3s[375]: time="2020-12-08T20:36:06.176968621Z" level=info msg="Containerd is now running"
Dec 08 20:36:06 pi4red k3s[375]: time="2020-12-08T20:36:06.871289414Z" level=info msg="Connecting to proxy" url="wss://192.168.1.142:6443/v1-k3s/connect"
Dec 08 20:36:06 pi4red k3s[375]: time="2020-12-08T20:36:06.968930750Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Dec 08 20:36:06 pi4red k3s[375]: time="2020-12-08T20:36:06.969678663Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pi4red --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Dec 08 20:36:06 pi4red k3s[375]: time="2020-12-08T20:36:06.978223442Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=pi4red --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Dec 08 20:36:06 pi4red k3s[375]: W1208 20:36:06.980347     375 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Dec 08 20:36:06 pi4red k3s[375]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Dec 08 20:36:06 pi4red k3s[375]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.102017     375 server.go:407] Version: v1.19.4+k3s1
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.284870     375 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.505501     375 node.go:136] Successfully retrieved node IP: 192.168.1.143
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.505595     375 server_others.go:112] kube-proxy node IP is an IPv4 address (192.168.1.143), assume IPv4 operation
Dec 08 20:36:07 pi4red k3s[375]: time="2020-12-08T20:36:07.557784695Z" level=info msg="Node CIDR assigned for: pi4red"
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.558057     375 flannel.go:92] Determining IP address of default interface
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.579440     375 flannel.go:105] Using interface with name eth0 and address 192.168.1.143
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.591151     375 kube.go:300] Starting kube subnet manager
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.595158     375 kube.go:117] Waiting 10m0s for node controller to sync
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.604475     375 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.608569     375 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609162     375 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609269     375 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609334     375 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609394     375 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609816     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609866     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609908     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.609949     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.609967     375 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.610201     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.610244     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.610283     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.610322     375 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.610340     375 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.615499     375 server_others.go:187] Using iptables Proxier.
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.619162     375 server.go:650] Version: v1.19.4+k3s1
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.621785     375 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.621901     375 conntrack.go:52] Setting nf_conntrack_max to 131072
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.683042     375 conntrack.go:83] Setting conntrack hashsize to 32768
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.698087     375 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.699137     375 container_manager_linux.go:289] container manager verified user specified cgroup-root exists: []
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.699221     375 container_manager_linux.go:294] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.702597     375 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.702978     375 container_manager_linux.go:324] [topologymanager] Initializing Topology Manager with none policy
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.703004     375 container_manager_linux.go:329] Creating device plugin manager: true
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.703614     375 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.703992     375 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.707517     375 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.707641     375 kubelet.go:273] Watching apiserver
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.733106     375 kuberuntime_manager.go:214] Container runtime containerd initialized, version: v1.4.1-k3s1, apiVersion: v1alpha2
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.740833     375 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.740985     375 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.741572     375 server.go:1148] Started kubelet
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.742006     375 config.go:315] Starting service config controller
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.742110     375 shared_informer.go:240] Waiting for caches to sync for service config
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.742290     375 config.go:224] Starting endpoint slice config controller
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.742318     375 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.744989     375 server.go:152] Starting to listen on 0.0.0.0:10250
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.747808     375 server.go:424] Adding debug handlers to kubelet server.
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.754141     375 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.757842     375 volume_manager.go:265] Starting Kubelet Volume Manager
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.759405     375 desired_state_of_world_populator.go:139] Desired state populator starts to run
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.771194     375 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.771304     375 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Dec 08 20:36:07 pi4red k3s[375]: time="2020-12-08T20:36:07.821984715Z" level=info msg="labels have already set on node: pi4red"
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.843343     375 shared_informer.go:247] Caches are synced for endpoint slice config
Dec 08 20:36:07 pi4red k3s[375]: W1208 20:36:07.860330     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/\""
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.880258     375 kubelet.go:2183] node "pi4red" not found
Dec 08 20:36:07 pi4red k3s[375]: I1208 20:36:07.954720     375 kubelet_node_status.go:70] Attempting to register node pi4red
Dec 08 20:36:07 pi4red k3s[375]: E1208 20:36:07.982152     375 kubelet.go:2183] node "pi4red" not found
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.042951     375 shared_informer.go:247] Caches are synced for service config
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.050267     375 status_manager.go:158] Starting to sync pod status with apiserver
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.050403     375 kubelet.go:1741] Starting kubelet main sync loop.
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.050613     375 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.083022     375 kubelet.go:2183] node "pi4red" not found
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.118904     375 cpu_manager.go:184] [cpumanager] starting with none policy
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.118951     375 cpu_manager.go:185] [cpumanager] reconciling every 10s
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.119362     375 state_mem.go:36] [cpumanager] initializing new in-memory state store
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.121288     375 state_mem.go:88] [cpumanager] updated default cpuset: ""
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.121332     375 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.121369     375 policy_none.go:43] [cpumanager] none policy: Start
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.127650     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods\""
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.134094     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort\""
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.143237     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable\""
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.149069     375 plugin_manager.go:114] Starting Kubelet Plugin Manager
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.152435     375 pod_container_deletor.go:79] Container "ceca318cdca46d1f6138de9fcc14bf478fa8a67ab71feff759d7695cecbde1f8" not found in pod's containers
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.152675     375 pod_container_deletor.go:79] Container "070f0833c5fd36abb62f3c0dad2b0293c49161906cb5c474c7e0444f09310932" not found in pod's containers
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.153739     375 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.155286     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/systemd/system.slice\""
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.155896     375 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "pi4red" not found
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.184641     375 kubelet.go:2183] node "pi4red" not found
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.272274     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-v9pqp" (UniqueName: "kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp") pod "svclb-traefik-qql2l" (UID: "16010812-bd67-472e-a354-d88a3c531c70")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.276514     375 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.285737     375 kubelet.go:2183] node "pi4red" not found
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.309865     375 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.315655     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998\""
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.321321     375 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable podbd1e51a9-a315-4d53-8a45-a86442bf0998]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/cpu.cfs_period_us: permission denied
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.332473     375 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.337716     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34\""
Dec 08 20:36:08 pi4red k3s[375]: E1208 20:36:08.342330     375 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable pod726c0300-292b-4b66-81a2-c370e0360f34]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/cpu.cfs_period_us: permission denied
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.352601     375 pod_container_deletor.go:79] Container "4494396bc4faa50c52e909b34c996f259ccb18f18c65cf42595748bf69ea8f3d" not found in pod's containers
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.353172     375 pod_container_deletor.go:79] Container "823c61076ea2a17b296acbf64b34e74ef8414681bf4075df4bc2eb246bfc72e0" not found in pod's containers
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.358587     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e\""
Dec 08 20:36:08 pi4red k3s[375]: W1208 20:36:08.362540     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70\""
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.372670     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-proc") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.373003     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-sys") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.373581     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-root") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.373798     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "alertmanager-main-db" (UniqueName: "kubernetes.io/empty-dir/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-db") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.373917     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "alertmanager-main-token-z4b7q" (UniqueName: "kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.374545     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "arm-exporter-token-9txj8" (UniqueName: "kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8") pod "arm-exporter-k5hxx" (UID: "bd1e51a9-a315-4d53-8a45-a86442bf0998")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.374903     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-exporter-token-svqfn" (UniqueName: "kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.375096     375 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.375154     375 reconciler.go:157] Reconciler: start to sync state
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.388428     375 kuberuntime_manager.go:992] updating runtime config through cri with podcidr 10.42.2.0/24
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.389776     375 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.2.0/24
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.595402     375 kube.go:124] Node controller sync successful
Dec 08 20:36:08 pi4red k3s[375]: I1208 20:36:08.596161     375 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.266839     375 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.272486     375 flannel.go:82] Running backend.
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.272894     375 vxlan_network.go:60] watching for new subnet leases
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.324369     375 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.327999     375 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.346770     375 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.347301     375 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.373861     375 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.378933     375 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.388971     375 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.399285     375 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.2.0/24 -j RETURN
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.419798     375 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.429457     375 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.454914     375 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.461116     375 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.470675     375 kubelet_node_status.go:108] Node pi4red was previously registered
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.471222     375 kubelet_node_status.go:73] Successfully registered node pi4red
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.496839     375 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.2.0/24 -j RETURN
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.541934     375 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Dec 08 20:36:09 pi4red k3s[375]: I1208 20:36:09.838807     375 network_policy_controller.go:149] Starting network policy controller
Dec 08 20:36:15 pi4red k3s[375]: W1208 20:36:15.513657     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/db94edbfe7be05d87a44f91c20ec3e5e3d73f8081b22ea509471cc3a4931904d\""
Dec 08 20:36:15 pi4red k3s[375]: W1208 20:36:15.523548     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e/3ed06b358ffe089da926082759deb1841ccdd2147dd9c8d5f0e7a90c4b3add1c\""
Dec 08 20:36:15 pi4red k3s[375]: W1208 20:36:15.633070     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/b15ea67793af4ae4ebd8e5200bacf5c21ba9ccc0932b49ab4b513b749afa3dde\""
Dec 08 20:36:15 pi4red k3s[375]: W1208 20:36:15.639752     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70/57c8e1b5385ed7e552c11f9267aff521a91320b050c8ea6328dbfb8dd39144cd\""
Dec 08 20:36:16 pi4red k3s[375]: W1208 20:36:16.466771     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/09784ab5427dd99a1e827bd6aff431b3da59dc8e7d11080981267a3649d1fca6\""
Dec 08 20:36:16 pi4red k3s[375]: W1208 20:36:16.574314     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e/39837e25688db07515c65cabfd9cf66eb2678aa42b526ae8f82f7dea635b5588\""
Dec 08 20:36:16 pi4red k3s[375]: W1208 20:36:16.640226     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70/b13c4877620d2adefb6fe96e64a3c2360235333022c453a27aa3cd48d2395a53\""
Dec 08 20:36:18 pi4red k3s[375]: W1208 20:36:18.375798     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/41d233e2fcbfe5af8e13e116c40d42dc918fb0a69d518a472af056a26a9670c2\""
Dec 08 20:36:18 pi4red k3s[375]: W1208 20:36:18.484752     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70/82b1732ac33c502ff5f2df82279f89949019c4f2a80449e0d3301a9d7411593a\""
Dec 08 20:36:19 pi4red k3s[375]: W1208 20:36:19.176491     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/47ff4db55d61e688b3d18570a4a243cec2fae3a9b37f8b59d126c3a09dbc21b4\""
Dec 08 20:36:20 pi4red k3s[375]: W1208 20:36:20.018732     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/dacbad1b2029de0bb6a35ea18dac3c122a3c0be8bcc6d51b95f538fe5df0f0f8\""
Dec 08 20:36:26 pi4red k3s[375]: W1208 20:36:26.695750     375 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e/34736b9cb58eccfcc8ff9d4923a11ef30df5c634a5482851be443b74fd714866\""

Those are the right logs, but I don't see any errors in them, probably because all the pods were already running; pods don't get restarted when k3s itself is restarted. You might try a full reboot and then collect the logs after that?
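
For reference, a minimal sketch of how to do that, assuming the standard install-script units (a k3s systemd unit on the server and a separate k3s-agent unit on the workers, which is probably also why journalctl -u k3s returns nothing on the worker nodes):

# On the server: reboot cleanly, then dump the k3s unit logs
sudo reboot
# ...once it is back up:
journalctl -u k3s -b --no-pager > k3s-server.log

# On each worker, the agent runs as its own unit:
journalctl -u k3s-agent -b --no-pager > k3s-agent.log

The -b flag restricts the output to the current boot, so the dump only contains what happened after the restart.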

I have just powered the cluster back on after leaving it off all night, and everything is going down now. These are the logs. I can SSH into both nodes but I cannot ping them, and I get no response at all from kubectl. The SSH session also breaks down after a while with a broken pipe... I have tried rebooting both Pi's twice but it didn't change anything. No clue what's going on.
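
In case it helps narrow this down, a few checks that can be run from the SSH session while a node is in this state. These are hedged suggestions, not something from this thread: vcgencmd ships with the Raspberry Pi firmware tools and iostat comes from the sysstat package, so both are assumptions about what is installed on the image.

# Non-zero output here means the Pi has seen undervoltage or throttling
vcgencmd get_throttled

# Watch I/O pressure on the microSD (mmcblk0); sustained high %util and
# long await times would match the always-on green LED and the slow SSH
sudo apt-get install -y sysstat
iostat -dx 5 mmcblk0

# Check whether k3s or containerd is pinning the CPU
top -o %CPU

If the microSD turns out to be saturated, that could also explain the "database is locked" errors in the master logs below: a single-server k3s setup keeps its state in a SQLite database on the SD card by default, and SQLite typically returns that error when a write cannot obtain the file lock before its busy timeout expires.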


Master (pi4blue)

Dec 09 09:31:24 pi4blue k3s[356]: Trace[1030373217]: ---"Objects listed" 16425ms (09:31:00.864)
Dec 09 09:31:24 pi4blue k3s[356]: Trace[1030373217]: [16.425403994s] [16.425403994s] END
Dec 09 09:31:24 pi4blue k3s[356]: I1209 09:31:24.917355     356 trace.go:205] Trace[1737362357]: "Reflector ListAndWatch" name:k8s.io/client-go/metadata/metadatainformer/informer.go:90 (09-Dec-2020 09:03:11.616) (total time: 16476ms):
Dec 09 09:31:24 pi4blue k3s[356]: Trace[1737362357]: ---"Objects listed" 16476ms (09:31:00.917)
Dec 09 09:31:24 pi4blue k3s[356]: Trace[1737362357]: [16.476969101s] [16.476969101s] END
Dec 09 09:31:24 pi4blue k3s[356]: I1209 09:31:24.918704     356 trace.go:205] Trace[1341473816]: "List etcd3" key:/monitoring.coreos.com/servicemonitors,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (09-Dec-2020 09:31:24.062) (total time: 849ms):
Dec 09 09:31:24 pi4blue k3s[356]: Trace[1341473816]: [849.068048ms] [849.068048ms] END
Dec 09 09:31:25 pi4blue k3s[356]: W1209 09:31:25.101927     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod072a9c2f-9d13-47c1-8d77-d2fe3544912c/02742316547daf8ccd5ecaa4c849b7084ac4ae5c117b2fdb9b0c3538842c564d\""
Dec 09 09:31:25 pi4blue k3s[356]: W1209 09:31:25.112217     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/podf480099c-44c2-4ada-903c-8b045ea3108e/c3b6d4a508e9e28b2a3b92b0f49bf8efd51dd3b2d69674d58966e13e9f480e2e\""
Dec 09 09:31:25 pi4blue k3s[356]: W1209 09:31:25.140465     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podb75f953b-fbde-471b-8a25-1358093ceade/49d2a4ef0e301604897f506c4def650bb33c3656fde1924d5a97d29facdef755\""
Dec 09 09:31:25 pi4blue k3s[356]: E1209 09:31:25.149179     356 customresource_handler.go:668] error building openapi models for prometheuses.monitoring.coreos.com: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.alerting.properties.alertmanagers.items.<array>.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.lifecycle.properties.postStart.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.lifecycle.properties.postStart.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.lifecycle.properties.preStop.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.lifecycle.properties.preStop.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.livenessProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.livenessProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.readinessProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.readinessProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.startupProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.containers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.lifecycle.properties.postStart.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.lifecycle.properties.postStart.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.lifecycle.properties.preStop.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.lifecycle.properties.preStop.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.livenessProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.livenessProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.readinessProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.readinessProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.httpGet.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: ERROR $root.definitions.com.coreos.monitoring.v1.Prometheus.properties.spec.properties.initContainers.items.<array>.properties.startupProbe.properties.tcpSocket.properties.port has invalid property: anyOf
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.176443     356 trace.go:205] Trace[1511897603]: "Reflector ListAndWatch" name:k8s.io/client-go/metadata/metadatainformer/informer.go:90 (09-Dec-2020 09:03:11.617) (total time: 16735ms):
Dec 09 09:31:25 pi4blue k3s[356]: Trace[1511897603]: ---"Objects listed" 16734ms (09:31:00.176)
Dec 09 09:31:25 pi4blue k3s[356]: Trace[1511897603]: [16.735023652s] [16.735023652s] END
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.207550     356 shared_informer.go:247] Caches are synced for garbage collector
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.207620     356 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.208436     356 shared_informer.go:247] Caches are synced for resource quota
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.243528     356 shared_informer.go:247] Caches are synced for resource quota
Dec 09 09:31:25 pi4blue k3s[356]: W1209 09:31:25.250841     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod57c0e206-b7b6-4adc-a553-3bdf14888b23/ea8393e1c41ff48db4785678cf31d3893f3cbd4b4103ab5683cd0c84c0ea5f72\""
Dec 09 09:31:25 pi4blue k3s[356]: I1209 09:31:25.265223     356 shared_informer.go:247] Caches are synced for garbage collector
Dec 09 09:31:25 pi4blue k3s[356]: E1209 09:31:25.424044     356 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Dec 09 09:31:25 pi4blue k3s[356]: E1209 09:31:25.879177     356 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Dec 09 09:31:26 pi4blue k3s[356]: I1209 09:31:26.050992     356 trace.go:205] Trace[1908245581]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:03:19.272) (total time: 9954ms):
Dec 09 09:31:26 pi4blue k3s[356]: Trace[1908245581]: ---"Transaction committed" 6684ms (09:03:00.961)
Dec 09 09:31:26 pi4blue k3s[356]: Trace[1908245581]: ---"Transaction committed" 3261ms (09:31:00.050)
Dec 09 09:31:26 pi4blue k3s[356]: Trace[1908245581]: [9.954263193s] [9.954263193s] END
Dec 09 09:31:26 pi4blue k3s[356]: I1209 09:31:26.052611     356 trace.go:205] Trace[1059017818]: "Update" url:/apis/apps/v1/namespaces/monitoring/replicasets/prometheus-adapter-585b57857b/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.268) (total time: 9960ms):
Dec 09 09:31:26 pi4blue k3s[356]: Trace[1059017818]: ---"Object stored in database" 9955ms (09:31:00.051)
Dec 09 09:31:26 pi4blue k3s[356]: Trace[1059017818]: [9.960066979s] [9.960066979s] END
Dec 09 09:31:27 pi4blue k3s[356]: I1209 09:31:27.439484     356 trace.go:205] Trace[1420133764]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:03:19.427) (total time: 11187ms):
Dec 09 09:31:27 pi4blue k3s[356]: Trace[1420133764]: ---"Transaction committed" 6714ms (09:03:00.171)
Dec 09 09:31:27 pi4blue k3s[356]: Trace[1420133764]: ---"Transaction committed" 4441ms (09:31:00.439)
Dec 09 09:31:27 pi4blue k3s[356]: Trace[1420133764]: [11.187735131s] [11.187735131s] END
Dec 09 09:31:27 pi4blue k3s[356]: I1209 09:31:27.441725     356 trace.go:205] Trace[1125154002]: "Update" url:/api/v1/namespaces/monitoring/endpoints/node-exporter,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.427) (total time: 11190ms):
Dec 09 09:31:27 pi4blue k3s[356]: Trace[1125154002]: ---"Object stored in database" 11189ms (09:31:00.441)
Dec 09 09:31:27 pi4blue k3s[356]: Trace[1125154002]: [11.19046013s] [11.19046013s] END
Dec 09 09:31:28 pi4blue k3s[356]: time="2020-12-09T09:31:28.317810456Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:28 pi4blue k3s[356]: time="2020-12-09T09:31:28.319450107Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:28 pi4blue k3s[356]: E1209 09:31:28.323287     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.324486     356 trace.go:205] Trace[307189908]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.251) (total time: 5249ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[307189908]: [5.249384229s] [5.249384229s] END
Dec 09 09:31:28 pi4blue k3s[356]: E1209 09:31:28.331664     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.332762     356 trace.go:205] Trace[2018265957]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:03:26.260) (total time: 5248ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[2018265957]: [5.248361102s] [5.248361102s] END
Dec 09 09:31:28 pi4blue k3s[356]: E1209 09:31:28.334368     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pi4blue.164f00e8f4d96890", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pi4blue", UID:"78b8db2c-3d1c-43e1-b611-94700e834887", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"RegisteredNode", Message:"Node pi4blue event: Registered Node pi4blue in Controller", Source:v1.EventSource{Component:"node-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec42941fe14890, ext:58071311132, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec42941fe14890, ext:58071311132, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:31:28 pi4blue k3s[356]: time="2020-12-09T09:31:28.337182125Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.337729     356 trace.go:205] Trace[1585968595]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Dec-2020 09:03:09.059) (total time: 22454ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: ---"Transaction committed" 8526ms (09:03:00.593)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: ---"Transaction prepared" 81ms (09:03:00.674)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: ---"Transaction committed" 8317ms (09:03:00.992)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: ---"Transaction prepared" 169ms (09:03:00.161)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: ---"Transaction committed" 259ms (09:03:00.421)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1585968595]: [22.454522924s] [22.454522924s] END
Dec 09 09:31:28 pi4blue k3s[356]: E1209 09:31:28.337797     356 controller.go:227] unable to sync kubernetes service: rpc error: code = Unknown desc = database is locked
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.695717     356 trace.go:205] Trace[474524047]: "Get" url:/apis/apps/v1/namespaces/kube-system/deployments/traefik,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (09-Dec-2020 09:31:25.887) (total time: 2807ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[474524047]: ---"About to write a response" 2807ms (09:31:00.695)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[474524047]: [2.807810565s] [2.807810565s] END
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.704087     356 trace.go:205] Trace[1876346976]: "Get" url:/apis/apps/v1/namespaces/kube-system/deployments/local-path-provisioner,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (09-Dec-2020 09:31:25.888) (total time: 2815ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1876346976]: ---"About to write a response" 2815ms (09:31:00.703)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1876346976]: [2.815195429s] [2.815195429s] END
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.705997     356 trace.go:205] Trace[1151006865]: "Get" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (09-Dec-2020 09:31:25.888) (total time: 2817ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1151006865]: ---"About to write a response" 2814ms (09:31:00.703)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[1151006865]: [2.817690692s] [2.817690692s] END
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.722440     356 trace.go:205] Trace[106704814]: "Get" url:/apis/apps/v1/namespaces/monitoring/deployments/kube-state-metrics,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (09-Dec-2020 09:31:25.888) (total time: 2833ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[106704814]: ---"About to write a response" 2831ms (09:31:00.719)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[106704814]: [2.833826891s] [2.833826891s] END
Dec 09 09:31:28 pi4blue k3s[356]: I1209 09:31:28.723465     356 trace.go:205] Trace[874798858]: "Get" url:/apis/apps/v1/namespaces/monitoring/deployments/grafana,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:generic-garbage-collector,client:127.0.0.1 (09-Dec-2020 09:31:25.888) (total time: 2834ms):
Dec 09 09:31:28 pi4blue k3s[356]: Trace[874798858]: ---"About to write a response" 2832ms (09:31:00.720)
Dec 09 09:31:28 pi4blue k3s[356]: Trace[874798858]: [2.834557765s] [2.834557765s] END
Dec 09 09:31:28 pi4blue k3s[356]: W1209 09:31:28.992999     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77/fe392eb88387aa27a635072ee0e7253137c7d4e0b009fe243ca0752b7a05375e WatchSource:0}: task fe392eb88387aa27a635072ee0e7253137c7d4e0b009fe243ca0752b7a05375e not found: not found
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.779386     356 trace.go:205] Trace[2072701279]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:03:26.196) (total time: 8758ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[2072701279]: ---"Transaction committed" 5271ms (09:31:00.292)
Dec 09 09:31:31 pi4blue k3s[356]: Trace[2072701279]: [8.758755295s] [8.758755295s] END
Dec 09 09:31:31 pi4blue k3s[356]: E1209 09:31:31.779499     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.780232     356 trace.go:205] Trace[1190306609]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4red,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:03:26.192) (total time: 8764ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[1190306609]: [8.764118868s] [8.764118868s] END
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.794570     356 trace.go:205] Trace[100242808]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:03:26.462) (total time: 8508ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[100242808]: ---"Transaction committed" 5093ms (09:31:00.390)
Dec 09 09:31:31 pi4blue k3s[356]: Trace[100242808]: [8.508026906s] [8.508026906s] END
Dec 09 09:31:31 pi4blue k3s[356]: E1209 09:31:31.794682     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.795652     356 trace.go:205] Trace[2126013010]: "Patch" url:/api/v1/namespaces/monitoring/pods/arm-exporter-k5hxx/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:03:26.461) (total time: 8510ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[2126013010]: ---"About to apply patch" 5094ms (09:31:00.390)
Dec 09 09:31:31 pi4blue k3s[356]: Trace[2126013010]: [8.510177306s] [8.510177306s] END
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.796717     356 trace.go:205] Trace[382456645]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:31:28.350) (total time: 3446ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[382456645]: [3.446313273s] [3.446313273s] END
Dec 09 09:31:31 pi4blue k3s[356]: E1209 09:31:31.796828     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:31 pi4blue k3s[356]: I1209 09:31:31.797912     356 trace.go:205] Trace[626133793]: "Patch" url:/api/v1/namespaces/monitoring/events/arm-exporter-k5hxx.164f00df0e1f9093,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:28.349) (total time: 3447ms):
Dec 09 09:31:31 pi4blue k3s[356]: Trace[626133793]: [3.447883333s] [3.447883333s] END
Dec 09 09:31:33 pi4blue k3s[356]: time="2020-12-09T09:31:33.429012725Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:33 pi4blue k3s[356]: I1209 09:31:33.429480     356 trace.go:205] Trace[1912586473]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Dec-2020 09:31:28.396) (total time: 5033ms):
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1912586473]: [5.033166492s] [5.033166492s] END
Dec 09 09:31:33 pi4blue k3s[356]: E1209 09:31:33.429568     356 controller.go:227] unable to sync kubernetes service: rpc error: code = Unknown desc = database is locked
Dec 09 09:31:33 pi4blue k3s[356]: time="2020-12-09T09:31:33.430284803Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:33 pi4blue k3s[356]: I1209 09:31:33.664333     356 trace.go:205] Trace[1012831482]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:03:19.421) (total time: 17418ms):
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1012831482]: ---"Transaction committed" 6802ms (09:03:00.226)
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1012831482]: ---"Transaction committed" 10610ms (09:31:00.664)
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1012831482]: [17.418876702s] [17.418876702s] END
Dec 09 09:31:33 pi4blue k3s[356]: I1209 09:31:33.664718     356 trace.go:205] Trace[1861858544]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/alertmanager-main-cwwp4,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.419) (total time: 17421ms):
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1861858544]: ---"Object stored in database" 17419ms (09:31:00.664)
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1861858544]: [17.421048619s] [17.421048619s] END
Dec 09 09:31:33 pi4blue k3s[356]: I1209 09:31:33.794071     356 trace.go:205] Trace[1961904932]: "iptables restore" (09-Dec-2020 09:31:29.762) (total time: 4031ms):
Dec 09 09:31:33 pi4blue k3s[356]: Trace[1961904932]: [4.031301217s] [4.031301217s] END
Dec 09 09:31:33 pi4blue k3s[356]: time="2020-12-09T09:31:33.829358274Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:33 pi4blue k3s[356]: time="2020-12-09T09:31:33.829396920Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:36 pi4blue k3s[356]: E1209 09:31:36.334384     356 manager.go:1123] Failed to create existing container: /kubepods/burstable/pod259ec5f7-b4c1-4e1e-8fc8-5f11c3b74b77/fe392eb88387aa27a635072ee0e7253137c7d4e0b009fe243ca0752b7a05375e: task fe392eb88387aa27a635072ee0e7253137c7d4e0b009fe243ca0752b7a05375e not found: not found
Dec 09 09:31:36 pi4blue k3s[356]: E1209 09:31:36.448406     356 controller.go:178] failed to update node lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 09 09:31:36 pi4blue k3s[356]: I1209 09:31:36.449991     356 trace.go:205] Trace[1108714190]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:26.485) (total time: 9964ms):
Dec 09 09:31:36 pi4blue k3s[356]: Trace[1108714190]: ---"Transaction committed" 5044ms (09:31:00.530)
Dec 09 09:31:36 pi4blue k3s[356]: Trace[1108714190]: [9.964300777s] [9.964300777s] END
Dec 09 09:31:36 pi4blue k3s[356]: E1209 09:31:36.450388     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:36 pi4blue k3s[356]: I1209 09:31:36.452051     356 trace.go:205] Trace[888639748]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:26.485) (total time: 9966ms):
Dec 09 09:31:36 pi4blue k3s[356]: Trace[888639748]: [9.966641317s] [9.966641317s] END
Dec 09 09:31:36 pi4blue k3s[356]: time="2020-12-09T09:31:36.590974312Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:36 pi4blue k3s[356]: I1209 09:31:36.826529     356 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9190e05cd41984276177d2a7ad358d771b0aa833debb02add9cdb40da4ded91c
Dec 09 09:31:37 pi4blue k3s[356]: I1209 09:31:37.344063     356 trace.go:205] Trace[710817164]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (09-Dec-2020 09:31:36.016) (total time: 1327ms):
Dec 09 09:31:37 pi4blue k3s[356]: Trace[710817164]: [1.327766526s] [1.327766526s] END
Dec 09 09:31:37 pi4blue k3s[356]: I1209 09:31:37.346201     356 trace.go:205] Trace[342812932]: "List" url:/apis/batch/v1/jobs,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (09-Dec-2020 09:31:36.016) (total time: 1329ms):
Dec 09 09:31:37 pi4blue k3s[356]: Trace[342812932]: ---"Listing from storage done" 1328ms (09:31:00.344)
Dec 09 09:31:37 pi4blue k3s[356]: Trace[342812932]: [1.329909996s] [1.329909996s] END
Dec 09 09:31:38 pi4blue k3s[356]: W1209 09:31:38.358040     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e/a66c58836a5a506fd514332da56e2a5d584ae7250a5797598702583ce675c7a8\""
Dec 09 09:31:39 pi4blue k3s[356]: I1209 09:31:39.899745     356 trace.go:205] Trace[389275259]: "GuaranteedUpdate etcd3" type:*apps.Deployment (09-Dec-2020 09:31:24.913) (total time: 14985ms):
Dec 09 09:31:39 pi4blue k3s[356]: Trace[389275259]: ---"Transaction committed" 14981ms (09:31:00.899)
Dec 09 09:31:39 pi4blue k3s[356]: Trace[389275259]: [14.985590774s] [14.985590774s] END
Dec 09 09:31:39 pi4blue k3s[356]: I1209 09:31:39.905882     356 trace.go:205] Trace[152761881]: "Update" url:/apis/apps/v1/namespaces/monitoring/deployments/prometheus-adapter/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (09-Dec-2020 09:31:24.913) (total time: 14991ms):
Dec 09 09:31:39 pi4blue k3s[356]: Trace[152761881]: ---"Object stored in database" 14989ms (09:31:00.902)
Dec 09 09:31:39 pi4blue k3s[356]: Trace[152761881]: [14.991730378s] [14.991730378s] END
Dec 09 09:31:40 pi4blue k3s[356]: W1209 09:31:40.372757     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod1e7d62e4-a24f-415b-85ae-0e89586e3fcc/861d3741acce0bb2c6077d55bf79b888bbe58438eca9058a2276a99b3ddd2cd5\""
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.800952     356 trace.go:205] Trace[1715402370]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:31.932) (total time: 9868ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1715402370]: ---"Transaction committed" 5067ms (09:31:00.001)
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1715402370]: [9.868462285s] [9.868462285s] END
Dec 09 09:31:41 pi4blue k3s[356]: E1209 09:31:41.801197     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.802855     356 trace.go:205] Trace[682247483]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4red,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:31.931) (total time: 9870ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[682247483]: [9.870745548s] [9.870745548s] END
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.808174     356 trace.go:205] Trace[153983554]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:31:35.333) (total time: 6474ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[153983554]: ---"Transaction committed" 5104ms (09:31:00.450)
Dec 09 09:31:41 pi4blue k3s[356]: Trace[153983554]: [6.474882891s] [6.474882891s] END
Dec 09 09:31:41 pi4blue k3s[356]: E1209 09:31:41.808442     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.810219     356 trace.go:205] Trace[1599385876]: "Patch" url:/api/v1/namespaces/monitoring/events/arm-exporter-k5hxx.164f00df0e1f9093,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:35.332) (total time: 6477ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1599385876]: ---"About to apply patch" 5106ms (09:31:00.450)
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1599385876]: [6.477502099s] [6.477502099s] END
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.811097     356 trace.go:205] Trace[2073789542]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:31:31.957) (total time: 9853ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[2073789542]: ---"Transaction committed" 6827ms (09:31:00.791)
Dec 09 09:31:41 pi4blue k3s[356]: Trace[2073789542]: [9.853303435s] [9.853303435s] END
Dec 09 09:31:41 pi4blue k3s[356]: E1209 09:31:41.811325     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:41 pi4blue k3s[356]: I1209 09:31:41.812998     356 trace.go:205] Trace[1912054331]: "Patch" url:/api/v1/namespaces/monitoring/pods/arm-exporter-k5hxx/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:31.957) (total time: 9855ms):
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1912054331]: ---"About to apply patch" 6828ms (09:31:00.791)
Dec 09 09:31:41 pi4blue k3s[356]: Trace[1912054331]: [9.855461738s] [9.855461738s] END
Dec 09 09:31:41 pi4blue k3s[356]: W1209 09:31:41.969097     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstable/podf6c8a21f-6d4f-4b51-987b-9e3a5176e07e/7a7c4631fe940f3d40bb608887bb8f3c4c513f57091d75f6a8fddffb1b8433c3 WatchSource:0}: task 7a7c4631fe940f3d40bb608887bb8f3c4c513f57091d75f6a8fddffb1b8433c3 not found: not found
Dec 09 09:31:42 pi4blue k3s[356]: time="2020-12-09T09:31:42.063855071Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:42 pi4blue k3s[356]: I1209 09:31:42.338541     356 trace.go:205] Trace[2082024636]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:03:19.031) (total time: 26483ms):
Dec 09 09:31:42 pi4blue k3s[356]: Trace[2082024636]: ---"Transaction committed" 11360ms (09:31:00.217)
Dec 09 09:31:42 pi4blue k3s[356]: Trace[2082024636]: ---"Transaction committed" 5088ms (09:31:00.307)
Dec 09 09:31:42 pi4blue k3s[356]: Trace[2082024636]: ---"Transaction committed" 10029ms (09:31:00.338)
Dec 09 09:31:42 pi4blue k3s[356]: Trace[2082024636]: [26.48361266s] [26.48361266s] END
Dec 09 09:31:42 pi4blue k3s[356]: I1209 09:31:42.339024     356 trace.go:205] Trace[1150565008]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/prometheus-adapter-gt2zh,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.022) (total time: 26492ms):
Dec 09 09:31:42 pi4blue k3s[356]: Trace[1150565008]: ---"Object stored in database" 26484ms (09:31:00.338)
Dec 09 09:31:42 pi4blue k3s[356]: Trace[1150565008]: [26.492564879s] [26.492564879s] END
Dec 09 09:31:42 pi4blue k3s[356]: I1209 09:31:42.785533     356 trace.go:205] Trace[760715598]: "Get" url:/api/v1/namespaces/monitoring/pods/alertmanager-main-0,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:41.996) (total time: 788ms):
Dec 09 09:31:42 pi4blue k3s[356]: Trace[760715598]: ---"About to write a response" 787ms (09:31:00.784)
Dec 09 09:31:42 pi4blue k3s[356]: Trace[760715598]: [788.40652ms] [788.40652ms] END
Dec 09 09:31:43 pi4blue k3s[356]: I1209 09:31:43.341669     356 trace.go:205] Trace[1917537226]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/local-path-provisioner-7ff9579c6/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:03:12.509) (total time: 34007ms):
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1917537226]: [34.007989606s] [34.007989606s] END
Dec 09 09:31:43 pi4blue k3s[356]: I1209 09:31:43.343525     356 trace.go:205] Trace[693703614]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/traefik-5dd496474/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:03:12.504) (total time: 34014ms):
Dec 09 09:31:43 pi4blue k3s[356]: Trace[693703614]: [34.014544652s] [34.014544652s] END
Dec 09 09:31:43 pi4blue k3s[356]: I1209 09:31:43.343930     356 trace.go:205] Trace[1720426180]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:03:12.505) (total time: 34014ms):
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1720426180]: ---"Transaction committed" 17506ms (09:31:00.838)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1720426180]: ---"Transaction committed" 5083ms (09:31:00.925)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1720426180]: ---"Transaction committed" 5073ms (09:31:00.003)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1720426180]: ---"Transaction committed" 5053ms (09:31:00.065)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1720426180]: [34.014672592s] [34.014672592s] END
Dec 09 09:31:43 pi4blue k3s[356]: I1209 09:31:43.344172     356 trace.go:205] Trace[1435711665]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:03:12.510) (total time: 34009ms):
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: ---"Transaction committed" 13528ms (09:03:00.041)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: ---"Transaction prepared" 61ms (09:03:00.104)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: ---"Transaction committed" 5395ms (09:31:00.323)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: ---"Transaction committed" 5070ms (09:31:00.397)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: ---"Transaction committed" 5049ms (09:31:00.449)
Dec 09 09:31:43 pi4blue k3s[356]: Trace[1435711665]: [34.0098991s] [34.0098991s] END
Dec 09 09:31:43 pi4blue k3s[356]: W1209 09:31:43.589960     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1e7d62e4-a24f-415b-85ae-0e89586e3fcc/7ba30d4ff4de9810ca9bd5ff4e8a0667e568e67aa038ee32caff856f7ad6ef47 WatchSource:0}: task 7ba30d4ff4de9810ca9bd5ff4e8a0667e568e67aa038ee32caff856f7ad6ef47 not found: not found
Dec 09 09:31:43 pi4blue k3s[356]: time="2020-12-09T09:31:43.814330068Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:43 pi4blue k3s[356]: time="2020-12-09T09:31:43.815104940Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:43 pi4blue k3s[356]: I1209 09:31:43.819849     356 trace.go:205] Trace[887234575]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Dec-2020 09:31:38.399) (total time: 5420ms):
Dec 09 09:31:43 pi4blue k3s[356]: Trace[887234575]: [5.420054201s] [5.420054201s] END
Dec 09 09:31:43 pi4blue k3s[356]: E1209 09:31:43.820074     356 controller.go:227] unable to sync kubernetes service: rpc error: code = Unknown desc = database is locked
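Note on the `database is locked` errors above: k3s's default datastore is a single SQLite file served through kine (`/var/lib/rancher/k3s/server/db/state.db` on a default install), and SQLite permits only one writer at a time. Once SD-card writes stall, transactions queue behind each other until they hit the lock timeout, which matches the multi-second "Transaction committed" traces throughout this log. A rough way to check whether the card itself is the bottleneck is to measure synchronous write latency on the node acting as server here (pi4blue); this is only a sketch using standard coreutils `dd`, assuming the default k3s data directory:

```sh
# Rough synchronous-write check on the SD card (run on the server node).
# oflag=dsync forces every 4 KiB write to hit the card; divide the total
# time dd reports by 250 for per-write latency. Tens of milliseconds per
# write would explain the multi-second commits traced above.
sync
dd if=/dev/zero of=/var/lib/rancher/ddtest bs=4k count=250 oflag=dsync
rm -f /var/lib/rancher/ddtest
```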
Dec 09 09:31:44 pi4blue k3s[356]: time="2020-12-09T09:31:44.220199299Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:45 pi4blue k3s[356]: W1209 09:31:45.141506     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pode27ac682-b971-4edd-a9c0-81554fb77008/e1e74cc3bf1d486b0bde5510983baa53f164176ec46782dfbd765ab2e9cb8437 WatchSource:0}: task e1e74cc3bf1d486b0bde5510983baa53f164176ec46782dfbd765ab2e9cb8437 not found: not found
Dec 09 09:31:45 pi4blue k3s[356]: time="2020-12-09T09:31:45.875584639Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:46 pi4blue k3s[356]: E1209 09:31:46.450211     356 controller.go:178] failed to update node lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 09 09:31:46 pi4blue k3s[356]: I1209 09:31:46.452020     356 trace.go:205] Trace[1999391231]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:36.543) (total time: 9906ms):
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1999391231]: ---"Transaction committed" 6383ms (09:31:00.930)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1999391231]: [9.906429023s] [9.906429023s] END
Dec 09 09:31:46 pi4blue k3s[356]: E1209 09:31:46.453575     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:46 pi4blue k3s[356]: I1209 09:31:46.455423     356 trace.go:205] Trace[558592528]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:36.543) (total time: 9912ms):
Dec 09 09:31:46 pi4blue k3s[356]: Trace[558592528]: [9.912191133s] [9.912191133s] END
Dec 09 09:31:46 pi4blue k3s[356]: E1209 09:31:46.461989     356 event.go:273] Unable to write event: 'Patch "https://127.0.0.1:6443/api/v1/namespaces/monitoring/events/prometheus-k8s-0.164f00e1716c8a97": read tcp 127.0.0.1:40248->127.0.0.1:6443: use of closed network connection' (may retry after sleeping)
Dec 09 09:31:46 pi4blue k3s[356]: I1209 09:31:46.476860     356 trace.go:205] Trace[1305855182]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:03:19.191) (total time: 30461ms):
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"initial value restored" 181ms (09:03:00.373)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"Transaction committed" 6791ms (09:03:00.166)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"Transaction committed" 5186ms (09:31:00.178)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"Transaction committed" 5052ms (09:31:00.232)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"Transaction committed" 5094ms (09:31:00.328)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: ---"Transaction committed" 5297ms (09:31:00.631)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1305855182]: [30.461237477s] [30.461237477s] END
Dec 09 09:31:46 pi4blue k3s[356]: E1209 09:31:46.540461     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:46 pi4blue k3s[356]: I1209 09:31:46.541644     356 trace.go:205] Trace[1789066701]: "Patch" url:/api/v1/namespaces/monitoring/events/prometheus-k8s-0.164f00e1716c8a97,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:03:19.190) (total time: 30527ms):
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 182ms (09:03:00.373)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 6792ms (09:03:00.166)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 5186ms (09:31:00.178)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 5053ms (09:31:00.232)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 5095ms (09:31:00.329)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: ---"About to apply patch" 5299ms (09:31:00.631)
Dec 09 09:31:46 pi4blue k3s[356]: Trace[1789066701]: [30.527295169s] [30.527295169s] END
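The failed lease updates just above (`Put .../kube-node-lease/leases/pi4blue` timing out, and the same for `pi4red`) are the direct mechanism behind the workers showing Not ready: kubelets heartbeat by renewing a Lease object in the `kube-node-lease` namespace, and when those writes can't be committed the node controller marks the nodes out. Once the API answers again, the staleness is visible with stock kubectl:

```sh
# Node heartbeats live as Lease objects; renewTime should be only
# seconds old for a healthy node.
kubectl get lease -n kube-node-lease
kubectl get lease pi4blue -n kube-node-lease -o yaml | grep renewTime
# Cross-check against the reported node conditions.
kubectl get nodes
```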
Dec 09 09:31:46 pi4blue k3s[356]: W1209 09:31:46.888307     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstable/podb7d55518-1bbc-44aa-809b-68614c48868a/af9176713935f2bcfc0042b5b86648211442da72dfe5254b79539a53c5f51762 WatchSource:0}: task af9176713935f2bcfc0042b5b86648211442da72dfe5254b79539a53c5f51762 not found: not found
Dec 09 09:31:47 pi4blue k3s[356]: time="2020-12-09T09:31:47.405241861Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:48 pi4blue k3s[356]: time="2020-12-09T09:31:48.150450438Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:48 pi4blue k3s[356]: time="2020-12-09T09:31:48.812035005Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.798161     356 trace.go:205] Trace[267420908]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:03:18.977) (total time: 33996ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: ---"Transaction committed" 7156ms (09:03:00.139)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: ---"Transaction committed" 5268ms (09:31:00.240)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: ---"Transaction committed" 5067ms (09:31:00.309)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: ---"Transaction committed" 5150ms (09:31:00.461)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: ---"Transaction committed" 9134ms (09:31:00.597)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[267420908]: [33.996916536s] [33.996916536s] END
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.798411     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.800002     356 trace.go:205] Trace[37682735]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/arm-exporter-llbvn,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:03:18.973) (total time: 34002ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[37682735]: [34.002499194s] [34.002499194s] END
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.811125     356 trace.go:205] Trace[562729221]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:03:19.016) (total time: 33970ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 7094ms (09:03:00.114)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 5354ms (09:31:00.294)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 5537ms (09:31:00.833)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 5088ms (09:31:00.923)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 5378ms (09:31:00.303)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: ---"Transaction committed" 5130ms (09:31:00.436)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[562729221]: [33.970721228s] [33.970721228s] END
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.811302     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.812919     356 trace.go:205] Trace[1573487446]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/alertmanager-operated-p58lg,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:03:18.986) (total time: 34002ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[1573487446]: [34.002458154s] [34.002458154s] END
Dec 09 09:31:49 pi4blue k3s[356]: W1209 09:31:49.816359     356 endpointslice_controller.go:284] Error syncing endpoint slices for service "monitoring/alertmanager-operated", retrying. Error: failed to update alertmanager-operated-p58lg EndpointSlice for Service monitoring/alertmanager-operated: context deadline exceeded
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.818466     356 event.go:291] "Event occurred" object="monitoring/alertmanager-operated" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service monitoring/alertmanager-operated: failed to update alertmanager-operated-p58lg EndpointSlice for Service monitoring/alertmanager-operated: context deadline exceeded"
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.830853     356 event.go:291] "Event occurred" object="monitoring/arm-exporter" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service monitoring/arm-exporter: failed to update arm-exporter-llbvn EndpointSlice for Service monitoring/arm-exporter: context deadline exceeded"
Dec 09 09:31:49 pi4blue k3s[356]: W1209 09:31:49.831477     356 endpointslice_controller.go:284] Error syncing endpoint slices for service "monitoring/arm-exporter", retrying. Error: failed to update arm-exporter-llbvn EndpointSlice for Service monitoring/arm-exporter: context deadline exceeded
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.851970     356 trace.go:205] Trace[585311047]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:03:19.034) (total time: 33993ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: ---"Transaction committed" 11375ms (09:31:00.238)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: ---"Transaction committed" 5055ms (09:31:00.296)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: ---"Transaction committed" 5041ms (09:31:00.339)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: ---"Transaction committed" 5435ms (09:31:00.779)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: ---"Transaction committed" 5169ms (09:31:00.953)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[585311047]: [33.993120856s] [33.993120856s] END
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.852157     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.853798     356 trace.go:205] Trace[1877291499]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/node-exporter-5lr68,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.027) (total time: 34002ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[1877291499]: [34.002284421s] [34.002284421s] END
Dec 09 09:31:49 pi4blue k3s[356]: W1209 09:31:49.855981     356 endpointslice_controller.go:284] Error syncing endpoint slices for service "monitoring/node-exporter", retrying. Error: failed to update node-exporter-5lr68 EndpointSlice for Service monitoring/node-exporter: context deadline exceeded
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.856269     356 event.go:291] "Event occurred" object="monitoring/node-exporter" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service monitoring/node-exporter: failed to update node-exporter-5lr68 EndpointSlice for Service monitoring/node-exporter: context deadline exceeded"
Dec 09 09:31:49 pi4blue k3s[356]: W1209 09:31:49.926386     356 handler_proxy.go:102] no RequestInfo found in the context
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.926636     356 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Dec 09 09:31:49 pi4blue k3s[356]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.926713     356 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
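The `v1beta1.metrics.k8s.io` failures here (the 503, `no RequestInfo found`, and the later `timed out waiting for https://10.43.190.210:443/...`) look like a downstream symptom rather than a separate fault: metrics-server is an aggregated API, so while the apiserver and datastore are thrashing it cannot be reached and the aggregator keeps requeueing it. Its state is easy to confirm once the cluster responds (the pod label is assumed from the bundled k3s manifest; adjust if yours differs):

```sh
# The aggregated metrics API should report Available=True when healthy.
kubectl get apiservice v1beta1.metrics.k8s.io
# The backing deployment shipped with k3s.
kubectl -n kube-system get pods -l k8s-app=metrics-server
```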
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.968323     356 trace.go:205] Trace[941321361]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (09-Dec-2020 09:03:19.191) (total time: 33952ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: ---"Transaction committed" 6942ms (09:03:00.150)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: ---"Transaction committed" 5302ms (09:31:00.279)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: ---"Transaction committed" 5049ms (09:31:00.333)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: ---"Transaction committed" 5078ms (09:31:00.424)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: ---"Transaction committed" 5493ms (09:31:00.921)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[941321361]: [33.952365656s] [33.952365656s] END
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.968517     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.969686     356 trace.go:205] Trace[647737474]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (09-Dec-2020 09:03:19.162) (total time: 33982ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction prepared" 42ms (09:03:00.205)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 6948ms (09:03:00.154)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 5137ms (09:31:00.118)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 5089ms (09:31:00.211)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 5142ms (09:31:00.357)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 5408ms (09:31:00.779)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Retry value restored" 36ms (09:31:00.816)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: ---"Transaction committed" 5129ms (09:31:00.951)
Dec 09 09:31:49 pi4blue k3s[356]: Trace[647737474]: [33.982999662s] [33.982999662s] END
Dec 09 09:31:49 pi4blue k3s[356]: E1209 09:31:49.969847     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.970836     356 trace.go:205] Trace[1587735439]: "Update" url:/apis/apps/v1/namespaces/monitoring/daemonsets/node-exporter/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:daemon-set-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.143) (total time: 34003ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[1587735439]: [34.003158516s] [34.003158516s] END
Dec 09 09:31:49 pi4blue k3s[356]: I1209 09:31:49.971427     356 trace.go:205] Trace[673958610]: "Update" url:/apis/apps/v1/namespaces/monitoring/daemonsets/arm-exporter/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:daemon-set-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.145) (total time: 34002ms):
Dec 09 09:31:49 pi4blue k3s[356]: Trace[673958610]: [34.002341106s] [34.002341106s] END
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.073474     356 trace.go:205] Trace[969480979]: "GuaranteedUpdate etcd3" type:*apps.StatefulSet (09-Dec-2020 09:03:19.252) (total time: 33997ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction committed" 6717ms (09:03:00.992)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction prepared" 160ms (09:03:00.153)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction committed" 5315ms (09:31:00.292)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction committed" 5042ms (09:31:00.340)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction committed" 5103ms (09:31:00.450)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction committed" 8059ms (09:31:00.517)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: ---"Transaction prepared" 100ms (09:31:00.619)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[969480979]: [33.997457923s] [33.997457923s] END
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.073687     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.075240     356 trace.go:205] Trace[832958749]: "Update" url:/apis/apps/v1/namespaces/monitoring/statefulsets/prometheus-k8s/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:statefulset-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.247) (total time: 34004ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[832958749]: [34.004004024s] [34.004004024s] END
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.095757     356 stateful_set.go:392] error syncing StatefulSet monitoring/prometheus-k8s, requeuing: context deadline exceeded
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.096032     356 trace.go:205] Trace[905424598]: "GuaranteedUpdate etcd3" type:*apiregistration.APIService (09-Dec-2020 09:03:26.048) (total time: 27223ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[905424598]: ---"Transaction committed" 5265ms (09:31:00.139)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[905424598]: ---"Transaction committed" 5060ms (09:31:00.201)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[905424598]: ---"Transaction committed" 5179ms (09:31:00.381)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[905424598]: ---"Transaction committed" 11710ms (09:31:00.095)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[905424598]: [27.223346157s] [27.223346157s] END
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.098049     356 trace.go:205] Trace[1070183607]: "Update" url:/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:03:25.965) (total time: 27307ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1070183607]: ---"Object stored in database" 27225ms (09:31:00.096)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1070183607]: [27.307976884s] [27.307976884s] END
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.100239     356 daemon_controller.go:320] monitoring/arm-exporter failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"arm-exporter", GenerateName:"", Namespace:"monitoring", SelfLink:"/apis/apps/v1/namespaces/monitoring/daemonsets/arm-exporter", UID:"3eabcd88-0746-4230-b050-2e9513230ae2", ResourceVersion:"14266", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63743042993, loc:(*time.Location)(0x57bdca8)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"arm-exporter"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"arm-exporter\"},\"name\":\"arm-exporter\",\"namespace\":\"monitoring\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"arm-exporter\"}},\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"arm-exporter\"}},\"spec\":{\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"kubernetes.io/arch\",\"operator\":\"In\",\"values\":[\"arm\",\"arm64\"]}]}]}}},\"containers\":[{\"command\":[\"/bin/rpi_exporter\",\"--web.listen-address=127.0.0.1:9243\"],\"image\":\"carlosedp/arm_exporter:latest\",\"name\":\"arm-exporter\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"},\"requests\":{\"cpu\":\"50m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"privileged\":true}},{\"args\":[\"--secure-listen-address=$(IP):9243\",\"--upstream=http://127.0.0.1:9243/\",\"--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\"],\"env\":[{\"name\":\"IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"carlosedp/kube-rbac-proxy:v0.5.0\",\"name\":\"kube-rbac-proxy\",\"ports\":[{\"containerPort\":9243,\"hostPort\":9243,\"name\":\"https\"}],\"resources\":{\"limits\":{\"cpu\":\"20m\",\"memory\":\"40Mi\"},\"requests\":{\"cpu\":\"10m\",\"memory\":\"20Mi\"}}}],\"serviceAccountName\":\"arm-exporter\",\"tolerations\":[{\"operator\":\"Exists\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x202c2360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xced44d0)}, v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x202c2380), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xced44e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xced44f0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"arm-exporter"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"arm-exporter", Image:"carlosedp/arm_exporter:latest", Command:[]string{"/bin/rpi_exporter", "--web.listen-address=127.0.0.1:9243"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:50, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(0x7eef2f0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"kube-rbac-proxy", Image:"carlosedp/kube-rbac-proxy:v0.5.0", Command:[]string(nil), Args:[]string{"--secure-listen-address=$(IP):9243", "--upstream=http://127.0.0.1:9243/", "--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"https", HostPort:9243, ContainerPort:9243, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xced4530)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:20, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:41943040, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x18f6c148), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"arm-exporter", DeprecatedServiceAccount:"arm-exporter", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", 
HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x202a6a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xced4550), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xf6d8d48)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x18f6c1e0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: context deadline exceeded
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.100339     356 available_controller.go:490] v1beta1.metrics.k8s.io failed with: timed out waiting for https://10.43.190.210:443/apis/metrics.k8s.io/v1beta1
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.115796     356 daemon_controller.go:320] monitoring/node-exporter failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-exporter", GenerateName:"", Namespace:"monitoring", SelfLink:"/apis/apps/v1/namespaces/monitoring/daemonsets/node-exporter", UID:"10a37fd4-e73e-4230-81a8-67602972e3ac", ResourceVersion:"14215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63743043007, loc:(*time.Location)(0x57bdca8)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"node-exporter", "app.kubernetes.io/version":"v0.18.1"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"node-exporter\",\"app.kubernetes.io/version\":\"v0.18.1\"},\"name\":\"node-exporter\",\"namespace\":\"monitoring\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app.kubernetes.io/name\":\"node-exporter\"}},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/name\":\"node-exporter\",\"app.kubernetes.io/version\":\"v0.18.1\"}},\"spec\":{\"containers\":[{\"args\":[\"--web.listen-address=127.0.0.1:9100\",\"--path.procfs=/host/proc\",\"--path.sysfs=/host/sys\",\"--path.rootfs=/host/root\",\"--no-collector.wifi\",\"--no-collector.hwmon\",\"--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)\"],\"image\":\"prom/node-exporter:v0.18.1\",\"name\":\"node-exporter\",\"resources\":{\"limits\":{\"cpu\":\"250m\",\"memory\":\"180Mi\"},\"requests\":{\"cpu\":\"102m\",\"memory\":\"180Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/host/proc\",\"name\":\"proc\",\"readOnly\":false},{\"mountPath\":\"/host/sys\",\"name\":\"sys\",\"readOnly\":false},{\"mountPath\":\"/host/root\",\"mountPropagation\":\"HostToContainer\",\"name\":\"root\",\"readOnly\":true}]},{\"args\":[\"--logtostderr\",\"--secure-listen-address=[$(IP)]:9100\",\"--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\",\"--upstream=http://127.0.0.1:9100/\"],\"env\":[{\"name\":\"IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"carlosedp/kube-rbac-proxy:v0.5.0\",\"name\":\"kube-rbac-proxy\",\"ports\":[{\"containerPort\":9100,\"hostPort\":9100,\"name\":\"https\"}],\"resources\":{\"limits\":{\"cpu\":\"20m\",\"memory\":\"40Mi\"},\"requests\":{\"cpu\":\"10m\",\"memory\":\"20Mi\"}}}],\"hostNetwork\":true,\"hostPID\":true,\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":65534},\"serviceAccountName\":\"node-exporter\",\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/proc\"},\"name\":\"proc\"},{\"hostPath\":{\"path\":\"/sys\"},\"name\":\"sys\"},{\"hostPath\":{\"path\":\"/\"},\"name\":\"root\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x202c2060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xced4420)}, 
v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x202c2080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xced4440)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xced4450), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"node-exporter", "app.kubernetes.io/version":"v0.18.1"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"proc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xced4460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"sys", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xced4470), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:"root", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xced4480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"node-exporter", Image:"prom/node-exporter:v0.18.1", Command:[]string(nil), Args:[]string{"--web.listen-address=127.0.0.1:9100", "--path.procfs=/host/proc", "--path.sysfs=/host/sys", "--path.rootfs=/host/root", "--no-collector.wifi", "--no-collector.hwmon", "--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:250, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"250m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:188743680, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"180Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:102, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"102m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:188743680, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"180Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"proc", ReadOnly:false, MountPath:"/host/proc", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"sys", ReadOnly:false, MountPath:"/host/sys", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"root", ReadOnly:true, MountPath:"/host/root", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(0xf6d8d18), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"kube-rbac-proxy", Image:"carlosedp/kube-rbac-proxy:v0.5.0", Command:[]string(nil), Args:[]string{"--logtostderr", 
"--secure-listen-address=[$(IP)]:9100", "--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "--upstream=http://127.0.0.1:9100/"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"https", HostPort:9100, ContainerPort:9100, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xced44b0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:20, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:41943040, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x185d7f78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"node-exporter", DeprecatedServiceAccount:"node-exporter", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:true, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x202a6a00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xf6d8d28)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x185d7fd8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: context deadline exceeded
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.191326     356 trace.go:205] Trace[1870460093]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:03:19.367) (total time: 33999ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 6718ms (09:03:00.089)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 5108ms (09:31:00.023)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 5071ms (09:31:00.096)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 5075ms (09:31:00.173)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 5458ms (09:31:00.635)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: ---"Transaction committed" 5278ms (09:31:00.916)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1870460093]: [33.999648971s] [33.999648971s] END
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.191423     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.192125     356 trace.go:205] Trace[1839992794]: "Update" url:/api/v1/namespaces/monitoring/endpoints/alertmanager-operated,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.367) (total time: 34001ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1839992794]: [34.001069884s] [34.001069884s] END
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.193587     356 event.go:291] "Event occurred" object="monitoring/alertmanager-operated" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint monitoring/alertmanager-operated: context deadline exceeded"
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.241927     356 trace.go:205] Trace[1784214064]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:03:19.424) (total time: 33993ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 6755ms (09:03:00.182)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 5322ms (09:31:00.330)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 5077ms (09:31:00.408)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 5137ms (09:31:00.546)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 5481ms (09:31:00.030)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: ---"Transaction committed" 5162ms (09:31:00.195)
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1784214064]: [33.993323313s] [33.993323313s] END
Dec 09 09:31:50 pi4blue k3s[356]: E1209 09:31:50.243072     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.243768     356 trace.go:205] Trace[1382100698]: "Update" url:/api/v1/namespaces/monitoring/endpoints/prometheus-operator,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:03:19.417) (total time: 34002ms):
Dec 09 09:31:50 pi4blue k3s[356]: Trace[1382100698]: [34.002147366s] [34.002147366s] END
Dec 09 09:31:50 pi4blue k3s[356]: I1209 09:31:50.245357     356 event.go:291] "Event occurred" object="monitoring/prometheus-operator" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint monitoring/prometheus-operator: context deadline exceeded"
Dec 09 09:31:51 pi4blue k3s[356]: time="2020-12-09T09:31:51.334400025Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:51 pi4blue k3s[356]: I1209 09:31:51.809143     356 trace.go:205] Trace[1473493818]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:41.906) (total time: 9902ms):
Dec 09 09:31:51 pi4blue k3s[356]: Trace[1473493818]: ---"Transaction committed" 5471ms (09:31:00.380)
Dec 09 09:31:51 pi4blue k3s[356]: Trace[1473493818]: [9.902375919s] [9.902375919s] END
Dec 09 09:31:51 pi4blue k3s[356]: E1209 09:31:51.809242     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:51 pi4blue k3s[356]: I1209 09:31:51.809994     356 trace.go:205] Trace[13426160]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4red,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:41.906) (total time: 9903ms):
Dec 09 09:31:51 pi4blue k3s[356]: Trace[13426160]: [9.903818163s] [9.903818163s] END
Dec 09 09:31:51 pi4blue k3s[356]: I1209 09:31:51.822584     356 trace.go:205] Trace[1333176469]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:31:42.973) (total time: 8848ms):
Dec 09 09:31:51 pi4blue k3s[356]: Trace[1333176469]: ---"Transaction committed" 5942ms (09:31:00.926)
Dec 09 09:31:51 pi4blue k3s[356]: Trace[1333176469]: [8.848671209s] [8.848671209s] END
Dec 09 09:31:51 pi4blue k3s[356]: E1209 09:31:51.822699     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:51 pi4blue k3s[356]: I1209 09:31:51.823430     356 trace.go:205] Trace[760923484]: "Patch" url:/api/v1/namespaces/monitoring/pods/alertmanager-main-0/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:42.972) (total time: 8850ms):
Dec 09 09:31:51 pi4blue k3s[356]: Trace[760923484]: ---"About to apply patch" 5967ms (09:31:00.950)
Dec 09 09:31:51 pi4blue k3s[356]: Trace[760923484]: [8.850441762s] [8.850441762s] END
Dec 09 09:31:51 pi4blue k3s[356]: time="2020-12-09T09:31:51.826979888Z" level=info msg="Cluster-Http-Server 2020/12/09 09:31:51 http: TLS handshake error from 192.168.1.143:55866: EOF"
Dec 09 09:31:51 pi4blue k3s[356]: time="2020-12-09T09:31:51.834821988Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:52 pi4blue k3s[356]: time="2020-12-09T09:31:52.481946118Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:52 pi4blue k3s[356]: time="2020-12-09T09:31:52.703405112Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:52 pi4blue k3s[356]: time="2020-12-09T09:31:52.992861958Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:53 pi4blue k3s[356]: time="2020-12-09T09:31:53.404240865Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:53 pi4blue k3s[356]: I1209 09:31:53.405278     356 trace.go:205] Trace[459518302]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Dec-2020 09:31:48.351) (total time: 5054ms):
Dec 09 09:31:53 pi4blue k3s[356]: Trace[459518302]: [5.054035077s] [5.054035077s] END
Dec 09 09:31:53 pi4blue k3s[356]: E1209 09:31:53.405876     356 controller.go:227] unable to sync kubernetes service: rpc error: code = Unknown desc = database is locked
Dec 09 09:31:53 pi4blue k3s[356]: I1209 09:31:53.884082     356 trace.go:205] Trace[2025179559]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:31:47.147) (total time: 6736ms):
Dec 09 09:31:53 pi4blue k3s[356]: Trace[2025179559]: ---"initial value restored" 215ms (09:31:00.363)
Dec 09 09:31:53 pi4blue k3s[356]: Trace[2025179559]: ---"Transaction committed" 6517ms (09:31:00.883)
Dec 09 09:31:53 pi4blue k3s[356]: Trace[2025179559]: [6.736348221s] [6.736348221s] END
Dec 09 09:31:53 pi4blue k3s[356]: I1209 09:31:53.884963     356 trace.go:205] Trace[1858317852]: "Patch" url:/api/v1/namespaces/monitoring/events/prometheus-k8s-0.164f00e1716c8a97,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:47.147) (total time: 6737ms):
Dec 09 09:31:53 pi4blue k3s[356]: Trace[1858317852]: ---"About to apply patch" 216ms (09:31:00.363)
Dec 09 09:31:53 pi4blue k3s[356]: Trace[1858317852]: ---"Object stored in database" 6518ms (09:31:00.884)
Dec 09 09:31:53 pi4blue k3s[356]: Trace[1858317852]: [6.737462714s] [6.737462714s] END
Dec 09 09:31:53 pi4blue k3s[356]: time="2020-12-09T09:31:53.980587562Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:53 pi4blue k3s[356]: time="2020-12-09T09:31:53.983739565Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:54 pi4blue k3s[356]: time="2020-12-09T09:31:54.217212809Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:54 pi4blue k3s[356]: time="2020-12-09T09:31:54.460792861Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:55 pi4blue k3s[356]: time="2020-12-09T09:31:55.095123697Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:55 pi4blue k3s[356]: time="2020-12-09T09:31:55.115861710Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:55 pi4blue k3s[356]: E1209 09:31:55.117427     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:31:55 pi4blue k3s[356]: I1209 09:31:55.119735     356 trace.go:205] Trace[2052093136]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:31:49.827) (total time: 5292ms):
Dec 09 09:31:55 pi4blue k3s[356]: Trace[2052093136]: [5.292259442s] [5.292259442s] END
Dec 09 09:31:55 pi4blue k3s[356]: E1209 09:31:55.135105     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"alertmanager-operated.164f0278cacc35b3", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"monitoring", Name:"alertmanager-operated", UID:"f5c63bb3-4b5b-4bbb-bd5e-c4cbf7181588", APIVersion:"v1", ResourceVersion:"1916", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service monitoring/alertmanager-operated: failed to update alertmanager-operated-p58lg EndpointSlice for Service monitoring/alertmanager-operated: context deadline exceeded", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec444170a743b3, ext:98529030394, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec444170a743b3, ext:98529030394, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:31:55 pi4blue k3s[356]: time="2020-12-09T09:31:55.214917880Z" level=error msg="error in txn: database is locked"
Dec 09 09:31:55 pi4blue k3s[356]: E1209 09:31:55.216578     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:31:55 pi4blue k3s[356]: I1209 09:31:55.218925     356 trace.go:205] Trace[212821778]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:31:50.198) (total time: 5020ms):
Dec 09 09:31:55 pi4blue k3s[356]: Trace[212821778]: [5.020733698s] [5.020733698s] END
Dec 09 09:31:55 pi4blue k3s[356]: E1209 09:31:55.223012     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"alertmanager-operated.164f0278e145234d", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Endpoints", Namespace:"monitoring", Name:"alertmanager-operated", UID:"fb9a0166-863f-4310-846d-e7bb0351881b", APIVersion:"v1", ResourceVersion:"14172", FieldPath:""}, Reason:"FailedToUpdateEndpoint", Message:"Failed to update endpoint monitoring/alertmanager-operated: context deadline exceeded", Source:v1.EventSource{Component:"endpoint-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec44418b85674d, ext:98906053607, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec44418b85674d, ext:98906053607, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:31:55 pi4blue k3s[356]: E1209 09:31:55.761728     356 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
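Note: the recurring `error in txn: database is locked` above is SQLite's `SQLITE_BUSY` error surfacing through kine, the shim k3s uses to back the apiserver with an embedded SQLite database instead of etcd. A quick integrity/size check of that database is possible on the server node (a sketch; the path is the k3s default, and the `kine` table name assumes kine's stock schema):

```
# Run on the k3s server node, ideally while k3s is stopped or quiet.
sudo sqlite3 /var/lib/rancher/k3s/server/db/state.db 'PRAGMA integrity_check;'   # expect "ok"
sudo sqlite3 /var/lib/rancher/k3s/server/db/state.db 'SELECT COUNT(*) FROM kine;' # size of the key-value table
```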
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.584427     356 trace.go:205] Trace[1595646836]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:47.215) (total time: 9369ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1595646836]: ---"Transaction committed" 5226ms (09:31:00.444)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1595646836]: [9.369140623s] [9.369140623s] END
Dec 09 09:31:56 pi4blue k3s[356]: E1209 09:31:56.584624     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.586076     356 trace.go:205] Trace[226525343]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:47.181) (total time: 9404ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[226525343]: [9.404791835s] [9.404791835s] END
Dec 09 09:31:56 pi4blue k3s[356]: E1209 09:31:56.584474     356 controller.go:178] failed to update node lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
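The `failed to update node lease` error just above is probably what drives the NotReady status: each kubelet reports liveness by renewing a Lease object in the `kube-node-lease` namespace, and here that PUT times out after 10s against the overloaded datastore. Once `kubectl` answers again, lease renewal can be watched directly (sketch):

```
# renewTime should advance roughly every 10 seconds for a healthy node
kubectl -n kube-node-lease get lease pi4blue -o yaml | grep renewTime
```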
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.598491     356 trace.go:205] Trace[1059112308]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:31:53.903) (total time: 2694ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1059112308]: [2.694432266s] [2.694432266s] END
Dec 09 09:31:56 pi4blue k3s[356]: E1209 09:31:56.598641     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:31:56 pi4blue k3s[356]: E1209 09:31:56.599876     356 event.go:273] Unable to write event: 'Patch "https://127.0.0.1:6443/api/v1/namespaces/monitoring/events/prometheus-adapter-585b57857b-s4t9n.164f00e171619609": read tcp 127.0.0.1:40684->127.0.0.1:6443: use of closed network connection' (may retry after sleeping)
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.600754     356 trace.go:205] Trace[332215568]: "Patch" url:/api/v1/namespaces/monitoring/events/prometheus-adapter-585b57857b-s4t9n.164f00e171619609,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:53.903) (total time: 2696ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[332215568]: [2.696730752s] [2.696730752s] END
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.920425     356 trace.go:205] Trace[1748853751]: "Update" url:/apis/apps/v1/namespaces/monitoring/deployments/kube-state-metrics/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.090) (total time: 34005ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1748853751]: [34.005941139s] [34.005941139s] END
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.934577     356 trace.go:205] Trace[1716033225]: "GuaranteedUpdate etcd3" type:*apps.Deployment (09-Dec-2020 09:03:26.091) (total time: 34019ms):
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 5223ms (09:31:00.143)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 5068ms (09:31:00.215)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 5156ms (09:31:00.376)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 6810ms (09:31:00.196)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction prepared" 28ms (09:31:00.225)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 5111ms (09:31:00.336)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: ---"Transaction committed" 5027ms (09:31:00.367)
Dec 09 09:31:56 pi4blue k3s[356]: Trace[1716033225]: [34.019207283s] [34.019207283s] END
Dec 09 09:31:56 pi4blue k3s[356]: I1209 09:31:56.986354     356 request.go:645] Throttling request took 1.065298999s, request: GET:https://127.0.0.1:6444/apis/coordination.k8s.io/v1beta1?timeout=32s
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.096957     356 trace.go:205] Trace[772588316]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:03:26.264) (total time: 34008ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction prepared" 39ms (09:03:00.304)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 5150ms (09:31:00.278)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 5053ms (09:31:00.333)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 5120ms (09:31:00.455)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 5381ms (09:31:00.838)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction prepared" 30ms (09:31:00.869)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 7474ms (09:31:00.343)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: ---"Transaction committed" 5057ms (09:31:00.403)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[772588316]: [34.008837857s] [34.008837857s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.099579     356 trace.go:205] Trace[97480765]: "Update" url:/api/v1/namespaces/monitoring/endpoints/alertmanager-main,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.263) (total time: 34012ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[97480765]: [34.012160765s] [34.012160765s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.120624     356 trace.go:205] Trace[617161504]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns-prometheus-discovery,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.289) (total time: 34006ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[617161504]: [34.00689621s] [34.00689621s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.129605     356 trace.go:205] Trace[141748108]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:03:26.291) (total time: 34014ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5272ms (09:31:00.389)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5042ms (09:31:00.433)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5088ms (09:31:00.522)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5371ms (09:31:00.894)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction prepared" 26ms (09:31:00.921)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5277ms (09:31:00.199)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: ---"Transaction committed" 5019ms (09:31:00.219)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[141748108]: [34.014648478s] [34.014648478s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.133451     356 event.go:291] "Event occurred" object="monitoring/alertmanager-main" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint monitoring/alertmanager-main: Timeout: request did not complete within requested timeout 34s"
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.138007     356 event.go:291] "Event occurred" object="kube-system/kube-dns-prometheus-discovery" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns-prometheus-discovery: Timeout: request did not complete within requested timeout 34s"
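Nearly every `Transaction committed` step in these traces lands just over 5000ms, which looks like a fixed lock-wait/retry interval in kine/SQLite rather than raw write throughput, and every SQLite commit has to fsync to the SD card. A rough way to measure fsync-heavy 4k write latency on the card itself (a sketch using fio against the k3s datastore directory; adjust path and sizes as needed):

```
sudo fio --name=sd-fsync --directory=/var/lib/rancher/k3s/server/db \
    --rw=randwrite --bs=4k --size=16m --fsync=1 \
    --runtime=30 --time_based --group_reporting
```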
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.333248     356 trace.go:205] Trace[1655048550]: "GuaranteedUpdate etcd3" type:*apps.Deployment (09-Dec-2020 09:03:26.510) (total time: 33999ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: ---"Transaction committed" 5128ms (09:31:00.478)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: ---"Transaction committed" 5049ms (09:31:00.531)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: ---"Transaction committed" 5089ms (09:31:00.623)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: ---"Transaction committed" 5204ms (09:31:00.830)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: ---"Transaction committed" 5300ms (09:31:00.135)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1655048550]: [33.999389105s] [33.999389105s] END
Dec 09 09:31:57 pi4blue k3s[356]: E1209 09:31:57.333348     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.334219     356 trace.go:205] Trace[139111418]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/metrics-server/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.509) (total time: 34001ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[139111418]: [34.001248768s] [34.001248768s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.371270     356 trace.go:205] Trace[1539771805]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:03:26.547) (total time: 33999ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction prepared" 86ms (09:03:00.634)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5069ms (09:31:00.528)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5076ms (09:31:00.608)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5135ms (09:31:00.747)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5474ms (09:31:00.225)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Retry value restored" 24ms (09:31:00.249)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction prepared" 34ms (09:31:00.284)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5114ms (09:31:00.399)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: ---"Transaction committed" 5022ms (09:31:00.425)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1539771805]: [33.999513242s] [33.999513242s] END
Dec 09 09:31:57 pi4blue k3s[356]: E1209 09:31:57.371354     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.372000     356 trace.go:205] Trace[5223220]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-66c464876b/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.547) (total time: 34001ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[5223220]: [34.001002006s] [34.001002006s] END
Dec 09 09:31:57 pi4blue k3s[356]: time="2020-12-09T09:31:57.460076438Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.578658     356 trace.go:205] Trace[347770871]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:03:26.759) (total time: 33995ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5085ms (09:31:00.679)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5107ms (09:31:00.799)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5139ms (09:31:00.948)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5165ms (09:31:00.124)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction prepared" 62ms (09:31:00.188)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5248ms (09:31:00.436)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: ---"Transaction committed" 5020ms (09:31:00.466)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[347770871]: [33.995400759s] [33.995400759s] END
Dec 09 09:31:57 pi4blue k3s[356]: E1209 09:31:57.578738     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.579668     356 trace.go:205] Trace[1901434431]: "Update" url:/apis/apps/v1/namespaces/monitoring/replicasets/grafana-7cccfc9b5f/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.754) (total time: 34001ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[1901434431]: [34.001236524s] [34.001236524s] END
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.580581     356 trace.go:205] Trace[40803142]: "GuaranteedUpdate etcd3" type:*apps.Deployment (09-Dec-2020 09:03:26.768) (total time: 33987ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction prepared" 38ms (09:03:00.807)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5048ms (09:31:00.679)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5112ms (09:31:00.798)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5144ms (09:31:00.946)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5411ms (09:31:00.362)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction prepared" 28ms (09:31:00.391)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5143ms (09:31:00.535)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: ---"Transaction committed" 5028ms (09:31:00.567)
Dec 09 09:31:57 pi4blue k3s[356]: Trace[40803142]: [33.987984948s] [33.987984948s] END
Dec 09 09:31:57 pi4blue k3s[356]: E1209 09:31:57.580654     356 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
Dec 09 09:31:57 pi4blue k3s[356]: I1209 09:31:57.581482     356 trace.go:205] Trace[945252269]: "Update" url:/apis/apps/v1/namespaces/monitoring/deployments/prometheus-operator/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (09-Dec-2020 09:03:26.756) (total time: 34001ms):
Dec 09 09:31:57 pi4blue k3s[356]: Trace[945252269]: [34.00128939s] [34.00128939s] END
Dec 09 09:31:57 pi4blue k3s[356]: W1209 09:31:57.871392     356 garbagecollector.go:642] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Dec 09 09:31:58 pi4blue k3s[356]: I1209 09:31:58.890813     356 trace.go:205] Trace[1761967746]: "Get" url:/apis/apps/v1/namespaces/monitoring/replicasets/grafana-7cccfc9b5f,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:31:57.584) (total time: 1306ms):
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1761967746]: ---"About to write a response" 1304ms (09:31:00.889)
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1761967746]: [1.306285366s] [1.306285366s] END
Dec 09 09:31:58 pi4blue k3s[356]: I1209 09:31:58.897266     356 trace.go:205] Trace[1191139892]: "Get" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-66c464876b,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:31:57.376) (total time: 1520ms):
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1191139892]: ---"About to write a response" 1519ms (09:31:00.896)
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1191139892]: [1.520225685s] [1.520225685s] END
Dec 09 09:31:58 pi4blue k3s[356]: I1209 09:31:58.898720     356 trace.go:205] Trace[2086245023]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (09-Dec-2020 09:31:57.788) (total time: 1109ms):
Dec 09 09:31:58 pi4blue k3s[356]: Trace[2086245023]: [1.109638547s] [1.109638547s] END
Dec 09 09:31:58 pi4blue k3s[356]: I1209 09:31:58.902165     356 trace.go:205] Trace[1745018372]: "List" url:/apis/batch/v1/jobs,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (09-Dec-2020 09:31:57.788) (total time: 1113ms):
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1745018372]: ---"Listing from storage done" 1110ms (09:31:00.899)
Dec 09 09:31:58 pi4blue k3s[356]: Trace[1745018372]: [1.113120413s] [1.113120413s] END
Dec 09 09:31:58 pi4blue k3s[356]: time="2020-12-09T09:31:58.934864927Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:59 pi4blue k3s[356]: time="2020-12-09T09:31:59.237281362Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:31:59 pi4blue k3s[356]: time="2020-12-09T09:31:59.447756779Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:59 pi4blue k3s[356]: time="2020-12-09T09:31:59.523680437Z" level=error msg="error in txn: context canceled"
Dec 09 09:31:59 pi4blue k3s[356]: time="2020-12-09T09:31:59.601539661Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:32:00 pi4blue k3s[356]: time="2020-12-09T09:32:00.180730315Z" level=error msg="error in txn: database is locked"
Dec 09 09:32:00 pi4blue k3s[356]: E1209 09:32:00.181841     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:32:00 pi4blue k3s[356]: I1209 09:32:00.183909     356 trace.go:205] Trace[1622025784]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:31:55.138) (total time: 5044ms):
Dec 09 09:32:00 pi4blue k3s[356]: Trace[1622025784]: [5.044710061s] [5.044710061s] END
Dec 09 09:32:00 pi4blue k3s[356]: E1209 09:32:00.198104     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"arm-exporter.164f0278cba38b8d", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"monitoring", Name:"arm-exporter", UID:"de528b85-214b-410d-8a5e-f6ebf7c2ee86", APIVersion:"v1", ResourceVersion:"1884", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service monitoring/arm-exporter: failed to update arm-exporter-llbvn EndpointSlice for Service monitoring/arm-exporter: context deadline exceeded", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec4441717e998d, ext:98543142594, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec4441717e998d, ext:98543142594, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:32:00 pi4blue k3s[356]: time="2020-12-09T09:32:00.425221559Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:32:01 pi4blue k3s[356]: time="2020-12-09T09:32:01.425653290Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:32:01 pi4blue k3s[356]: I1209 09:32:01.456878     356 trace.go:205] Trace[1453297956]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:31:27.456) (total time: 34000ms):
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5038ms (09:31:00.496)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5249ms (09:31:00.747)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5214ms (09:31:00.964)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5187ms (09:31:00.153)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5054ms (09:31:00.210)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: ---"Transaction committed" 5020ms (09:31:00.232)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1453297956]: [34.000443975s] [34.000443975s] END
Dec 09 09:32:01 pi4blue k3s[356]: I1209 09:32:01.457787     356 trace.go:205] Trace[157549684]: "Update" url:/api/v1/namespaces/kube-system/endpoints/metrics-server,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:31:27.455) (total time: 34001ms):
Dec 09 09:32:01 pi4blue k3s[356]: Trace[157549684]: [34.001790501s] [34.001790501s] END
Dec 09 09:32:01 pi4blue k3s[356]: I1209 09:32:01.474428     356 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/metrics-server: Timeout: request did not complete within requested timeout 34s"
Dec 09 09:32:01 pi4blue k3s[356]: I1209 09:32:01.892400     356 trace.go:205] Trace[813050584]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:51.906) (total time: 9985ms):
Dec 09 09:32:01 pi4blue k3s[356]: Trace[813050584]: ---"Transaction committed" 5222ms (09:31:00.129)
Dec 09 09:32:01 pi4blue k3s[356]: Trace[813050584]: [9.985990579s] [9.985990579s] END
Dec 09 09:32:01 pi4blue k3s[356]: E1209 09:32:01.894383     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:01 pi4blue k3s[356]: I1209 09:32:01.896719     356 trace.go:205] Trace[1986466837]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4red,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:51.905) (total time: 9990ms):
Dec 09 09:32:01 pi4blue k3s[356]: Trace[1986466837]: [9.990585919s] [9.990585919s] END
Dec 09 09:32:02 pi4blue k3s[356]: I1209 09:32:02.488140     356 trace.go:205] Trace[235989263]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:58.345) (total time: 4142ms):
Dec 09 09:32:02 pi4blue k3s[356]: Trace[235989263]: ---"About to write a response" 4142ms (09:32:00.487)
Dec 09 09:32:02 pi4blue k3s[356]: Trace[235989263]: [4.142579659s] [4.142579659s] END
Dec 09 09:32:02 pi4blue k3s[356]: I1209 09:32:02.565242     356 trace.go:205] Trace[1665176176]: "GuaranteedUpdate etcd3" type:*apps.Deployment (09-Dec-2020 09:31:57.527) (total time: 5037ms):
Dec 09 09:32:02 pi4blue k3s[356]: Trace[1665176176]: ---"Transaction committed" 5021ms (09:32:00.551)
Dec 09 09:32:02 pi4blue k3s[356]: Trace[1665176176]: [5.037552995s] [5.037552995s] END
Dec 09 09:32:02 pi4blue k3s[356]: I1209 09:32:02.565726     356 trace.go:205] Trace[1829832487]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/metrics-server/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (09-Dec-2020 09:31:57.527) (total time: 5038ms):
Dec 09 09:32:02 pi4blue k3s[356]: Trace[1829832487]: [5.038553546s] [5.038553546s] END
Dec 09 09:32:03 pi4blue k3s[356]: time="2020-12-09T09:32:03.282432515Z" level=error msg="error in txn: context canceled"
Dec 09 09:32:03 pi4blue k3s[356]: time="2020-12-09T09:32:03.902961916Z" level=error msg="error in txn: database is locked"
Dec 09 09:32:03 pi4blue k3s[356]: time="2020-12-09T09:32:03.903196207Z" level=error msg="error in txn: context canceled"
Dec 09 09:32:03 pi4blue k3s[356]: E1209 09:32:03.903579     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:32:03 pi4blue k3s[356]: I1209 09:32:03.904738     356 trace.go:205] Trace[1258254225]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:31:55.227) (total time: 8676ms):
Dec 09 09:32:03 pi4blue k3s[356]: Trace[1258254225]: [8.676440669s] [8.676440669s] END
Dec 09 09:32:03 pi4blue k3s[356]: E1209 09:32:03.912192     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"prometheus-operator.164f0278e4582fc0", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Endpoints", Namespace:"monitoring", Name:"prometheus-operator", UID:"41cf8eeb-7024-480d-a76e-7b4a2b3e5b83", APIVersion:"v1", ResourceVersion:"14240", FieldPath:""}, Reason:"FailedToUpdateEndpoint", Message:"Failed to update endpoint monitoring/prometheus-operator: context deadline exceeded", Source:v1.EventSource{Component:"endpoint-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec44418e9873c0, ext:98957633626, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec44418e9873c0, ext:98957633626, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:32:05 pi4blue k3s[356]: I1209 09:32:05.105964     356 trace.go:205] Trace[1718029274]: "GuaranteedUpdate etcd3" type:*apps.StatefulSet (09-Dec-2020 09:31:50.121) (total time: 14984ms):
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1718029274]: ---"Transaction committed" 6292ms (09:31:00.422)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1718029274]: ---"Transaction committed" 8670ms (09:32:00.104)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1718029274]: [14.984485962s] [14.984485962s] END
Dec 09 09:32:05 pi4blue k3s[356]: I1209 09:32:05.112824     356 trace.go:205] Trace[1391906868]: "Update" url:/apis/apps/v1/namespaces/monitoring/statefulsets/prometheus-k8s/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:statefulset-controller,client:127.0.0.1 (09-Dec-2020 09:31:50.119) (total time: 14991ms):
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1391906868]: ---"Object stored in database" 14985ms (09:32:00.106)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1391906868]: [14.991614801s] [14.991614801s] END
Dec 09 09:32:05 pi4blue k3s[356]: I1209 09:32:05.147123     356 trace.go:205] Trace[1429333598]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:31:50.198) (total time: 14948ms):
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1429333598]: ---"Transaction committed" 5023ms (09:31:00.223)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1429333598]: ---"Transaction committed" 5057ms (09:32:00.283)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1429333598]: ---"Transaction committed" 4861ms (09:32:00.146)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[1429333598]: [14.948486724s] [14.948486724s] END
Dec 09 09:32:05 pi4blue k3s[356]: I1209 09:32:05.147863     356 trace.go:205] Trace[704828029]: "Update" url:/api/v1/namespaces/monitoring/endpoints/kube-state-metrics,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:31:50.198) (total time: 14949ms):
Dec 09 09:32:05 pi4blue k3s[356]: Trace[704828029]: ---"Object stored in database" 14949ms (09:32:00.147)
Dec 09 09:32:05 pi4blue k3s[356]: Trace[704828029]: [14.949630401s] [14.949630401s] END
Dec 09 09:32:05 pi4blue k3s[356]: time="2020-12-09T09:32:05.224816220Z" level=error msg="error in txn: database is locked"
Dec 09 09:32:05 pi4blue k3s[356]: E1209 09:32:05.225688     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:32:05 pi4blue k3s[356]: I1209 09:32:05.226832     356 trace.go:205] Trace[260463645]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:32:00.208) (total time: 5018ms):
Dec 09 09:32:05 pi4blue k3s[356]: Trace[260463645]: [5.018012364s] [5.018012364s] END
Dec 09 09:32:05 pi4blue k3s[356]: E1209 09:32:05.228772     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-exporter.164f0278cd28d3b8", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"monitoring", Name:"node-exporter", UID:"b913a44c-fb0a-45a6-896b-bd256274d0fb", APIVersion:"v1", ResourceVersion:"2068", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service monitoring/node-exporter: failed to update node-exporter-5lr68 EndpointSlice for Service monitoring/node-exporter: context deadline exceeded", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec44417303e1b8, ext:98568654998, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec44417303e1b8, ext:98568654998, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.030077     356 trace.go:205] Trace[54935474]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:31:51.922) (total time: 14107ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[54935474]: ---"Transaction committed" 5189ms (09:31:00.121)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[54935474]: ---"Transaction committed" 5061ms (09:32:00.190)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[54935474]: ---"Transaction committed" 3821ms (09:32:00.029)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[54935474]: [14.107085658s] [14.107085658s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.031832     356 trace.go:205] Trace[517607734]: "Patch" url:/api/v1/namespaces/monitoring/pods/node-exporter-hcwdr/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:31:51.922) (total time: 14109ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[517607734]: ---"About to apply patch" 5190ms (09:31:00.121)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[517607734]: ---"About to apply patch" 5062ms (09:32:00.190)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[517607734]: ---"Object stored in database" 3823ms (09:32:00.030)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[517607734]: [14.109162648s] [14.109162648s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.039170     356 trace.go:205] Trace[1876942663]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (09-Dec-2020 09:31:43.936) (total time: 22102ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1876942663]: ---"Transaction committed" 6265ms (09:31:00.206)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1876942663]: ---"Transaction committed" 5028ms (09:31:00.238)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1876942663]: ---"Transaction committed" 5033ms (09:32:00.288)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1876942663]: ---"Transaction committed" 5746ms (09:32:00.038)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1876942663]: [22.102836698s] [22.102836698s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.049897     356 trace.go:205] Trace[640491129]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/traefik-5dd496474/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (09-Dec-2020 09:31:43.932) (total time: 22117ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[640491129]: ---"Object stored in database" 22103ms (09:32:00.039)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[640491129]: [22.117596643s] [22.117596643s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.517299     356 trace.go:205] Trace[1178585935]: "GuaranteedUpdate etcd3" type:*apiregistration.APIService (09-Dec-2020 09:31:53.227) (total time: 13289ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1178585935]: ---"Transaction committed" 5015ms (09:31:00.243)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1178585935]: ---"Transaction committed" 5041ms (09:32:00.286)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1178585935]: ---"Transaction committed" 3229ms (09:32:00.516)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1178585935]: [13.2899665s] [13.2899665s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.520441     356 trace.go:205] Trace[1537846416]: "Update" url:/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:53.226) (total time: 13293ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1537846416]: ---"Object stored in database" 13290ms (09:32:00.517)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1537846416]: [13.293669453s] [13.293669453s] END
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.523742     356 available_controller.go:490] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.190.210:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.190.210:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.190.210:443: connect: no route to host
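The `no route to host` for 10.43.190.210 (a service IP in the default k3s service CIDR) is the apiserver's aggregation layer failing to reach the metrics-server pod across the cluster network; that would also explain the repeated `metrics.k8s.io/v1beta1: the server is currently unable to handle the request` discovery and garbage-collector warnings. The aggregated API's availability can be inspected once the cluster responds (sketch):

```
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
# check status.conditions for the Available condition and its message
```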
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.606275     356 trace.go:205] Trace[548838029]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Dec-2020 09:32:02.586) (total time: 4020ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[548838029]: ---"Transaction committed" 4016ms (09:32:00.606)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[548838029]: [4.020152774s] [4.020152774s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.656925     356 trace.go:205] Trace[1657774340]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:31:57.133) (total time: 9522ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1657774340]: ---"Transaction committed" 5054ms (09:32:00.190)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1657774340]: [9.522866142s] [9.522866142s] END
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.657019     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.656918     356 controller.go:178] failed to update node lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.659222     356 trace.go:205] Trace[1208574385]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4blue,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:57.120) (total time: 9538ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1208574385]: [9.538316497s] [9.538316497s] END
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.748649     356 event.go:273] Unable to write event: 'Patch "https://127.0.0.1:6443/api/v1/namespaces/monitoring/events/prometheus-adapter-585b57857b-s4t9n.164f00e171619609": read tcp 127.0.0.1:40896->127.0.0.1:6443: use of closed network connection' (may retry after sleeping)
Dec 09 09:32:06 pi4blue k3s[356]: W1209 09:32:06.754227     356 status_manager.go:566] Failed to update status for pod "local-path-provisioner-7ff9579c6-q96pm_kube-system(1e7d62e4-a24f-415b-85ae-0e89586e3fcc)": failed to patch status "{\"metadata\":{\"uid\":\"1e7d62e4-a24f-415b-85ae-0e89586e3fcc\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2020-12-09T09:31:58Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2020-12-09T09:31:58Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"containerd://7ba30d4ff4de9810ca9bd5ff4e8a0667e568e67aa038ee32caff856f7ad6ef47\",\"image\":\"docker.io/rancher/local-path-provisioner:v0.0.14\",\"imageID\":\"docker.io/rancher/local-path-provisioner@sha256:40cb8c984c1759f1860eee088035040f47051c959a6d07cdb126e132c6f43b45\",\"lastState\":{\"terminated\":{\"containerID\":\"containerd://f7b46bc566368b3b160255eadf30bddbf1b03f3c316a34984472eeb0e0876872\",\"exitCode\":255,\"finishedAt\":\"2020-12-09T09:02:47Z\",\"reason\":\"Unknown\",\"startedAt\":\"2020-12-09T08:44:15Z\"}},\"name\":\"local-path-provisioner\",\"ready\":true,\"restartCount\":6,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2020-12-09T09:31:53Z\"}}}],\"podIP\":\"10.42.0.56\",\"podIPs\":[{\"ip\":\"10.42.0.56\"}]}}" for pod "kube-system"/"local-path-provisioner-7ff9579c6-q96pm": Patch "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/local-path-provisioner-7ff9579c6-q96pm/status": read tcp 127.0.0.1:40884->127.0.0.1:6443: use of closed network connection
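The `use of closed network connection` failures look like secondary fallout: the kubelet's loopback client to 127.0.0.1:6443 times out and closes the connection while the apiserver is still blocked on the datastore, so the eventual response has nowhere to go. For triage it can help to isolate just the datastore errors and see how long each locked burst lasts (sketch):

```
journalctl -u k3s --since "2020-12-09 09:31" --until "2020-12-09 09:33" | grep 'error in txn'
```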
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.799491     356 trace.go:205] Trace[1027229089]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:32:05.231) (total time: 1568ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1027229089]: [1.568217098s] [1.568217098s] END
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.799563     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.800395     356 trace.go:205] Trace[84745642]: "Patch" url:/api/v1/namespaces/monitoring/events/prometheus-adapter-585b57857b-s4t9n.164f00e171619609,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:32:05.231) (total time: 1569ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[84745642]: [1.569320906s] [1.569320906s] END
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.860099     356 trace.go:205] Trace[1041413565]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:31:59.060) (total time: 7799ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1041413565]: ---"Transaction committed" 5015ms (09:32:00.083)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[1041413565]: [7.799523292s] [7.799523292s] END
Dec 09 09:32:06 pi4blue k3s[356]: E1209 09:32:06.861975     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:06 pi4blue k3s[356]: I1209 09:32:06.864323     356 trace.go:205] Trace[717208895]: "Patch" url:/api/v1/namespaces/kube-system/pods/local-path-provisioner-7ff9579c6-q96pm/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:31:59.059) (total time: 7804ms):
Dec 09 09:32:06 pi4blue k3s[356]: Trace[717208895]: ---"About to apply patch" 5016ms (09:32:00.084)
Dec 09 09:32:06 pi4blue k3s[356]: Trace[717208895]: [7.804894467s] [7.804894467s] END
Dec 09 09:32:07 pi4blue k3s[356]: time="2020-12-09T09:32:07.296396797Z" level=error msg="error in txn: context canceled"
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.321802     356 trace.go:205] Trace[670337568]: "GuaranteedUpdate etcd3" type:*core.Endpoints (09-Dec-2020 09:32:01.484) (total time: 5836ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[670337568]: ---"Transaction committed" 5833ms (09:32:00.321)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[670337568]: [5.83677707s] [5.83677707s] END
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.322052     356 trace.go:205] Trace[1929025112]: "Update" url:/api/v1/namespaces/monitoring/endpoints/prometheus-operated,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:32:01.483) (total time: 5837ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[1929025112]: ---"Object stored in database" 5837ms (09:32:00.321)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[1929025112]: [5.83799695s] [5.83799695s] END
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.331109     356 trace.go:205] Trace[217661051]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:31:49.827) (total time: 17503ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[217661051]: ---"Transaction committed" 5267ms (09:31:00.098)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[217661051]: ---"Transaction committed" 5039ms (09:32:00.141)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[217661051]: ---"Transaction committed" 7185ms (09:32:00.330)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[217661051]: [17.503196752s] [17.503196752s] END
Dec 09 09:32:07 pi4blue k3s[356]: W1209 09:32:07.331772     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/poda01d2cc3-caac-4863-b936-63352516cf0b/9d8f4d7e5625425ff59b98fb817d3abd3aafd722a3f38aed277ed22d1c9d159f\""
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.332177     356 trace.go:205] Trace[1010976838]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/monitoring/endpointslices/kube-state-metrics-vfqdp,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:31:49.826) (total time: 17504ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[1010976838]: ---"Object stored in database" 17503ms (09:32:00.331)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[1010976838]: [17.504825363s] [17.504825363s] END
Dec 09 09:32:07 pi4blue k3s[356]: time="2020-12-09T09:32:07.374755069Z" level=error msg="error in txn: context canceled"
Dec 09 09:32:07 pi4blue k3s[356]: W1209 09:32:07.382990     356 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod1ffdff58-431c-4926-a3f9-a6c0791ce502/ae11512dc22cc0c9f08982c1d9dbcf502a9dd41b9afc20073107c600ee830c28\""
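These perf_event warnings appear to be harmless cadvisor noise about a missing `perf_event` cgroup path for exited containers, unrelated to the datastore problem. Which cgroup controllers the Pi kernel actually exposes can be listed with (sketch):

```
cat /proc/cgroups    # enabled controllers and hierarchy IDs
ls /sys/fs/cgroup    # mounted v1 controller hierarchies
```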
Dec 09 09:32:07 pi4blue k3s[356]: W1209 09:32:07.524993     356 handler_proxy.go:102] no RequestInfo found in the context
Dec 09 09:32:07 pi4blue k3s[356]: E1209 09:32:07.525263     356 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Dec 09 09:32:07 pi4blue k3s[356]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.525311     356 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.593290     356 trace.go:205] Trace[103155718]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:31:49.860) (total time: 17732ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[103155718]: ---"Transaction committed" 5247ms (09:31:00.111)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[103155718]: ---"Transaction committed" 5040ms (09:32:00.154)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[103155718]: ---"Transaction committed" 5025ms (09:32:00.183)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[103155718]: ---"Transaction committed" 2406ms (09:32:00.592)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[103155718]: [17.732596719s] [17.732596719s] END
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.593895     356 trace.go:205] Trace[947400089]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-2q27b,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:31:49.859) (total time: 17733ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[947400089]: ---"Object stored in database" 17733ms (09:32:00.593)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[947400089]: [17.733843635s] [17.733843635s] END
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.687699     356 trace.go:205] Trace[942359644]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (09-Dec-2020 09:31:33.685) (total time: 34001ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5059ms (09:31:00.747)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5439ms (09:31:00.188)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5210ms (09:31:00.401)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5016ms (09:31:00.420)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5017ms (09:31:00.441)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: ---"Transaction committed" 5020ms (09:32:00.463)
Dec 09 09:32:07 pi4blue k3s[356]: Trace[942359644]: [34.001382774s] [34.001382774s] END
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.689047     356 trace.go:205] Trace[1576292864]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-prometheus-discovery-pkn7f,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (09-Dec-2020 09:31:33.685) (total time: 34003ms):
Dec 09 09:32:07 pi4blue k3s[356]: Trace[1576292864]: [34.003397306s] [34.003397306s] END
Dec 09 09:32:07 pi4blue k3s[356]: W1209 09:32:07.692150     356 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/kube-dns-prometheus-discovery", retrying. Error: failed to update kube-dns-prometheus-discovery-pkn7f EndpointSlice for Service kube-system/kube-dns-prometheus-discovery: Timeout: request did not complete within requested timeout 34s
Dec 09 09:32:07 pi4blue k3s[356]: I1209 09:32:07.694548     356 event.go:291] "Event occurred" object="kube-system/kube-dns-prometheus-discovery" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/kube-dns-prometheus-discovery: failed to update kube-dns-prometheus-discovery-pkn7f EndpointSlice for Service kube-system/kube-dns-prometheus-discovery: Timeout: request did not complete within requested timeout 34s"
Dec 09 09:32:08 pi4blue k3s[356]: I1209 09:32:08.899625     356 trace.go:205] Trace[1824253354]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:127.0.0.1 (09-Dec-2020 09:32:08.359) (total time: 540ms):
Dec 09 09:32:08 pi4blue k3s[356]: Trace[1824253354]: ---"About to write a response" 540ms (09:32:00.899)
Dec 09 09:32:08 pi4blue k3s[356]: Trace[1824253354]: [540.433926ms] [540.433926ms] END
Dec 09 09:32:09 pi4blue k3s[356]: time="2020-12-09T09:32:09.059463266Z" level=error msg="error in txn: database is locked"
Dec 09 09:32:09 pi4blue k3s[356]: E1209 09:32:09.064211     356 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{Code:2, Message:"database is locked", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
Dec 09 09:32:09 pi4blue k3s[356]: I1209 09:32:09.066209     356 trace.go:205] Trace[147921242]: "Create" url:/api/v1/namespaces/monitoring/events,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:endpoint-controller,client:127.0.0.1 (09-Dec-2020 09:32:03.913) (total time: 5152ms):
Dec 09 09:32:09 pi4blue k3s[356]: Trace[147921242]: [5.152120726s] [5.152120726s] END
Dec 09 09:32:09 pi4blue k3s[356]: W1209 09:32:09.131559     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/podd0863078-94a1-40fb-9f50-44170f25dd25/256472532f0e85d5534d901160e71d9c14c5321fdfb900c655f73ae60692e2a6 WatchSource:0}: task 256472532f0e85d5534d901160e71d9c14c5321fdfb900c655f73ae60692e2a6 not found: not found
Dec 09 09:32:09 pi4blue k3s[356]: E1209 09:32:09.137444     356 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"alertmanager-main.164f027a7ee0b84a", GenerateName:"", Namespace:"monitoring", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Endpoints", Namespace:"monitoring", Name:"alertmanager-main", UID:"80d5bdf9-11f9-4cb3-9ca1-750665ceefb2", APIVersion:"v1", ResourceVersion:"14173", FieldPath:""}, Reason:"FailedToUpdateEndpoint", Message:"Failed to update endpoint monitoring/alertmanager-main: Timeout: request did not complete within requested timeout 34s", Source:v1.EventSource{Component:"endpoint-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfec444347e5764a, ext:105845240054, loc:(*time.Location)(0x57bdca8)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfec444347e5764a, ext:105845240054, loc:(*time.Location)(0x57bdca8)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = database is locked' (will not retry!)
Dec 09 09:32:09 pi4blue k3s[356]: time="2020-12-09T09:32:09.193105703Z" level=error msg="error in txn: context canceled"
Dec 09 09:32:09 pi4blue k3s[356]: E1209 09:32:09.630233     356 available_controller.go:490] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.190.210:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.190.210:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.190.210:443: connect: no route to host
Dec 09 09:32:09 pi4blue k3s[356]: time="2020-12-09T09:32:09.644891515Z" level=error msg="error in txn: context deadline exceeded"
Dec 09 09:32:10 pi4blue k3s[356]: W1209 09:32:10.664457     356 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod072a9c2f-9d13-47c1-8d77-d2fe3544912c/c31f7b8c5350ff9399133a78a17876706022e02b0b42c605239282bc30c99813 WatchSource:0}: task c31f7b8c5350ff9399133a78a17876706022e02b0b42c605239282bc30c99813 not found: not found
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.583449     356 trace.go:205] Trace[642711330]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (09-Dec-2020 09:31:50.120) (total time: 21462ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[642711330]: ---"Transaction committed" 5021ms (09:31:00.146)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[642711330]: ---"Transaction committed" 8779ms (09:32:00.933)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[642711330]: ---"Transaction committed" 7646ms (09:32:00.582)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[642711330]: [21.462733791s] [21.462733791s] END
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.586229     356 trace.go:205] Trace[227435004]: "Update" url:/apis/apps/v1/namespaces/monitoring/daemonsets/arm-exporter/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:daemon-set-controller,client:127.0.0.1 (09-Dec-2020 09:31:50.119) (total time: 21466ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[227435004]: ---"Object stored in database" 21463ms (09:32:00.583)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[227435004]: [21.466207083s] [21.466207083s] END
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.689166     356 trace.go:205] Trace[53979361]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (09-Dec-2020 09:32:08.969) (total time: 2719ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[53979361]: [2.719917662s] [2.719917662s] END
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.690855     356 trace.go:205] Trace[218643880]: "List" url:/apis/batch/v1/jobs,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (09-Dec-2020 09:32:08.968) (total time: 2721ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[218643880]: ---"Listing from storage done" 2720ms (09:32:00.689)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[218643880]: [2.72174714s] [2.72174714s] END
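Taken together, this stretch of log points at the embedded SQLite datastore stalling on SD-card I/O after the reboot, with the apiserver, controllers, and kubelets all timing out behind it. A hedged mitigation (not verified on this cluster) would be to run the server against an external datastore, which k3s supports via `--datastore-endpoint`; the endpoint below is a placeholder:

```
# hypothetical MySQL DSN; k3s also accepts Postgres and etcd endpoints
curl -sfL https://get.k3s.io | sh -s - server \
    --datastore-endpoint='mysql://user:pass@tcp(192.168.1.10:3306)/k3s'
```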
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.914857     356 trace.go:205] Trace[1046721813]: "GuaranteedUpdate etcd3" type:*coordination.Lease (09-Dec-2020 09:32:01.987) (total time: 9926ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1046721813]: ---"Transaction committed" 7012ms (09:32:00.003)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1046721813]: [9.926828442s] [9.926828442s] END
Dec 09 09:32:11 pi4blue k3s[356]: E1209 09:32:11.916695     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.923735     356 trace.go:205] Trace[1384475009]: "GuaranteedUpdate etcd3" type:*core.Event (09-Dec-2020 09:32:01.909) (total time: 10014ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1384475009]: ---"initial value restored" 1937ms (09:32:00.846)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1384475009]: ---"Transaction committed" 5148ms (09:32:00.996)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1384475009]: [10.014477892s] [10.014477892s] END
Dec 09 09:32:11 pi4blue k3s[356]: E1209 09:32:11.924107     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.925903     356 trace.go:205] Trace[1848310168]: "Patch" url:/api/v1/namespaces/monitoring/events/arm-exporter-k5hxx.164f00df0e1f9093,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:32:01.908) (total time: 10016ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1848310168]: ---"About to apply patch" 1937ms (09:32:00.846)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1848310168]: ---"About to apply patch" 5149ms (09:32:00.996)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[1848310168]: [10.016907652s] [10.016907652s] END
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.933496     356 trace.go:205] Trace[397945365]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Dec-2020 09:32:06.136) (total time: 5796ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[397945365]: ---"Transaction committed" 5183ms (09:32:00.329)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[397945365]: [5.796471646s] [5.796471646s] END
Dec 09 09:32:11 pi4blue k3s[356]: E1209 09:32:11.933770     356 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.935041     356 trace.go:205] Trace[136183530]: "Patch" url:/api/v1/namespaces/monitoring/pods/alertmanager-main-0/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:32:06.136) (total time: 5798ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[136183530]: ---"About to apply patch" 5184ms (09:32:00.330)
Dec 09 09:32:11 pi4blue k3s[356]: Trace[136183530]: [5.798441102s] [5.798441102s] END
Dec 09 09:32:11 pi4blue k3s[356]: I1209 09:32:11.974319     356 trace.go:205] Trace[856003423]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pi4red,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10,client:192.168.1.143 (09-Dec-2020 09:32:01.986) (total time: 9986ms):
Dec 09 09:32:11 pi4blue k3s[356]: Trace[856003423]: [9.986861388s] [9.986861388s] END
Dec 09 09:32:12 pi4blue k3s[356]: I1209 09:32:12.228398     356 trace.go:205] Trace[1176465743]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (09-Dec-2020 09:31:50.107) (total time: 22120ms):
Dec 09 09:32:12 pi4blue k3s[356]: Trace[1176465743]: ---"Transaction committed" 5026ms (09:31:00.137)
Dec 09 09:32:12 pi4blue k3s[356]: Trace[1176465743]: ---"Transaction committed" 5059ms (09:32:00.211)
Dec 09 09:32:12 pi4blue k3s[356]: Trace[1176465743]: ---"Transaction committed" 5024ms (09:32:00.244)
Dec 09 09:32:12 pi4blue k3s[356]: Trace[1176465743]: ---"Transaction committed" 6980ms (09:32:00.227)
Dec 09 09:32:12 pi4blue k3s[356]: Trace[1176465743]: [22.120938557s] [22.120938557s] END
Dec 09 09:32:12 pi4blue k3s[356]: I1209 09:32:12.237237     356 trace.go:205] Trace[218045082]: "Update" url:/apis/apps/v1/namespaces/kube-system/daemonsets/svclb-traefik/status,user-agent:k3s/v1.19.4+k3s1 (linux/arm) kubernetes/2532c10/system:serviceaccount:kube-system:daemon-set-controller,client:127.0.0.1 (09-Dec-2020 09:31:50.106) (total time: 22130ms):
Dec 09 09:32:12 pi4blue k3s[356]: Trace[218045082]: ---"Object stored in database" 22122ms (09:32:00.229)
Dec 09 09:32:12 pi4blue k3s[356]: Trace[218045082]: [22.130852038s] [22.130852038s] END
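
The repeated `database is locked` errors and the 20s+ `Transaction committed` traces above suggest the embedded SQLite (kine) datastore is stalling on SD-card I/O while the server works through its post-reboot backlog, which would also explain the solid green activity LED and the slow SSH sessions. A minimal way to check this on the server node, assuming the default k3s data directory and that the `sysstat` package is installed for `iostat`:

```bash
# Size of the kine/SQLite datastore and its WAL files; large files on a
# microSD card make post-reboot replays very slow (default k3s path).
sudo ls -lh /var/lib/rancher/k3s/server/db/

# Watch device utilisation for ~10s; %util pinned near 100 on mmcblk0
# means the SD card itself is saturated (iostat comes from sysstat).
iostat -x 2 5

# Check the kernel log for SD-card I/O errors.
sudo dmesg | grep -iE 'mmc|i/o error'
```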


Worker (pi4red)

-- Logs begin at Thu 2019-02-14 10:11:59 UTC, end at Wed 2020-12-09 09:24:52 UTC. --
Dec 09 09:01:49 pi4red systemd[1]: Starting Lightweight Kubernetes...
Dec 09 09:01:49 pi4red systemd[1]: Started Lightweight Kubernetes.
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.158791315Z" level=info msg="Starting k3s agent v1.19.4+k3s1 (2532c10f)"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.163746659Z" level=info msg="Module overlay was already loaded"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.164193744Z" level=info msg="Module nf_conntrack was already loaded"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.164485628Z" level=info msg="Module br_netfilter was already loaded"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.179363901Z" level=info msg="Running load balancer 127.0.0.1:40711 -> [192.168.1.142:6443]"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.770175095Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Dec 09 09:02:22 pi4red k3s[376]: time="2020-12-09T09:02:22.771008100Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Dec 09 09:02:23 pi4red k3s[376]: time="2020-12-09T09:02:23.778315697Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
Dec 09 09:02:24 pi4red k3s[376]: time="2020-12-09T09:02:24.801177020Z" level=info msg="Containerd is now running"
Dec 09 09:02:24 pi4red k3s[376]: time="2020-12-09T09:02:24.881426029Z" level=info msg="Connecting to proxy" url="wss://192.168.1.142:6443/v1-k3s/connect"
Dec 09 09:02:24 pi4red k3s[376]: time="2020-12-09T09:02:24.922437482Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Dec 09 09:02:24 pi4red k3s[376]: time="2020-12-09T09:02:24.922719792Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/b46300d70fe21c458e9a951f12a5c6dd86eb7cf2d0b213bb9ad07dbad435207e/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pi4red --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Dec 09 09:02:24 pi4red k3s[376]: time="2020-12-09T09:02:24.932401651Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=pi4red --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Dec 09 09:02:24 pi4red k3s[376]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Dec 09 09:02:24 pi4red k3s[376]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Dec 09 09:02:24 pi4red k3s[376]: W1209 09:02:24.939257     376 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.000627     376 server.go:407] Version: v1.19.4+k3s1
Dec 09 09:02:25 pi4red k3s[376]: time="2020-12-09T09:02:25.048670917Z" level=info msg="Node CIDR assigned for: pi4red"
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.048890     376 flannel.go:92] Determining IP address of default interface
Dec 09 09:02:25 pi4red k3s[376]: E1209 09:02:25.053559     376 node.go:125] Failed to retrieve node info: nodes "pi4red" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.061631     376 flannel.go:105] Using interface with name eth0 and address 192.168.1.143
Dec 09 09:02:25 pi4red k3s[376]: time="2020-12-09T09:02:25.071517873Z" level=info msg="labels have already set on node: pi4red"
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.077638     376 kube.go:300] Starting kube subnet manager
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.079826     376 kube.go:117] Waiting 10m0s for node controller to sync
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.218601     376 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.455983     376 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.458208     376 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.458772     376 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.458855     376 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.458937     376 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.459012     376 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.459521     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.459588     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.459634     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.459680     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: E1209 09:02:25.459700     376 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.460033     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.460093     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.460136     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.460178     376 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Dec 09 09:02:25 pi4red k3s[376]: E1209 09:02:25.460198     376 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.524604     376 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.525760     376 container_manager_linux.go:289] container manager verified user specified cgroup-root exists: []
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.526395     376 container_manager_linux.go:294] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.530796     376 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.531239     376 container_manager_linux.go:324] [topologymanager] Initializing Topology Manager with none policy
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.531550     376 container_manager_linux.go:329] Creating device plugin manager: true
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.532596     376 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.536037     376 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.536740     376 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.537242     376 kubelet.go:273] Watching apiserver
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.565912     376 kuberuntime_manager.go:214] Container runtime containerd initialized, version: v1.4.1-k3s1, apiVersion: v1alpha2
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.581705     376 server.go:1148] Started kubelet
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.583159     376 server.go:152] Starting to listen on 0.0.0.0:10250
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.595211     376 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 09 09:02:25 pi4red k3s[376]: E1209 09:02:25.603549     376 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Dec 09 09:02:25 pi4red k3s[376]: E1209 09:02:25.603671     376 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.607478     376 volume_manager.go:265] Starting Kubelet Volume Manager
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.611057     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5e60db94ec19b77d47695c269828971684fe455ab22b074ce6bd120dc81f232
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.611524     376 server.go:424] Adding debug handlers to kubelet server.
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.635273     376 desired_state_of_world_populator.go:139] Desired state populator starts to run
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.678205     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ca6b8aaf1329e4ef31c3c0e1688af17dd657ea23f192f48ab938931d5500456a
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.712580     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6d489b3f0b69c481362010fec7322fcff3f9498d51c7ec6c36daf696629550c4
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.715874     376 kuberuntime_manager.go:992] updating runtime config through cri with podcidr 10.42.2.0/24
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.727145     376 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.2.0/24
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.751999     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 68096add5d733e14dc4d5d114b47b16bcd2e46d7d5a75aa4b35ffab6daf04ac3
Dec 09 09:02:25 pi4red k3s[376]: W1209 09:02:25.762048     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/\""
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.786176     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 10234175495d0b5900862e490e42ec1a455938e2aa654796baebe4223325af4b
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.813367     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 619093af9b220828ea5e994367b277ce4e76303e1dba676bf292cbfa1a77e6f0
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.842214     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 677502971a28e6608be694edb52d47ed9e52d21a9982edfe003228f3d5d974d8
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.844436     376 kubelet_node_status.go:70] Attempting to register node pi4red
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.866471     376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 249d9025e4c0b1f61b46dff67e9e7224e89d03a60178ba0c11207f97018eef4c
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.995429     376 cpu_manager.go:184] [cpumanager] starting with none policy
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.995489     376 cpu_manager.go:185] [cpumanager] reconciling every 10s
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.995552     376 state_mem.go:36] [cpumanager] initializing new in-memory state store
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.999182     376 state_mem.go:88] [cpumanager] updated default cpuset: ""
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.999249     376 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Dec 09 09:02:25 pi4red k3s[376]: I1209 09:02:25.999291     376 policy_none.go:43] [cpumanager] none policy: Start
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.004192     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods\""
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.007291     376 status_manager.go:158] Starting to sync pod status with apiserver
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.007407     376 kubelet.go:1741] Starting kubelet main sync loop.
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.007738     376 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.011961     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable\""
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.015984     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort\""
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.020509     376 plugin_manager.go:114] Starting Kubelet Plugin Manager
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.027104     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/systemd/system.slice\""
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.080184     376 kube.go:124] Node controller sync successful
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.080386     376 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.101110     376 node.go:125] Failed to retrieve node info: nodes "pi4red" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.108260     376 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.169727     376 reflector.go:127] object-"kube-system"/"default-token-v9pqp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v9pqp" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4red' and this object
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.208223     376 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.212536     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70\""
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.237417     376 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.243638     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998\""
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.245083     376 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable podbd1e51a9-a315-4d53-8a45-a86442bf0998]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/cpu.cfs_period_us: permission denied
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.246567     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-v9pqp" (UniqueName: "kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp") pod "svclb-traefik-qql2l" (UID: "16010812-bd67-472e-a354-d88a3c531c70")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.247352     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "arm-exporter-token-9txj8" (UniqueName: "kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8") pod "arm-exporter-k5hxx" (UID: "bd1e51a9-a315-4d53-8a45-a86442bf0998")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.249100     376 topology_manager.go:233] [topologymanager] Topology Admit Handler
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.254028     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34\""
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.256716     376 cgroup_manager_linux.go:698] cgroup update failed failed to set supported cgroup subsystems for cgroup [kubepods burstable pod726c0300-292b-4b66-81a2-c370e0360f34]: failed to set config for supported subsystems : failed to write "100000" to "/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/cpu.cfs_period_us": open /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/cpu.cfs_period_us: permission denied
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.259896     376 reflector.go:127] object-"monitoring"/"arm-exporter-token-9txj8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "arm-exporter-token-9txj8" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.261319     376 pod_container_deletor.go:79] Container "641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.261445     376 pod_container_deletor.go:79] Container "75b52cf84c33a54fbbcd16662f9163699d59aecd6481ad4b57ea4b88a938d2d5" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.261798     376 pod_container_deletor.go:79] Container "ab140a0db5d3173bed8a8d4063aa3ec56d1469292f5666d2dfc14b03d4e9351a" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.261935     376 pod_container_deletor.go:79] Container "db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.262274     376 pod_container_deletor.go:79] Container "fa3c62d1e88776a90472346f4ebdaadc19234ba8049e3fc25cc54cc29746335a" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.262427     376 pod_container_deletor.go:79] Container "fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.262775     376 pod_container_deletor.go:79] Container "58ad5de99ac80b643ba247b8bc1397301d222c4faa6c49cc43d3aa6bb8829670" not found in pod's containers
Dec 09 09:02:26 pi4red k3s[376]: W1209 09:02:26.264812     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e\""
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.288187     376 reflector.go:127] object-"monitoring"/"node-exporter-token-svqfn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-exporter-token-svqfn" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.326713     376 reflector.go:127] object-"monitoring"/"alertmanager-main": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.347620     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-proc") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.347742     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "alertmanager-main-db" (UniqueName: "kubernetes.io/empty-dir/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-db") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.347851     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "alertmanager-main-token-z4b7q" (UniqueName: "kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.347916     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-root") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.347992     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-exporter-token-svqfn" (UniqueName: "kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.348099     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume") pod "alertmanager-main-0" (UID: "0313bc4a-38b9-42cd-9a95-95ce0698787e")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.348398     376 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/726c0300-292b-4b66-81a2-c370e0360f34-sys") pod "node-exporter-hcwdr" (UID: "726c0300-292b-4b66-81a2-c370e0360f34")
Dec 09 09:02:26 pi4red k3s[376]: I1209 09:02:26.348484     376 reconciler.go:157] Reconciler: start to sync state
Dec 09 09:02:26 pi4red k3s[376]: E1209 09:02:26.384313     376 reflector.go:127] object-"monitoring"/"alertmanager-main-token-z4b7q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main-token-z4b7q" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.348765     376 secret.go:195] Couldn't get secret kube-system/default-token-v9pqp: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.348911     376 secret.go:195] Couldn't get secret monitoring/arm-exporter-token-9txj8: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.349439     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp podName:16010812-bd67-472e-a354-d88a3c531c70 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:27.849170487 +0000 UTC m=+36.443034357 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-v9pqp\" (UniqueName: \"kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp\") pod \"svclb-traefik-qql2l\" (UID: \"16010812-bd67-472e-a354-d88a3c531c70\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.349696     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8 podName:bd1e51a9-a315-4d53-8a45-a86442bf0998 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:27.849560962 +0000 UTC m=+36.443424684 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"arm-exporter-token-9txj8\" (UniqueName: \"kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8\") pod \"arm-exporter-k5hxx\" (UID: \"bd1e51a9-a315-4d53-8a45-a86442bf0998\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.463931     376 secret.go:195] Couldn't get secret monitoring/node-exporter-token-svqfn: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464097     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464358     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn podName:726c0300-292b-4b66-81a2-c370e0360f34 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:27.964184829 +0000 UTC m=+36.558048644 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-svqfn\" (UniqueName: \"kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn\") pod \"node-exporter-hcwdr\" (UID: \"726c0300-292b-4b66-81a2-c370e0360f34\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464421     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main-token-z4b7q: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464555     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:27.96443014 +0000 UTC m=+36.558293825 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464757     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:27.964618174 +0000 UTC m=+36.558481896 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"alertmanager-main-token-z4b7q\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.464912     376 reflector.go:127] object-"monitoring"/"alertmanager-main": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.638047     376 reflector.go:127] object-"kube-system"/"default-token-v9pqp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v9pqp" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4red' and this object
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.802437     376 reflector.go:127] object-"monitoring"/"alertmanager-main-token-z4b7q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main-token-z4b7q" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.819825     376 reflector.go:127] object-"monitoring"/"node-exporter-token-svqfn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-exporter-token-svqfn" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:27 pi4red k3s[376]: E1209 09:02:27.837954     376 reflector.go:127] object-"monitoring"/"arm-exporter-token-9txj8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "arm-exporter-token-9txj8" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.360725     376 node.go:125] Failed to retrieve node info: nodes "pi4red" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.871850     376 secret.go:195] Couldn't get secret monitoring/arm-exporter-token-9txj8: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.872465     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8 podName:bd1e51a9-a315-4d53-8a45-a86442bf0998 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:29.872202777 +0000 UTC m=+38.466066758 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"arm-exporter-token-9txj8\" (UniqueName: \"kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8\") pod \"arm-exporter-k5hxx\" (UID: \"bd1e51a9-a315-4d53-8a45-a86442bf0998\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.872142     376 secret.go:195] Couldn't get secret kube-system/default-token-v9pqp: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.873834     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp podName:16010812-bd67-472e-a354-d88a3c531c70 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:29.873352739 +0000 UTC m=+38.467216572 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"default-token-v9pqp\" (UniqueName: \"kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp\") pod \"svclb-traefik-qql2l\" (UID: \"16010812-bd67-472e-a354-d88a3c531c70\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.972613     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main-token-z4b7q: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.972689     376 secret.go:195] Couldn't get secret monitoring/node-exporter-token-svqfn: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.972877     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.973755     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn podName:726c0300-292b-4b66-81a2-c370e0360f34 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:29.972908207 +0000 UTC m=+38.566772059 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-svqfn\" (UniqueName: \"kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn\") pod \"node-exporter-hcwdr\" (UID: \"726c0300-292b-4b66-81a2-c370e0360f34\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.974208     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:29.973844859 +0000 UTC m=+38.567708581 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"alertmanager-main-token-z4b7q\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:28 pi4red k3s[376]: E1209 09:02:28.974797     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:29.974310221 +0000 UTC m=+38.568173906 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:29 pi4red k3s[376]: E1209 09:02:29.880618     376 reflector.go:127] object-"monitoring"/"arm-exporter-token-9txj8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "arm-exporter-token-9txj8" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:29 pi4red k3s[376]: time="2020-12-09T09:02:29.896581500Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.344806     376 reflector.go:127] object-"kube-system"/"default-token-v9pqp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v9pqp" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'pi4red' and this object
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.415788     376 reflector.go:127] object-"monitoring"/"alertmanager-main": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.536062     376 reflector.go:127] object-"monitoring"/"alertmanager-main-token-z4b7q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "alertmanager-main-token-z4b7q" is forbidden: User "system:node:pi4red" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node 'pi4red' and this object
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.884915     376 secret.go:195] Couldn't get secret monitoring/arm-exporter-token-9txj8: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.885342     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8 podName:bd1e51a9-a315-4d53-8a45-a86442bf0998 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:32.885182629 +0000 UTC m=+41.479046462 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"arm-exporter-token-9txj8\" (UniqueName: \"kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8\") pod \"arm-exporter-k5hxx\" (UID: \"bd1e51a9-a315-4d53-8a45-a86442bf0998\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.885472     376 secret.go:195] Couldn't get secret kube-system/default-token-v9pqp: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.885861     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp podName:16010812-bd67-472e-a354-d88a3c531c70 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:32.885724583 +0000 UTC m=+41.479588342 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"default-token-v9pqp\" (UniqueName: \"kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp\") pod \"svclb-traefik-qql2l\" (UID: \"16010812-bd67-472e-a354-d88a3c531c70\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.985378     376 secret.go:195] Couldn't get secret monitoring/node-exporter-token-svqfn: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.985647     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.985960     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main-token-z4b7q: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.986319     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn podName:726c0300-292b-4b66-81a2-c370e0360f34 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:32.986090538 +0000 UTC m=+41.579954408 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-svqfn\" (UniqueName: \"kubernetes.io/secret/726c0300-292b-4b66-81a2-c370e0360f34-node-exporter-token-svqfn\") pod \"node-exporter-hcwdr\" (UID: \"726c0300-292b-4b66-81a2-c370e0360f34\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.987087     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:32.986648862 +0000 UTC m=+41.580512713 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"alertmanager-main-token-z4b7q\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:30 pi4red k3s[376]: E1209 09:02:30.987355     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:32.987167149 +0000 UTC m=+41.581030890 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:32 pi4red k3s[376]: I1209 09:02:32.869621     376 node.go:136] Successfully retrieved node IP: 192.168.1.143
Dec 09 09:02:32 pi4red k3s[376]: I1209 09:02:32.869776     376 server_others.go:112] kube-proxy node IP is an IPv4 address (192.168.1.143), assume IPv4 operation
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.088693     376 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.088849     376 flannel.go:82] Running backend.
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.088903     376 vxlan_network.go:60] watching for new subnet leases
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.125075     376 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.125264     376 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.134480     376 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.135502     376 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.145900     376 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.159476     376 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.170411     376 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.176309     376 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.2.0/24 -j RETURN
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.187627     376 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.190356     376 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.196808     376 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.215374     376 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.230798     376 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.2.0/24 -j RETURN
Dec 09 09:02:33 pi4red k3s[376]: I1209 09:02:33.246085     376 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Dec 09 09:02:33 pi4red k3s[376]: E1209 09:02:33.907626     376 secret.go:195] Couldn't get secret kube-system/default-token-v9pqp: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:33 pi4red k3s[376]: E1209 09:02:33.907700     376 secret.go:195] Couldn't get secret monitoring/arm-exporter-token-9txj8: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:33 pi4red k3s[376]: E1209 09:02:33.908118     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp podName:16010812-bd67-472e-a354-d88a3c531c70 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:37.907945779 +0000 UTC m=+46.501809630 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"default-token-v9pqp\" (UniqueName: \"kubernetes.io/secret/16010812-bd67-472e-a354-d88a3c531c70-default-token-v9pqp\") pod \"svclb-traefik-qql2l\" (UID: \"16010812-bd67-472e-a354-d88a3c531c70\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:33 pi4red k3s[376]: E1209 09:02:33.908428     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8 podName:bd1e51a9-a315-4d53-8a45-a86442bf0998 nodeName:}" failed. No retries permitted until 2020-12-09 09:02:37.908281903 +0000 UTC m=+46.502145606 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"arm-exporter-token-9txj8\" (UniqueName: \"kubernetes.io/secret/bd1e51a9-a315-4d53-8a45-a86442bf0998-arm-exporter-token-9txj8\") pod \"arm-exporter-k5hxx\" (UID: \"bd1e51a9-a315-4d53-8a45-a86442bf0998\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:34 pi4red k3s[376]: E1209 09:02:34.011749     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main-token-z4b7q: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:34 pi4red k3s[376]: E1209 09:02:34.012744     376 secret.go:195] Couldn't get secret monitoring/alertmanager-main: failed to sync secret cache: timed out waiting for the condition
Dec 09 09:02:34 pi4red k3s[376]: E1209 09:02:34.013491     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:38.013095377 +0000 UTC m=+46.606959414 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"alertmanager-main-token-z4b7q\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-alertmanager-main-token-z4b7q\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:34 pi4red k3s[376]: E1209 09:02:34.013992     376 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume podName:0313bc4a-38b9-42cd-9a95-95ce0698787e nodeName:}" failed. No retries permitted until 2020-12-09 09:02:38.01367096 +0000 UTC m=+46.607534645 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0313bc4a-38b9-42cd-9a95-95ce0698787e-config-volume\") pod \"alertmanager-main-0\" (UID: \"0313bc4a-38b9-42cd-9a95-95ce0698787e\") : failed to sync secret cache: timed out waiting for the condition"
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.114389     376 server_others.go:187] Using iptables Proxier.
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.115697     376 server.go:650] Version: v1.19.4+k3s1
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.121737     376 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.121871     376 conntrack.go:52] Setting nf_conntrack_max to 131072
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.173070     376 conntrack.go:83] Setting conntrack hashsize to 32768
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.219562     376 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.219728     376 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.220258     376 config.go:224] Starting endpoint slice config controller
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.220340     376 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.220388     376 config.go:315] Starting service config controller
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.220409     376 shared_informer.go:240] Waiting for caches to sync for service config
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.420491     376 shared_informer.go:247] Caches are synced for endpoint slice config
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.420502     376 shared_informer.go:247] Caches are synced for service config
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.460476     376 kubelet_node_status.go:108] Node pi4red was previously registered
Dec 09 09:02:34 pi4red k3s[376]: I1209 09:02:34.461196     376 kubelet_node_status.go:73] Successfully registered node pi4red
Dec 09 09:02:35 pi4red k3s[376]: W1209 09:02:35.561269     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/32ed650700a11b0337aca7dbfcaa1c09566238ee492981a199638c544e0b096b\""
Dec 09 09:02:37 pi4red k3s[376]: W1209 09:02:37.649744     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/6561d33b8d20b3a732f6afc74a3416ea3530c2465946851140a322f6e3b63547\""
Dec 09 09:02:39 pi4red k3s[376]: W1209 09:02:39.633457     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod726c0300-292b-4b66-81a2-c370e0360f34/f2865e1c3d625ce7715af85b8027afb173dc2a843fafd58d812d6bb5db681e58\""
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.734587     376 remote_runtime.go:140] StopPodSandbox "641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find sandbox "641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484": not found
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.734708     376 kuberuntime_manager.go:909] Failed to stop sandbox {"containerd" "641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484"}
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.734876     376 kuberuntime_manager.go:688] killPodWithSyncResult failed: failed to "KillPodSandbox" for "16010812-bd67-472e-a354-d88a3c531c70" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484\": not found"
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.734964     376 pod_workers.go:191] Error syncing pod 16010812-bd67-472e-a354-d88a3c531c70 ("svclb-traefik-qql2l_kube-system(16010812-bd67-472e-a354-d88a3c531c70)"), skipping: failed to "KillPodSandbox" for "16010812-bd67-472e-a354-d88a3c531c70" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"641c199f30e9c5079863281919a5efd37b6a657069b745edcd475d9cc0043484\": not found"
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.955944     376 remote_runtime.go:140] StopPodSandbox "fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find sandbox "fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad": not found
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.956044     376 kuberuntime_manager.go:909] Failed to stop sandbox {"containerd" "fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad"}
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.956158     376 kuberuntime_manager.go:688] killPodWithSyncResult failed: failed to "KillPodSandbox" for "bd1e51a9-a315-4d53-8a45-a86442bf0998" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad\": not found"
Dec 09 09:02:39 pi4red k3s[376]: E1209 09:02:39.956225     376 pod_workers.go:191] Error syncing pod bd1e51a9-a315-4d53-8a45-a86442bf0998 ("arm-exporter-k5hxx_monitoring(bd1e51a9-a315-4d53-8a45-a86442bf0998)"), skipping: failed to "KillPodSandbox" for "bd1e51a9-a315-4d53-8a45-a86442bf0998" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"fc781888ca905db13f0560a6071cc20a24f9ac64a3de9778c2b12d287eed71ad\": not found"
Dec 09 09:02:40 pi4red k3s[376]: E1209 09:02:40.028871     376 remote_runtime.go:140] StopPodSandbox "db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find sandbox "db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271": not found
Dec 09 09:02:40 pi4red k3s[376]: E1209 09:02:40.028972     376 kuberuntime_manager.go:909] Failed to stop sandbox {"containerd" "db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271"}
Dec 09 09:02:40 pi4red k3s[376]: E1209 09:02:40.029100     376 kuberuntime_manager.go:688] killPodWithSyncResult failed: failed to "KillPodSandbox" for "0313bc4a-38b9-42cd-9a95-95ce0698787e" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271\": not found"
Dec 09 09:02:40 pi4red k3s[376]: E1209 09:02:40.029193     376 pod_workers.go:191] Error syncing pod 0313bc4a-38b9-42cd-9a95-95ce0698787e ("alertmanager-main-0_monitoring(0313bc4a-38b9-42cd-9a95-95ce0698787e)"), skipping: failed to "KillPodSandbox" for "0313bc4a-38b9-42cd-9a95-95ce0698787e" with KillPodSandboxError: "rpc error: code = NotFound desc = an error occurred when try to find sandbox \"db25cf3726f3970dd61b50ff4f2d1777d74c0001c1fd61a4840be58c7dc4b271\": not found"
Dec 09 09:02:43 pi4red k3s[376]: W1209 09:02:43.258924     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e/fab67aca3284ee056c4d84efa24342f4d31992d79d7c3058ca17a66f7d218d93\""
Dec 09 09:02:44 pi4red k3s[376]: time="2020-12-09T09:02:44.923279039Z" level=error msg="Remotedialer proxy error" error="read tcp 192.168.1.143:59738->192.168.1.142:6443: i/o timeout"
Dec 09 09:02:44 pi4red k3s[376]: W1209 09:02:44.998470     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/podbd1e51a9-a315-4d53-8a45-a86442bf0998/25bf1a466cac604ab31c673ef082f3501bf58b99ef065deaa30d9c2428d24e5f\""
Dec 09 09:02:45 pi4red k3s[376]: W1209 09:02:45.006111     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/besteffort/pod16010812-bd67-472e-a354-d88a3c531c70/2660c852e21e3ef9039ca21a14735b106c8419d82787ea03346ae8d9a23130dd\""
Dec 09 09:02:45 pi4red k3s[376]: W1209 09:02:45.015230     376 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods/burstable/pod0313bc4a-38b9-42cd-9a95-95ce0698787e/fd7a4fd4e5c213aaa659cd19fb0e77d5ee124ba063660f83199c0860f688942d\""
Dec 09 09:02:47 pi4red k3s[376]: time="2020-12-09T09:02:47.339973209Z" level=fatal msg="Get \"https://127.0.0.1:40711/apis/networking.k8s.io/v1/networkpolicies\": net/http: TLS handshake timeout"
Dec 09 09:02:47 pi4red systemd[1]: k3s-agent.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 09:02:47 pi4red systemd[1]: k3s-agent.service: Failed with result 'exit-code'.
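
The fatal line above is the agent timing out on its TLS connection to the server, and the Remotedialer error a few seconds earlier points at 192.168.1.142:6443. A quick way to rule out plain network problems from the worker is a check like the following (a minimal sketch; the address 192.168.1.142 and port 6443 are taken from the logs above, and /ping is assumed here to be the k3s supervisor health endpoint — any HTTP response at all means the handshake itself completed):

# On the worker, check raw reachability of the server's supervisor port:
nc -zvw5 192.168.1.142 6443

# Check that the TLS handshake completes; even a 401 response means
# TLS is fine and the timeout lies elsewhere:
curl -vk --max-time 10 https://192.168.1.142:6443/ping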

Your server logs are all truncated...
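
If the pager is cutting lines at the terminal width, journalctl can write full lines straight to a file instead; a minimal sketch using standard journalctl flags:

# Capture full, untruncated unit logs to a file:
journalctl -u k3s --no-pager > k3s-server.log        # on the master
journalctl -u k3s-agent --no-pager > k3s-agent.log   # on each worker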

Sorry, I was going quickly and having issues connecting to the Pis. I have edited the logs in the comment.

So the issues today seem to have been configuration errors on my side. Everything seems to run normally now. I will close the issue, since I don't think there was anything related to k3s.
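
For completeness, a quick post-reboot sanity check (standard kubectl commands, run from the master):

kubectl get nodes -o wide      # all nodes should report Ready
kubectl get pods -A -o wide    # system pods should settle into Running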

Thanks a lot for the support!
