K3s: Job for k3s.service failed because the control process exited with error code.

Created on 4 Nov 2019  ·  16 comments  ·  Source: k3s-io/k3s

Hello World!

I'm trying to follow the Single Master Install guide, but I'm running into the following issue:

Version:

$ k3s -v
k3s version v0.10.2 (8833bfd9)
$ 
$ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.10.2 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  SELinux is enabled, setting permissions
which: no kubectl in (/home/toor/.local/bin:/home/toor/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/var/lib/snapd/snap/bin)
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
which: no crictl in (/home/toor/.local/bin:/home/toor/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/var/lib/snapd/snap/bin)
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
which: no ctr in (/home/toor/.local/bin:/home/toor/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/var/lib/snapd/snap/bin)
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
$ systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Mon 2019-11-04 15:03:53 EST; 1s ago
     Docs: https://k3s.io
  Process: 6988 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
  Process: 6986 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 6984 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 6988 (code=exited, status=1/FAILURE)
$ 

I tried the same as root:

$ sudo su -
# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.10.2 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, already exists
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  No change detected so skipping service start
# systemctl status k3s.service 
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2019-11-04 15:08:39 EST; 6s ago
     Docs: https://k3s.io
 Main PID: 14686 (code=exited, status=1/FAILURE)

Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Service RestartSec=100ms expired, scheduling restart.
Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Failed to schedule restart job: Unit k3s.service not found.
Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Failed with result 'exit-code'.
# 

My OS:

# cat /etc/redhat-release 
CentOS Linux release 8.0.1905 (Core) 
# uname -a
Linux X.X.X 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Tue Sep 24 11:32:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# 

Please advise.


All 16 comments

Me too!

This is a generic message that systemd prints whenever the k3s process exits with an error; it doesn't tell you the underlying cause.

Please check /var/log/syslog or journalctl -xe for the actual error.
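To see the real failure instead of the generic systemd wrapper, it helps to read only the k3s unit's own journal. A couple of standard journalctl invocations (sketch; requires a systemd host, so run these on the affected node):

```shell
# Show only the k3s unit's log, jumping to the end where the failure appears:
sudo journalctl -u k3s.service -e

# Or follow the log live while restarting the unit from another terminal:
sudo journalctl -u k3s.service -f
```

The interesting line is usually the last k3s message before "Main process exited, code=exited, status=1/FAILURE".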

# systemctl status k3s.service 
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2019-11-04 15:08:39 EST; 21h ago
     Docs: https://k3s.io
 Main PID: 14686 (code=exited, status=1/FAILURE)

Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Service RestartSec=100ms expired, scheduling restart.
Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Failed to schedule restart job: Unit k3s.service not found.
Nov 04 15:08:39 noc.uftwf.local systemd[1]: k3s.service: Failed with result 'exit-code'.
# systemctl start k3s.service 
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
# systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Tue 2019-11-05 12:10:29 EST; 99ms ago
     Docs: https://k3s.io
  Process: 700 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
  Process: 698 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 695 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 700 (code=exited, status=1/FAILURE)
# journalctl -xe
-- 
-- Automatic restarting of the unit k3s.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 05 12:10:28 noc.uftwf.local systemd[1]: Stopped Lightweight Kubernetes.
-- Subject: Unit k3s.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit k3s.service has finished shutting down.
Nov 05 12:10:28 noc.uftwf.local systemd[1]: Starting Lightweight Kubernetes...
-- Subject: Unit k3s.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit k3s.service has begun starting up.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: time="2019-11-05T12:10:29.055600220-05:00" level=info msg="Starting k3s v0.10.2 (8833bfd9)"
Nov 05 12:10:29 noc.uftwf.local k3s[700]: time="2019-11-05T12:10:29.057117884-05:00" level=info msg="Kine listening on unix://kine.sock"
Nov 05 12:10:29 noc.uftwf.local k3s[700]: time="2019-11-05T12:10:29.057893632-05:00" level=info msg="Fetching bootstrap data from etcd"
Nov 05 12:10:29 noc.uftwf.local k3s[700]: time="2019-11-05T12:10:29.068466632-05:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=u>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production enviro>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.068925     700 server.go:650] external host was not specified, using 216.236.150.116
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.069061     700 server.go:162] Version: v1.16.2-k3s.1
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.071810     700 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAcco>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.071820     700 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,Persi>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.072298     700 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAcco>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.072306     700 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,Persi>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.086808     700 master.go:259] Using reconciler: lease
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.101473     700 rest.go:115] the default service ipfamily for this cluster is: IPv4
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.269844     700 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.281067     700 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.291774     700 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.293758     700 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.300245     700 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.312220     700 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: W1105 12:10:29.312240     700 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.317708     700 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAcco>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: I1105 12:10:29.317725     700 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,Persi>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: time="2019-11-05T12:10:29.321842133-05:00" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubec>
Nov 05 12:10:29 noc.uftwf.local k3s[700]: failed to create listener: failed to listen on 0.0.0.0:10251: listen tcp 0.0.0.0:10251: bind: address already in use
Nov 05 12:10:29 noc.uftwf.local systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 05 12:10:29 noc.uftwf.local systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 05 12:10:29 noc.uftwf.local systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: Unit k3s.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit k3s.service has failed.
-- 
-- The result is RESULT.
# 
# microk8s.stop
Stopped.
# netstat -pvnat | grep 10251
tcp6       0      0 :::10251                :::*                    LISTEN      2774/k3s            
# systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-11-05 12:12:37 EST; 12s ago
     Docs: https://k3s.io
  Process: 2772 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 2770 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 2774 (k3s-server)
    Tasks: 14
   Memory: 370.4M
   CGroup: /system.slice/k3s.service
           └─2808 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
           ‣ 2774 /usr/local/bin/k3s server

Nov 05 12:12:39 noc.uftwf.local k3s[2774]: I1105 12:12:39.958502    2774 node_ipam_controller.go:94] Sending events to api server.
Nov 05 12:12:39 noc.uftwf.local k3s[2774]: time="2019-11-05T12:12:39.968252222-05:00" level=info msg="master role label has been set succesfully on node: noc.uftwf.local"
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.177803    2774 node.go:135] Successfully retrieved node IP: 216.236.150.116
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.177825    2774 server_others.go:150] Using iptables Proxier.
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.179652    2774 server.go:529] Version: v1.16.2-k3s.1
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.179990    2774 conntrack.go:52] Setting nf_conntrack_max to 262144
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.180155    2774 config.go:131] Starting endpoints config controller
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.180180    2774 shared_informer.go:197] Waiting for caches to sync for endpoints config
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.180187    2774 config.go:313] Starting service config controller
Nov 05 12:12:40 noc.uftwf.local k3s[2774]: I1105 12:12:40.180206    2774 shared_informer.go:197] Waiting for caches to sync for service config
# 

What is running on port 10251?
sudo netstat -tlnp | grep 10251
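On newer distros netstat is often not installed; the same check works with ss from iproute2. A small sketch (the helper name is mine, not a k3s tool):

```shell
# port_listener PORT — print the socket line for whatever is listening on PORT.
# Requires iproute2's ss; run as root to see the owning PID/process name.
port_listener() {
  ss -ltnp 2>/dev/null | awk -v p=":$1$" '$4 ~ p'
}

# Example: see which process holds the kube-scheduler port
# sudo bash -c "$(declare -f port_listener); port_listener 10251"
```

An empty result means the port is free; otherwise the last column names the conflicting process (here it turned out to be microk8s).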

It looks like there is already another k3s process running...
did you start it manually from another session?

What is running on port 10251?
sudo netstat -tlnp | grep 10251

microk8s

It looks like there is already another k3s process running...
did you start it manually from another session?

Actually, it was microk8s, as I mentioned in https://github.com/rancher/k3s/issues/1011#issuecomment-549918957 — after running microk8s.stop, k3s.service started without any further action on my part.

Looks like this is resolved :)
If it isn't, or if you have more information or anything else related to the issue, let us know and we'll reopen it.

I got this issue today, after installing via curl -sfL https://get.k3s.io | sh - on a Raspberry Pi 3B+ running Buster.
k3s kubectl get node gives The connection to the server localhost:8080 was refused - did you specify the right host or port?

Some findings:

  • There is no rancher dir in /etc (cd: /etc/rancher: No such file or directory)
  • It looks like cluster is up
pi@raspberrypi:~ $ sudo k3s server
INFO[2019-11-12T20:07:04.709328050-07:00] Starting k3s v0.10.2 (8833bfd9)              
INFO[2019-11-12T20:07:04.726141653-07:00] Kine listening on unix://kine.sock           
INFO[2019-11-12T20:07:04.734110152-07:00] Fetching bootstrap data from etcd            
INFO[2019-11-12T20:07:04.816882346-07:00] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key 
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I1112 20:07:04.822777   17595 server.go:650] external host was not specified, using 10.0.0.244
I1112 20:07:04.826443   17595 server.go:162] Version: v1.16.2-k3s.1
Error: failed to create listener: failed to listen on 127.0.0.1:6444: listen tcp 127.0.0.1:6444: bind: address already in use
pi@raspberrypi:~ $ sudo netstat -tulpn | grep :6444
tcp       14      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      17810/k3s
pi@raspberrypi:~ $ journalctl -xe
Nov 12 20:08:06 raspberrypi k3s[17942]: I1112 20:08:06.678593   17942 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: 
Nov 12 20:08:06 raspberrypi k3s[17942]: I1112 20:08:06.688638   17942 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: N
Nov 12 20:08:06 raspberrypi k3s[17942]: I1112 20:08:06.688803   17942 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: 
Nov 12 20:08:06 raspberrypi k3s[17942]: I1112 20:08:06.962927   17942 master.go:259] Using reconciler: lease
Nov 12 20:08:07 raspberrypi k3s[17942]: I1112 20:08:07.183948   17942 rest.go:115] the default service ipfamily for this cluster is: IPv4
Nov 12 20:08:10 raspberrypi k3s[17942]: W1112 20:08:10.273911   17942 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
Nov 12 20:08:10 raspberrypi k3s[17942]: W1112 20:08:10.503994   17942 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 12 20:08:10 raspberrypi k3s[17942]: W1112 20:08:10.737114   17942 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resour
Nov 12 20:08:10 raspberrypi k3s[17942]: W1112 20:08:10.779807   17942 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 12 20:08:10 raspberrypi k3s[17942]: W1112 20:08:10.923861   17942 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 12 20:08:11 raspberrypi k3s[17942]: W1112 20:08:11.166184   17942 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
Nov 12 20:08:11 raspberrypi k3s[17942]: W1112 20:08:11.166328   17942 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.287294   17942 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: N
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.287464   17942 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: 
Nov 12 20:08:11 raspberrypi k3s[17942]: time="2019-11-12T20:08:11.349847618-07:00" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/li
Nov 12 20:08:11 raspberrypi k3s[17942]: time="2019-11-12T20:08:11.356231083-07:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-ad
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.460046   17942 controllermanager.go:161] Version: v1.16.2-k3s.1
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.483559   17942 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.496306   17942 server.go:143] Version: v1.16.2-k3s.1
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.498680   17942 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Nov 12 20:08:11 raspberrypi k3s[17942]: W1112 20:08:11.518337   17942 authorization.go:47] Authorization is disabled
Nov 12 20:08:11 raspberrypi k3s[17942]: W1112 20:08:11.519776   17942 authentication.go:79] Authentication is disabled
Nov 12 20:08:11 raspberrypi k3s[17942]: I1112 20:08:11.521073   17942 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Nov 12 20:08:21 raspberrypi k3s[17942]: time="2019-11-12T20:08:21.398208897-07:00" level=fatal msg="starting tls server: Get https://127.0.0.1:6444/apis/apiextensions.
Nov 12 20:08:21 raspberrypi systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- An ExecStart= process belonging to unit k3s.service has exited.
-- 
-- The process' exit code is 'exited' and its exit status is 1.
Nov 12 20:08:21 raspberrypi systemd[1]: k3s.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The unit k3s.service has entered the 'failed' state with result 'exit-code'.
Nov 12 20:08:21 raspberrypi systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: A start job for unit k3s.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit k3s.service has finished with a failure.
-- 
-- The job identifier is 8093 and the job result is failed
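Both symptoms above point at a half-started k3s: a stale server process is still holding ports 6444/10251, and because the server never finished starting, the admin kubeconfig was never written, so kubectl falls back to localhost:8080. One way to recover, using the scripts the installer created earlier in this thread (a sketch, assuming a default install):

```shell
# Stop any stale k3s processes and free their ports
# (k3s-killall.sh is created by the install script):
sudo /usr/local/bin/k3s-killall.sh

# Restart the unit and confirm it stays up:
sudo systemctl restart k3s
sudo systemctl status k3s

# Once the server is healthy it writes the admin kubeconfig here;
# if this file is missing, the server never came up:
sudo ls -l /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get node
```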

going to reopen based on @sonttran's latest comment so that we can investigate a little more

I am having the same issue as @sonttran, installing on a Raspberry Pi 3 B+ with Raspbian Buster Lite. I get the same messages from journalctl.

I got this issue today. I was trying an HA K3s install; I had set up the infra (VMs, LB, MySQL DB, etc.).

Installed via -

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 \
--datastore-endpoint=mysql://xxxx:xxxx@tcp(mysql.k3s_db \
INSTALL_K3s_VERSION='v1.17.0+k3s.1'" sh -s - server

Machine Details

cat /etc/os-release

NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

uname -a

Linux ip-172-31-18-165 4.15.0-1065-aws #69-Ubuntu SMP Thu Mar 26 02:17:29 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

K3s version

k3s version v1.18.3+k3s1 (96653e8d)

ERROR

[INFO]  Finding release for channel stable
[INFO]  Using v1.18.3+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Job for k3s.service failed because the control process exited with error code.

journalctl -xe

ubuntu@ip-172-31-29-48:/$ journalctl -xe
Jun 02 11:11:55 ip-172-31-29-48 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Jun 02 11:11:55 ip-172-31-29-48 systemd[1]: k3s.service: Failed with result 'exit-code'.
Jun 02 11:11:55 ip-172-31-29-48 systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: Unit k3s.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit k3s.service has failed.
--
-- The result is RESULT.
Jun 02 11:11:57 ip-172-31-29-48 sudo[20135]: pam_unix(sudo:session): session closed for user root
Jun 02 11:12:00 ip-172-31-29-48 systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Jun 02 11:12:00 ip-172-31-29-48 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 48.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit k3s.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Jun 02 11:12:00 ip-172-31-29-48 systemd[1]: Stopped Lightweight Kubernetes.
-- Subject: Unit k3s.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit k3s.service has finished shutting down.
Jun 02 11:12:00 ip-172-31-29-48 systemd[1]: Starting Lightweight Kubernetes...
-- Subject: Unit k3s.service has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit k3s.service has begun starting up.
Jun 02 11:12:00 ip-172-31-29-48 k3s[23810]: time="2020-06-02T11:12:00.487224424Z" level=info msg="Starting k3s v1.18.3+k3s1 (96653e8d)"
Jun 02 11:12:00 ip-172-31-29-48 k3s[23810]: time="2020-06-02T11:12:00.490052713Z" level=info msg="Cluster bootstrap already complete"

systemctl status k3s

 k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: activating (start) since Tue 2020-06-02 11:14:16 UTC; 1min 45s ago
     Docs: https://k3s.io
  Process: 23863 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 23858 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 23864 (k3s-server)
    Tasks: 10
   CGroup: /system.slice/k3s.service
           โ””โ”€23864 /usr/local/bin/k3s server --write-kubeconfig-mode 644 --datastore-endpoint=mysql://admin:Unisys*1234@tcp(falcon-mysql.c

Jun 02 11:14:16 ip-172-31-29-48 systemd[1]: Starting Lightweight Kubernetes...
Jun 02 11:14:16 ip-172-31-29-48 k3s[23864]: time="2020-06-02T11:14:16.489584613Z" level=info msg="Starting k3s v1.18.3+k3s1 (96653e8d)"
Jun 02 11:14:16 ip-172-31-29-48 k3s[23864]: time="2020-06-02T11:14:16.493185496Z" level=info msg="Cluster bootstrap already complete"

@shivdeepnv it appears to be hanging connecting to your datastore endpoint. Are you sure that the mysql server is accessible from your k3s node?

Thanks @brandond, it was an issue because it was not able to reach the MySQL DB. I made changes to the security groups, SSHed to ensure that the connection works and then ran the install again. It worked.
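Verifying that the datastore endpoint is reachable before (re)installing avoids this hang. A minimal TCP probe using bash's /dev/tcp, so no MySQL client is needed; the hostname below is a placeholder, not a value from this issue:

```shell
# check_tcp HOST PORT — exit 0 if a TCP connection can be opened within 3 seconds.
check_tcp() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Hypothetical endpoint; substitute your real datastore host:
# check_tcp mysql.example.com 3306 && echo reachable || echo unreachable
```

If this fails from the k3s node, fix security groups / firewall rules first, as @shivdeepnv did.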

Did you install microk8s? In my situation, I had installed microk8s and it clashed with k3s.
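As several commenters found, a pre-existing microk8s binds the same ports k3s needs. A quick way to check for it and stop it before installing k3s (sketch, assuming the usual snap-based microk8s install):

```shell
# Is microk8s installed as a snap?
snap list 2>/dev/null | grep microk8s

# If so, stop it (the command used earlier in this thread) or remove it:
sudo microk8s.stop
# sudo snap remove microk8s   # permanent removal
```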
