Describe the bug
I downloaded k3s on Red Hat 7 and waited for the cluster to come up, but it never does.
I adjusted firewalld to accept 6443/tcp, but that didn't help.
To Reproduce
Steps to reproduce the behavior:
curl -sfL https://get.k3s.io | sh -
watch -n 3 k3s kubectl get node
We checked the service status, but we got:
systemctl status k3s -l
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2019-04-26 20:45:29 UTC; 10min ago
Docs: https://k3s.io
Process: 4157 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
Process: 4155 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 4154 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Main PID: 4157 (code=exited, status=1/FAILURE)
Apr 26 20:45:28 control1 k3s[4157]: time="2019-04-26T20:45:28.832482434Z" level=info msg="Run: k3s kubectl"
Apr 26 20:45:28 control1 k3s[4157]: time="2019-04-26T20:45:28.832497807Z" level=info msg="k3s is up and running"
Apr 26 20:45:28 control1 systemd[1]: Started Lightweight Kubernetes.
Apr 26 20:45:28 control1 k3s[4157]: time="2019-04-26T20:45:28.936343517Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Apr 26 20:45:28 control1 k3s[4157]: time="2019-04-26T20:45:28.937828597Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Apr 26 20:45:28 control1 k3s[4157]: time="2019-04-26T20:45:28.942713193Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Apr 26 20:45:29 control1 k3s[4157]: containerd: exit status 1
Apr 26 20:45:29 control1 systemd[1]: k3s.service: main process exited, code=exited, status=1/FAILURE
Apr 26 20:45:29 control1 systemd[1]: Unit k3s.service entered failed state.
Apr 26 20:45:29 control1 systemd[1]: k3s.service failed.
Expected behavior
I expect the cluster to come up and the node to become Ready.

Additional context
uname -a: Linux control1 3.10.0-862.3.2.el7.x86_64 #1 SMP Tue May 15 18:22:15 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
There is more that needs to be done with the firewall setup; here are some other ports that may need to be open:
Kubernetes needs:
Master node(s):
TCP 6443* Kubernetes API Server
TCP 10250 Kubelet API
TCP 10251 kube-scheduler
TCP 10252 kube-controller-manager
UDP 8285 flannel overlay network - udp backend
Worker nodes (minions):
TCP 10250 Kubelet API
TCP 30000-32767 NodePort Services
UDP 8285 flannel overlay network - udp backend
Also see https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubernetes-networking.md
The firewall will also probably need to be setup to allow traffic between various interfaces.
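As a rough sketch, the master-node ports listed above could be opened in firewalld before deciding whether to switch away from it. The loop below is a hypothetical helper that only prints the firewall-cmd invocations, so you can review them before running them as root:

```shell
# Sketch: generate firewall-cmd invocations for the master-node ports
# listed above (6443/tcp, 10250-10252/tcp, 8285/udp). This only echoes
# the commands; run them yourself as root, then reload firewalld.
for port in 6443/tcp 10250/tcp 10251/tcp 10252/tcp 8285/udp; do
  echo "firewall-cmd --permanent --add-port=${port}"
done
echo "firewall-cmd --reload"
```

Worker nodes would need the same treatment for 10250/tcp, 30000-32767/tcp, and 8285/udp.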
I am marking this issue as kind/documentation because we should document the firewall requirements better in https://github.com/rancher/k3s#open-ports--network-security.
cat /var/lib/rancher/k3s/agent/containerd/containerd.log
I bet you'll see something like I was seeing:
time="2019-04-29T16:15:41.094592934-04:00" level=info msg="containerd successfully booted in 0.002523s"
time="2019-04-29T16:15:41.097621565-04:00" level=info msg="Start subscribing containerd event"
time="2019-04-29T16:15:41.097658651-04:00" level=info msg="Start recovering state"
time="2019-04-29T16:15:41.097764992-04:00" level=info msg="Start event monitor"
time="2019-04-29T16:15:41.097784041-04:00" level=info msg="Start snapshots syncer"
time="2019-04-29T16:15:41.097793303-04:00" level=info msg="Start streaming server"
time="2019-04-29T16:15:41.098765431-04:00" level=error msg="Failed to start streaming server" error="listen tcp: lookup myhostname on 192.168.1.2:53: no such host"
I was seeing the same exact error as you on Arch linux. My hostname was not resolvable, so containerd was not starting.
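A quick way to see whether you're hitting the same problem is to check whether the hostname resolves at all. This is a minimal sketch; "myhostname" mirrors the name from the log above, and on a real node you'd use "$(hostname)" instead:

```shell
# Sketch: check whether a hostname resolves the way containerd's
# streaming server needs it to. "myhostname" mirrors the log above;
# on a real node, substitute "$(hostname)".
NAME="myhostname"
if getent hosts "$NAME" >/dev/null 2>&1; then
  echo "resolvable: $NAME"
else
  echo "not resolvable: $NAME -- add a line like '127.0.0.1 $NAME' to /etc/hosts"
fi
```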
I installed this on CentOS 7.6 and added cni0 to the firewall's internal zone.
@thatarchguy check that the hostname has a record in the hosts file:
127.0.0.1 xxx.node.local
I also encountered this on CentOS 7.7 AArch64.
For my part I had no intention of using firewalld on these systems (instead opting to use traditional iptables), and it's easy to work around this that way.
I did notice that the k3s installation "succeeds" despite the fact that services like CoreDNS cannot reach the Kubernetes API. I would advocate that, as part of considering this issue resolved, the installer fail and notify the user, so that it's clear sooner rather than later that networking is not functional on the system.
I have just been through this issue, it'd be great if there was a check in k3s check-config for CentOS + firewalld (or something). Here's how I fixed it (from https://www.thegeekdiary.com/how-to-disable-firewalld-and-and-switch-to-iptables-in-centos-rhel-7/):
If k3s is running (but kube-system pods are failing to reach svc/kubernetes):
k3s-killall.sh
k3s-uninstall.sh
Then disable firewalld and replace it with a clean iptables setup:
systemctl stop firewalld
systemctl disable firewalld
yum install iptables-services
systemctl start iptables
systemctl enable iptables
Now reinstall + start K3s