Fails with:
$ sudo k3s server
[sudo] password for stratos:
INFO[2019-02-27T09:33:24.808161017+01:00] Starting k3s v0.1.0 (91251aa)
INFO[2019-02-27T09:33:24.808756152+01:00] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
INFO[2019-02-27T09:33:24.871861833+01:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-02-27T09:33:24.872329109+01:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
INFO[2019-02-27T09:33:24.932063795+01:00] Listening on :6443
INFO[2019-02-27T09:33:25.035386408+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-02-27T09:33:25.035734075+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
INFO[2019-02-27T09:33:25.437915586+01:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-02-27T09:33:25.437980492+01:00] To join node to cluster: k3s agent -s https://10.66.180.31:6443 -t ${NODE_TOKEN}
INFO[2019-02-27T09:33:25.527701806+01:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-02-27T09:33:25.527719937+01:00] Run: k3s kubectl
INFO[2019-02-27T09:33:25.527726410+01:00] k3s is up and running
INFO[2019-02-27T09:33:25.560836269+01:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-02-27T09:33:25.560947659+01:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-02-27T09:33:25.561094622+01:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2019-02-27T09:33:26.568673623+01:00] Connecting to wss://localhost:6443/v1-k3s/connect
INFO[2019-02-27T09:33:26.568778009+01:00] Connecting to proxy url="wss://localhost:6443/v1-k3s/connect"
INFO[2019-02-27T09:33:26.582968162+01:00] Handling backend connection request [serenity]
INFO[2019-02-27T09:33:26.586584725+01:00] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override serenity
Flag --allow-privileged has been deprecated, will be removed in a future version
FATA[2019-02-27T09:33:27.619684908+01:00] fannel exited: operation not supported
Installation (with the curl script) seemed to work correctly. Running on:
Linux serenity 4.20.11-arch2-1-ARCH #1 SMP PREEMPT Fri Feb 22 13:09:33 UTC 2019 x86_64 GNU/Linux
What logs should I be looking at for any clues?
Thanks!
@stratosgear To get better logs, can you try running k3s --debug server? Is there an easy way to reproduce your setup?
I do get more log output with the --debug flag:
https://gist.github.com/stratosgear/880e032c2ee1459e86e9e380268050a7
but (at least to my eyes) nothing that explains why flannel fails. I'm not that experienced with flannel, but I think if there were anything useful there, I would have spotted it (I hope).
I don't have any particular setup. I use Docker and docker-compose a lot, but I don't think that's relevant here. I can provide any specific logs if you have some in mind.
Can you share the output of modinfo vxlan and lsmod | grep vxlan? I don't think this module is present in your kernel.
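For anyone landing here later, a quick sketch of how to check both things at once (assumes a Linux box with /proc/modules; modinfo comes from the kmod package and may be absent in minimal environments, hence the guard):

```shell
# Is the vxlan module currently loaded? (/proc/modules lists loaded modules)
if grep -q '^vxlan' /proc/modules; then
    vxlan_loaded=yes
else
    vxlan_loaded=no
fi
echo "vxlan loaded: $vxlan_loaded"

# Is the module available on disk at all? modinfo exits non-zero
# if it cannot find the module for the running kernel.
if command -v modinfo >/dev/null 2>&1 && modinfo vxlan >/dev/null 2>&1; then
    vxlan_available=yes
else
    vxlan_available=no
fi
echo "vxlan available: $vxlan_available"
```

If the module is available but not loaded, flannel's default vxlan backend will fail exactly like this until it is loaded.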
Yep, I can reproduce this issue on an Arch Linux box without the vxlan module loaded.
Hmm, vxlan was missing, indeed.
sudo modprobe vxlan fixed it.
Works now, I will continue playing around. Thanks!
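Note that a manual modprobe doesn't survive a reboot. On a systemd-based distro like Arch, you can have the module loaded at boot via the standard modules-load.d mechanism (the filename below is arbitrary; only the .conf suffix matters):

```
# /etc/modules-load.d/vxlan.conf
vxlan
```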