K3s: Windows Subsystem for Linux failed to find memory cgroup

Created on 14 May 2019 · 4 comments · Source: k3s-io/k3s

Describe the bug

k3s version: v0.5.0

Starting the k3s server per the docs, the console output shows k3s come up and then immediately exit with a FATAL error.

I am NOT running a Raspberry Pi.

I AM running on Windows Subsystem for Linux (WSL) with Ubuntu 18.04.

sudo k3s server
...
INFO[2019-05-14T13:46:48.910677100+01:00] k3s is up and running
WARN[2019-05-14T13:46:48.911812900+01:00] Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
ERRO[2019-05-14T13:46:48.913161300+01:00] Failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
FATA[2019-05-14T13:46:48.914658500+01:00] failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
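The FATAL message means k3s cannot find the memory cgroup controller in the kernel. A quick way to confirm what the kernel actually exposes (a hypothetical diagnostic, not from the k3s docs) is to inspect /proc/cgroups:

```shell
# Print the /proc/cgroups header plus the memory and cpuset rows.
# The fourth column ("enabled") shows whether the controller is usable;
# on WSL1 kernels the memory controller is typically missing or disabled,
# which is what triggers the k3s fatal error above.
awk 'NR == 1 || $1 == "memory" || $1 == "cpuset"' /proc/cgroups
```

If the memory row is absent or its enabled column is 0, k3s will refuse to start regardless of command-line flags.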

To Reproduce
Run sudo k3s server on WSL

Expected behavior
k3s starts successfully and continues to run

Screenshots
Console output:

sudo k3s server
INFO[2019-05-14T13:46:48.086597700+01:00] Starting k3s v0.5.0 (8c0116dd)
INFO[2019-05-14T13:46:48.089043300+01:00] Running kube-apiserver --authorization-mode=Node,RBAC --advertise-address=127.0.0.1 --service-account-issuer=k3s --api-audiences=unknown --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-allowed-names=kubernetes-proxy --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --insecure-port=0 --bind-address=127.0.0.1 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --advertise-port=6445 --secure-port=6444 --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --watch-cache=false --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node-1.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
E0514 13:46:48.115223     209 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.135788     209 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.136977     209 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.138052     209 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.138934     209 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.139967     209 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 13:46:48.256450     209 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0514 13:46:48.292561     209 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0514 13:46:48.333835     209 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.334476     209 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.335371     209 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.336190     209 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.336944     209 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0514 13:46:48.337771     209 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
INFO[2019-05-14T13:46:48.348955300+01:00] Running kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --leader-elect=false --port=10251 --bind-address=127.0.0.1 --secure-port=0
E0514 13:46:48.355677     209 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg:
INFO[2019-05-14T13:46:48.350336300+01:00] Running kube-controller-manager --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --root-ca-file=/var/lib/rancher/k3s/server/tls/token-ca.crt --allocate-node-cidrs=true --port=10252 --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.42.0.0/16 --leader-elect=false
W0514 13:46:48.406005     209 authorization.go:47] Authorization is disabled
W0514 13:46:48.410530     209 authentication.go:55] Authentication is disabled
INFO[2019-05-14T13:46:48.537892400+01:00] Listening on :6443
INFO[2019-05-14T13:46:48.644296400+01:00] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz
INFO[2019-05-14T13:46:48.649606600+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-05-14T13:46:48.651475700+01:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-05-14T13:46:48.652992200+01:00] To join node to cluster: k3s agent -s https://192.168.0.29:6443 -t ${NODE_TOKEN}
INFO[2019-05-14T13:46:48.652585000+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
INFO[2019-05-14T13:46:48.908769300+01:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-05-14T13:46:48.909542800+01:00] Run: k3s kubectl
INFO[2019-05-14T13:46:48.910677100+01:00] k3s is up and running
WARN[2019-05-14T13:46:48.911812900+01:00] Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
ERRO[2019-05-14T13:46:48.913161300+01:00] Failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
FATA[2019-05-14T13:46:48.914658500+01:00] failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)

Additional context
Running Windows Subsystem for Linux (WSL) with Ubuntu 18.04


All 4 comments

Unfortunately, since WSL does not support kernel modules, I am not sure there is much we can do here. It might be worth trying Docker with the --docker flag instead of the containerd runtime.
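A minimal sketch of this suggestion, assuming a Docker Desktop daemon on the Windows host with its TCP endpoint enabled (the DOCKER_HOST value is an assumption for that setup, not something stated in this thread):

```shell
# Point the WSL shell at the host's Docker daemon (Docker Desktop's
# "expose daemon on tcp://localhost:2375 without TLS" option).
export DOCKER_HOST=tcp://localhost:2375

# Ask k3s to schedule containers through that dockerd instead of its
# embedded containerd, sidestepping the missing in-distro runtime.
sudo k3s server --docker
```

Note this only changes the container runtime; if the WSL kernel still lacks the memory cgroup controller, the same fatal error can occur.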

Thanks @erikwilson I feared as much.

I would say, though, that it's pretty common for large corporates to use WSL as a convenient way of running local K8s as well as connecting into public and private cloud; I know we certainly do. As a corporate customer of Rancher, one of the things I am tasked with is providing a relatively seamless desktop experience for local dev that closely mirrors what we deploy in AWS. I had hoped k3s would provide that, although as you have suggested we are looking closely at k3d (as well as K3OS).

Certainly, listening to recent Rancher webinars on k3s and k3os, Darren and Shannon both recognised that they have seen tremendous uptake from people wanting to use k3s/k3d for local dev, alongside the original intent to support edge devices. I think it would be unfortunate not to support WSL as a first-class environment, although I appreciate that is not entirely within your gift. I'll raise this through Rancher support to give it some visibility (I understand it's not an 'official' Rancher product yet, so I'll set my expectations appropriately).

One reason I was testing the k3s binary rather than docker was to set the bind-address. As you will know, this defaults to localhost:6443, but talking with the k3d devs, it needs to be the IP of the VM; otherwise our efforts to use products such as kubefwd, and other work in progress to provide ports for ingress, just won't work. Can you tell me whether I can use bind-address with k3s (docker), and whether that is exposed in k3d?
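For reference, the flag in question does exist on the server subcommand; a sketch of overriding the default (the 0.0.0.0 address is an assumption for making the API reachable from outside the VM, not a value given in this thread):

```shell
# Bind the k3s API server to all interfaces rather than localhost,
# so tools running outside the WSL/VM boundary (e.g. kubefwd) can
# reach port 6443.
sudo k3s server --bind-address 0.0.0.0
```

Whether k3d passes this through to its dockerised k3s nodes would need to be checked against the k3d docs for the version in use.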

Regards

Fraser.

@goffinf WSL2 should support k3s. We intend to support WSL2.

Yeah I agree, my experiments with WSL and k3s and k3d just fall short right now. Looks like June.
