K3s: consider balena-engine support

Created on 27 Feb 2019 · 17 comments · Source: k3s-io/k3s

Hey - fair play to you, this is a really great project that could change a lot in the IoT space.

Have you considered adding support for balena-engine? https://github.com/balena-os/balena-engine

I've been playing with it for a while on arm64 and got really good results: it's still compatible with Docker registries and images, while the footprint is reduced.

kind/question

Most helpful comment

oh my God I've never been so close to running kube on my pi zero :D

All 17 comments

balena-engine should in theory work already; we just don't include it by default. You would just need to install balena-engine and then run the k3s agent with the --docker flag so that Docker, not containerd, is used. You might need to symlink /var/run/docker.sock to the balena socket file.
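A minimal sketch of that setup, assuming balena-engine is already installed and exposes its socket at /var/run/balena-engine.sock (the daemon invocation and socket path here are assumptions, not something verified in this thread):

```
# Start the balena-engine daemon (binary name may vary by install method).
sudo balena-engine-daemon &

# Symlink the Docker socket to balena's socket so k3s --docker mode finds it.
sudo ln -s /var/run/balena-engine.sock /var/run/docker.sock

# Run the agent against an existing server, using Docker instead of containerd.
sudo k3s agent --docker --server https://<server-ip>:6443 --token <node-token>
```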

oh my God I've never been so close to running kube on my pi zero :D

@sokoow pi zero won't work. I'm sorry. k3s will run but there is pretty much no support for armv6 in the community. What you will hit is that the google pause container will just segfault. I'd love to get this working, but it was too much to bite off. armv7 was a struggle, armv6 seems insurmountable.

Did you recompile everything on armv6l, and was it still segfaulting?

Okay, I gave recompiling the pause container on armv6l a try - that part looks good. I also tried replacing the containerd socket with the balena-engine one, and I keep getting this:

```
# ./k3s-armhf server
INFO[2019-03-03T09:25:05.482508607Z] Starting k3s a3599a16b5-dirty (a3599a16)
INFO[2019-03-03T09:25:05.536685503Z] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
INFO[2019-03-03T09:25:09.608253570Z] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
INFO[2019-03-03T09:25:09.624037088Z] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
2019/03/03 09:25:21 http: TLS handshake error from 127.0.0.1:50084: EOF
F0303 09:25:21.637584 10412 controller.go:142] Unable to perform initial IP allocation check: unable to refresh the service IP block: Get https://127.0.0.1:6444/api/v1/services: net/http: TLS handshake timeout
goroutine 318 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog.stacks(0x4b36600, 0x0, 0xd4, 0x143)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:828 +0x94
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).output(0x4b242f8, 0x3, 0x77efa40, 0x48cb57f, 0xd, 0x8e, 0x0)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:779 +0x2c0
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).printf(0x4b242f8, 0x3, 0x2665f9a, 0x31, 0x75b5f74, 0x1, 0x1)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:678 +0x110
github.com/rancher/k3s/vendor/k8s.io/klog.Fatalf(...)
/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:1207
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/master.(*Controller).Start(0x68fe600)
/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/master/controller.go:142 +0x14c
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/master.(*Controller).PostStartHook(...)
/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/master/controller.go:120
github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server.runPostStartHook.func1(0x6ad2090, 0x6ee05c0, 0x6b7a000, 0x6d727c0, 0x75b5fbc)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/hooks.go:184 +0x5c
github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server.runPostStartHook(0x25f0d42, 0x14, 0x6ad2090, 0x6ee05c0, 0x6b7a000, 0x6d727c0)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/hooks.go:185 +0x4c
created by github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/hooks.go:151 +0xd4
2019/03/03 09:25:21 http: TLS handshake error from 127.0.0.1:50098: EOF
2019/03/03 09:25:22 http: TLS handshake error from 127.0.0.1:50060: EOF
panic: creating CRD store Get https://localhost:6444/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions: net/http: TLS handshake timeout

goroutine 543 [running]:
github.com/rancher/k3s/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0x7f33680, 0x720d2e0, 0x3, 0x3, 0x6ccd950, 0x492c268, 0x2d13dc0, 0x720d500, 0x0, 0x0)
/go/src/github.com/rancher/k3s/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x210
created by github.com/rancher/k3s/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
/go/src/github.com/rancher/k3s/vendor/github.com/rancher/norman/store/crd/init.go:50 +0x8c
```

You can have a look at my changes in this branch: https://github.com/sokoow/k3s/tree/remove-containerd-fix-pause . I'll probably need some more guidance to make it work, as I literally just jumped on it.
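A side note on the TLS handshake timeouts in that log: Go's HTTP client gives up on a TLS handshake after 10 seconds by default, and on very slow hardware the handshake alone can blow through that. A generic probe (not from this thread) to see how long the local apiserver listener actually takes:

```
# Time a TLS handshake against the local apiserver listener; if this takes
# anywhere near 10s, clients will fail with "net/http: TLS handshake timeout"
# exactly as in the log above.
time curl -vk --max-time 30 https://127.0.0.1:6444/healthz
```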

Looking at k3s, it seems quite entangled with containerd. After rebinding it to balena-engine, would it be possible to drop the containerd part, or at least stop it to save resources? The reason I want to explore balena-engine is that I've been working with it on IoT for quite a while and the footprint is really promising. Saying that k3s requires 512MB of memory probably won't change much in IoT, but saying that it requires 120MB or less will - that's why I want to give it a try. Let me know what you think please, and any help is appreciated.

https://github.com/kubernetes/kubeadm/issues/253#issuecomment-390467222

Maybe this will make k3s support our lovely Pi Zero?

I got it working - the pause container works when recompiled directly on the Pi Zero; I'll try it on qemu later too. It's launching a pod with bash currently, so that's a bit of progress. I'm a bit unhappy about the overhead though, because it takes 240MB of memory just to start, and /var/lib/rancher quickly grows to over 1.5GB - just wondering why?
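For anyone wanting to reproduce the native pause rebuild, a rough sketch; the release tag, file paths, and image tag are assumptions (in Kubernetes releases of this era the pause source lives at build/pause/pause.c):

```
# Build the pause binary natively on the armv6 device and wrap it in a
# from-scratch image, so no foreign-architecture binary is left to segfault.
git clone --depth 1 --branch v1.13.4 https://github.com/kubernetes/kubernetes
cd kubernetes/build/pause
gcc -Os -static -o pause pause.c
cat > Dockerfile.armv6 <<'EOF'
FROM scratch
ADD pause /pause
ENTRYPOINT ["/pause"]
EOF
docker build -f Dockerfile.armv6 -t pause-armv6:3.1 .
```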

@sokoow 1.5GB for /var/lib/rancher is quite high; that doesn't seem right unless it's using the VFS driver for containerd. Do you know where all the space is going? In general k3s will take ~250MB if both the server and node are on the same device. If you are just running the agent it should be more like 70MB.
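A quick generic way to answer the "where is the space going" question:

```
# Summarize disk usage two levels deep under /var/lib/rancher,
# biggest directories last.
sudo du -h -d 2 /var/lib/rancher | sort -h | tail -n 15
```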

containerd should be fully optional. Just use the --docker or --container-runtime-endpoint flags. Let me know if balena has a lower memory footprint than containerd. I'd really be surprised if it does.
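For reference, the two ways to bypass the bundled containerd look roughly like this (the CRI socket path in the second command is a placeholder, not a real default):

```
# Option 1: use a Docker-compatible engine via the standard socket path
# (e.g. balena-engine behind a /var/run/docker.sock symlink, as above).
sudo k3s agent --docker

# Option 2: point k3s at any external CRI-compatible runtime socket,
# skipping the embedded containerd entirely.
sudo k3s agent --container-runtime-endpoint unix:///run/mycri/cri.sock
```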

If anyone has issues using k3s with balena, please open a new issue.

Sorry for the spam, GH was acting up and not letting me comment earlier!

Do you have k3s running on armv6 and the zero?
