K3s: Use-case?

Created on 27 Sep 2018 · 10 comments · Source: k3s-io/k3s

Hi :) I can imagine several potential directions for this, but I'm curious of your actual use-case. Is there a concrete target here?

All 10 comments

The story is basically this: I love the architecture of Kubernetes. Professionally I've been writing and running orchestration systems for the past decade or so. Kubernetes is by far the best architecture I've found and has the highest-quality code. BUT, I do not like using or running Kubernetes. As an end-user-facing system, Kubernetes is pretty rough. This project is really intended to create a smaller "library" version of Kubernetes so that I can embed it into another system that presents a more tenable UX. Currently you can see how this is done in Rio standalone mode.

If you want to embed this in your application, you just need to use the cmd/server package (which is poorly named at the moment, because it's not a command): https://github.com/ibuildthecloud/k3s/blob/master/cmd/server/server.go. For the agent, use cmd/agent: https://github.com/ibuildthecloud/k3s/blob/master/cmd/agent/agent.go.

There's a lot more to come from this, but you probably won't see anything until around the DockerCon EU 2018 time frame.

I can imagine this being a lot lighter and simpler to run on Pis at home. I installed the k8s master on an RPi 3B, and memory-wise there's only 300 MB of headroom left. I will test to see what the difference is at some point this week.

I tried to run Nomad + Consul on some Rock64 boards, but the boards got super hot from Consul's constant health checking and seemed to overheat and shut off...

I might have to try this instead... I assume it runs fine on ARM boards such as a Pi like @hoshsadiq says?

I can fully understand its use cases, such as using the basic features of K8s (stripping out unneeded features to reduce resource consumption) in scenarios where hardware resources are scarce.

In fact, my team made an early attempt (based on v1.5.2) that eventually got the executable down to around 40 MB through functional tailoring and an all-in-one build. But we didn't go any further because of the rapid iteration of K8s features and the huge amount of merging effort that each major release would introduce.

But for specific scenarios that don't care about new K8s features, that is enough to solve the problem. And this project does it more thoroughly.

So in my mind, this is an awesome project and I like it! :+1:

Shouldn't it be possible to compile custom stripped-down versions of Kubernetes from the upstream codebase without having to fork out to an entirely separate rewrite? I admittedly don't know that much about Go, but I'd expect it to have something like C's #ifdef blocks that'd let you build only certain functionality into the final binary (like how NodeMCU builds modules into appropriate firmware configurations).

The closest thing to C's #ifdef is Go's build constraints. However, they operate on whole files, so I'm not sure how useful that is here: there are plenty of imports of and references to the excluded code scattered around, so it wouldn't work as-is.

Do I get this totally wrong?
The vanilla hyperkube image size is like:

gcr.io/google-containers/hyperkube   v1.13.0-beta.0      3f0b32e8fd75        3 hours ago         562MB

While in Rancher 2.1.1,

rancher/hyperkube        v1.12.0-rancher1    9a1178756ab9        5 weeks ago         992MB

And Rancher has removed a lot of features. I'd love to see GPU and other accelerators supported!

The use case for k3s going forward is basically k8s for small clusters (more info in the README). This project has gotten enough attention that it's going to become an officially supported Rancher Labs project.

@liyimeng rancher/hyperkube is used by RKE (Rancher's Kubernetes installer) and is larger than upstream hyperkube because it includes more utilities for iscsi, xfs, and the Azure CLI. The Dockerfile is at https://github.com/rancher/hyperkube/blob/v1.13/Dockerfile; you can see it is based on the upstream hyperkube. Having said that, RKE and k3s are completely different. GPU and accelerators are supported now in k3s, let me know if it doesn't work.

This project has gotten enough attention that it's going to become an officially supported Rancher Labs project.

That's great to hear! Thanks for all the effort.

Thanks @ibuildthecloud!
