Kubeadm: add support for OpenRC as an init system

Created on 3 Dec 2018 · 55 comments · Source: kubernetes/kubeadm

EDIT by neolit123:

the init system is already supported, yet kubeadm still assumes systemd in paths and messages:
see:
https://github.com/kubernetes/kubeadm/issues/1295#issuecomment-491443917

also see this workaround:
https://github.com/kubernetes/kubeadm/issues/1295#issuecomment-474318713


BUG REPORT

It looks like Alpine Linux's init system isn't supported by kubeadm.
kubeadm writes messages about this and continues on, but I assume it never configures a service,
so the kubelet never starts and kubeadm can't finish.

Would be awesome if we could host a Kubernetes cluster on Alpine.

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"archive", BuildDate:"2018-11-15T16:26:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"archive", BuildDate:"2018-11-15T16:26:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Cloud provider or hardware configuration:
    Hyper-V on Windows

  • OS (e.g. from /etc/os-release):
    NAME="Alpine Linux"
    ID=alpine
    VERSION_ID=3.8.1
    PRETTY_NAME="Alpine Linux v3.8"
    HOME_URL="http://alpinelinux.org"
    BUG_REPORT_URL="http://bugs.alpinelinux.org"

  • Kernel (e.g. uname -a):
    Linux kubemanager1 4.14.84-0-virt #1-Alpine SMP Thu Nov 29 10:58:53 UTC 2018 x86_64 Linux

  • Others:

What happened?

kubeadm init failed to start the kubelet and thus failed to run

What you expected to happen?

kubeadm to init correctly

How to reproduce it (as minimally and precisely as possible)?

kubeadm init

Anything else we need to know?

docker ps -a returns nothing. No container was ever started.

kubeadm init
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[WARNING Firewalld]: no supported init system detected, skipping checking for services
[WARNING HTTPProxy]: Connection to "https://10.1.1.20" uses proxy "http://10.1.1.1:3128". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://10.1.1.1:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] no supported init system detected, won't make sure the kubelet not running for a short period of time while setting up configuration for it.
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] no supported init system detected, won't make sure the kubelet is running properly.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubemanager1 localhost] and IPs [10.1.1.20 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubemanager1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubemanager1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.20]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

area/ecosystem help wanted kind/feature lifecycle/frozen priority/backlog sig/node

Most helpful comment

@xphoniex they have been merged
.: Francesco

All 55 comments

Please first fix the warnings that kubeadm is giving you, e.g. start by defining a proper value for the NO_PROXY environment variable, then make sure that all needed binaries are present on the system (tc, ebtables, ...), and then check what is in the kubelet's status and logs.

/assign

With all warnings fixed, apart from no supported init system being detected, it still has the same issue.

kubeadm init
I1204 10:42:06.894219 7292 version.go:236] remote version is much newer: v1.13.0; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[WARNING Firewalld]: no supported init system detected, skipping checking for services
[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] no supported init system detected, won't make sure the kubelet not running for a short period of time while setting up configuration for it.
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] no supported init system detected, won't make sure the kubelet is running properly.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

Also, not supporting the init system (OpenRC) is totally understandable; maybe an improvement here is just some documentation of the supported init systems (or just saying that it only supports systemd, if that's the case).

Can you share what is in the kubelet logs and in the docker containers (if any are running after the kubeadm error messages)?

Hi kad, as far as I can tell there is no kubelet process running, and no containers are ever started.

I know little about kubeadm's internals, but it appears it wants to configure a service at the beginning (e.g. systemd), can't find a supported init system, so it skips that, but later on waits for that init system to have started the kubelet.

ps
PID USER TIME COMMAND
1 root 0:00 /sbin/init
2 root 0:00 [kthreadd]
4 root 0:00 [kworker/0:0H]
5 root 0:00 [kworker/u64:0]
6 root 0:00 [mm_percpu_wq]
7 root 0:00 [ksoftirqd/0]
8 root 0:00 [rcu_sched]
9 root 0:00 [rcu_bh]
10 root 0:00 [migration/0]
11 root 0:00 [watchdog/0]
12 root 0:00 [cpuhp/0]
13 root 0:00 [kdevtmpfs]
14 root 0:00 [netns]
16 root 0:00 [oom_reaper]
174 root 0:00 [writeback]
175 root 0:00 [kworker/0:1]
176 root 0:00 [kcompactd0]
178 root 0:00 [ksmd]
179 root 0:00 [crypto]
180 root 0:00 [kintegrityd]
182 root 0:00 [kblockd]
445 root 0:00 [ata_sff]
454 root 0:00 [md]
460 root 0:00 [watchdogd]
585 root 0:00 [kauditd]
591 root 0:00 [kswapd0]
679 root 0:00 [kthrotld]
911 root 0:00 [hv_vmbus_con]
1182 root 0:00 [scsi_eh_0]
1255 root 0:00 [scsi_tmf_0]
1264 root 0:00 [kworker/u64:3]
1406 root 0:00 [jbd2/sda3-8]
1407 root 0:00 [ext4-rsv-conver]
1821 root 0:00 [hv_balloon]
1874 root 0:00 [ipv6_addrconf]
1965 root 0:00 [kworker/0:1H]
2235 root 0:00 /sbin/syslogd -Z
2289 root 0:00 /sbin/acpid
2318 chrony 0:00 /usr/sbin/chronyd -f /etc/chrony/chrony.conf
2345 root 0:00 /usr/sbin/crond -c /etc/crontabs
2447 root 0:06 /usr/bin/dockerd -p /run/docker.pid
2480 root 0:00 /usr/sbin/sshd
2485 root 0:00 /sbin/getty 38400 tty1
2486 root 0:00 /sbin/getty 38400 tty2
2489 root 0:00 /sbin/getty 38400 tty3
2491 root 0:00 /sbin/getty 38400 tty4
2495 root 0:00 /sbin/getty 38400 tty5
2498 root 0:00 /sbin/getty 38400 tty6
2507 root 0:00 sshd: root@pts/0
2509 root 0:00 -ash
2514 root 0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
2964 root 0:00 [kworker/0:0]
3064 root 0:00 sshd: root@pts/1
3066 root 0:00 -ash
3241 root 0:00 [kworker/u64:1]
3311 root 0:00 [kworker/0:2]
3314 root 0:00 /sbin/getty -L 115200 ttyS0 vt100
3315 root 0:00 ps

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

kubeadm only supports the systemd and wininit init systems. You could install the kubelet manually and delete the code in kubeadm that installs the kubelet configuration file; maybe that works.

is alpine linux a target for us?
we might rely on the community to patch it.

is alpine linux a target for us?
we might rely on the community to patch it.

Alpine Linux is a very popular target for containers - due to its extremely small size/install - also rather popular for Vagrant/EC2 - I'm surprised it's not supported. Grepped through the kubeadm code - seems like it's just messing with systemd in order to start the docker/kubernetes stuff.

Is there a document describing what kubeadm does / intends to do / depends upon from the init system?

Is there a document describing what kubeadm does / intends to do / depends upon from the init system

on Linux it uses systemd to start / stop the kubelet:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/kubelet.go

this document partially outlines the kubeadm / systemd interaction:
https://kubernetes.io/docs/setup/independent/kubelet-integration/#configure-kubelets-using-kubeadm

https://wiki.alpinelinux.org/wiki/Alpine_Linux_Init_System

Alpine Linux uses OpenRC for its init system.

this init system is not supported by core kubernetes.
in this case kubeadm uses what is available in the core.

[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services

this comes from here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/util/initsystem/initsystem.go#L178
and new init systems have to be added there.
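
For illustration, a minimal sketch of what an OpenRC implementor of that interface could look like, shelling out to OpenRC's rc-service and rc-update command-line tools (a sketch only, not the implementation that was eventually merged):

package initsystem

import (
	"fmt"
	"os/exec"
	"strings"
)

// OpenRCInitSystem drives services through OpenRC's rc-service and
// rc-update tools, as found on Alpine Linux.
type OpenRCInitSystem struct{}

// EnableCommand returns the command a user can run to enable the service.
func (o OpenRCInitSystem) EnableCommand(service string) string {
	return fmt.Sprintf("rc-update add %s default", service)
}

// ServiceStart starts the service.
func (o OpenRCInitSystem) ServiceStart(service string) error {
	return exec.Command("rc-service", service, "start").Run()
}

// ServiceStop stops the service.
func (o OpenRCInitSystem) ServiceStop(service string) error {
	return exec.Command("rc-service", service, "stop").Run()
}

// ServiceRestart restarts the service.
func (o OpenRCInitSystem) ServiceRestart(service string) error {
	return exec.Command("rc-service", service, "restart").Run()
}

// ServiceExists checks whether OpenRC knows the service at all;
// rc-service prints "does not exist" for unknown services.
func (o OpenRCInitSystem) ServiceExists(service string) bool {
	out, _ := exec.Command("rc-service", service, "status").CombinedOutput()
	return !strings.Contains(string(out), "does not exist")
}

// ServiceIsEnabled checks whether the service appears in any runlevel.
func (o OpenRCInitSystem) ServiceIsEnabled(service string) bool {
	out, _ := exec.Command("rc-update", "show").CombinedOutput()
	return strings.Contains(string(out), service)
}

// ServiceIsActive checks whether the service is currently running.
func (o OpenRCInitSystem) ServiceIsActive(service string) bool {
	out, _ := exec.Command("rc-service", service, "status").CombinedOutput()
	s := string(out)
	return !strings.Contains(s, "stopped") && !strings.Contains(s, "does not exist")
}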

/assign @timothysc @detiber
for judgement on this one.

Has someone already packaged the necessary binaries (and the required init scripts) for Alpine? If so, I don't see an issue with adding proper support for managing services correctly. If not, then I would consider that a prerequisite for this to proceed, since the management of init scripts/config isn't the responsibility of kubeadm.

There seems to be a single kubernetes package here.

@rosti Looking at the contents of that package, it basically looks like a dump of multiple k8s binaries and does not include an init script or the config required to be driven by kubeadm.

I'm normally a lurker, but there's industry interest in Kubernetes on the Edge using ARM, and various bare-metal options are being investigated, with Alpine in the mix of OS choices.

I think OpenRC support in kubeadm is kind of a must-have; I'm not certain Alpine's community is going to put forward a patch that 'fixes' something so fundamental to the OS's claim to fame.

I'm normally a lurker, but there's industry interest in Kubernetes on the Edge using ARM, and various bare-metal options are being investigated, with Alpine in the mix of OS choices.

I think OpenRC support in kubeadm is kind of a must-have; I'm not certain Alpine's community is going to put forward a patch that 'fixes' something so fundamental to the OS's claim to fame.

I strongly suspect you are correct - with the memory/image size they're targeting - I really can't see them going the (no disrespect to) systemd route.

Is there a document describing what kubeadm does / intends to do / depends upon from the init system

on Linux it uses systemd to start / stop the kubelet:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/kubelet.go

this document partially outlines the kubeadm / systemd interaction:
https://kubernetes.io/docs/setup/independent/kubelet-integration/#configure-kubelets-using-kubeadm

https://wiki.alpinelinux.org/wiki/Alpine_Linux_Init_System

Alpine Linux uses OpenRC for its init system.

this init system is not supported by core kubernetes.
in this case kubeadm uses what is available in the core.

[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services

this comes from here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/util/initsystem/initsystem.go#L178
and new init systems have to be added there.

At first glance, this actually looks quite straightforward. I'm not a Go aficionado by any means, but it appears to just be making direct calls to a shell. Adding another implementor of that InitSystem interface that works for OpenRC, plus the openrc service script, would probably do it.

EDIT:
Diving in, getting Kubernetes onto Alpine-ARM is going to require some work. Running the kubelet manually is possible, but after significant time spent debugging I suspect there's a networking issue afoot, as the apiserver fails to sync with etcd when doing a basic init with kubeadm.

@detiber you are correct there. But some package is better than no package. This means that we have a maintainer we can ping with a specific proposal.

@bcdurden

this comes from here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/util/initsystem/initsystem.go#L178
and new init systems have to be added there.

At first glance, this actually looks quite straightforward. I'm not a Go aficionado by any means, but it appears to just be making direct calls to a shell. Adding another implementor of that InitSystem interface that works for OpenRC, plus the openrc service script, would probably do it.

yes, contributions are welcome - i.e. this effort is community driven. whoever sends a PR for that please ping me.

RE: packages.

ping @fcolista
as per: https://pkgs.alpinelinux.org/package/edge/testing/x86_64/kubernetes

are there plans to continue to maintain this package?
it seems to lack init system integration and only include binaries.

we are having a discussion about the potential support for Alpine Linux in kubeadm.
the main blocker here is not really on the kubeadm side but rather core kubernetes not having OpenRC support (see comment above).

RE: packages.

ping @fcolista
as per: https://pkgs.alpinelinux.org/package/edge/testing/x86_64/kubernetes

are there plans to continue to maintain this package?
it seems to lack init system integration and only include binaries.

we are having a discussion about the potential support for Alpine Linux in kubeadm.
the main blocker here is not really on the kubeadm side but rather core kubernetes not having OpenRC support (see comment above).

FYI, there's a corresponding kubernetes-cni package as well, done by the same contributor, but it has the same problems (no init script or setup of the artifacts in the apk).

RE: packages.

ping @fcolista
as per: https://pkgs.alpinelinux.org/package/edge/testing/x86_64/kubernetes

are there plans to continue to maintain this package?
it seems to lack init system integration and only include binaries.

we are having a discussion about the potential support for Alpine Linux in kubeadm.
the main blocker here is not really on the kubeadm side but rather core kubernetes not having OpenRC support (see comment above).

@neolit123, @bcdurden hi.
Feel free to open a PR to https://github.com/alpinelinux/aports/ with an openrc-run init.
I've tried to build kubernetes 1.13.2 but the build goes out-of-memory at the moment.
I don't use kubernetes, so if you have any hint on how to quickly test a possible init that would be much appreciated; I can work on it. At the same time, the package is in "testing" exactly because it is not ready for production.
Thanks!

@fcolista I've got one, but it's 'hacky'. Something about Alpine on ARM64 with the kubelet either blocks etcd from completing or doesn't wait long enough. I have to start the kubelet, kill the process after about 20-30s, and then start it back up again. It doesn't appear to be waiting long enough to allow the apiserver to come online and create/sync resources (takes about 2 minutes total). This hackiness makes using kubeadm init and join pretty dicey. Haven't had a chance to track down the issue. This is Alpine 3.8 on ARM64 running on an RPi 3B+.

I've also found another issue with kubeadm join having issues with BusyBox's 'find' binary missing the -printf flag.

EDIT: Forgot to add, this is using the Kubernetes general release binaries for ARM64. I'm definitely not compiling it (Go is only an Edge package for 3.8 in ARM64 land)

@bcdurden that's very interesting. Could be that etcd is not spinning up quickly enough and the API server goes into a crash loop back-off (similar to kubernetes/kubernetes#72984).
Can we see the result of docker ps -a and some of the kubelet logs?

@bcdurden that's very interesting. Could be that etcd is not spinning up quickly enough and the API server goes into a crash loop back-off (similar to kubernetes/kubernetes#72984).
Can we see the result of docker ps -a and some of the kubelet logs?

@rosti It's quite possible, though I do recall that etcd appeared to be running and actively listening for requests - it's just not receiving or processing them? There's a point where the apiserver says 'Loading controllers' and lists the various types it supports, etc., and then it never gets to the point where it properly syncs resources. Eventually it presents a failure message describing it being unable to sync resources. The kubelet then restarts the apiserver and tries again (an endless loop at this point).

I'm currently on business travel, but I will try and get the logs in here as soon as I can.

Until this gets resolved, I have created this https://github.com/oz123/systemctl-shim/ to translate systemctl commands to openrc.
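
To sketch the idea behind such a shim: intercept the systemctl verbs that callers issue and re-issue them via OpenRC's rc-service. The Go program below is a hypothetical rendering of that translation, not the actual systemctl-shim code; a real shim would also need to map verbs like "enable" onto rc-update add:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Hypothetical shim: installed in place of systemctl, it translates
// "systemctl <verb> <service>" into "rc-service <service> <verb>".
func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: systemctl <start|stop|restart|status> <service>")
		os.Exit(1)
	}
	verb, service := os.Args[1], os.Args[2]

	// rc-service takes the service name first, then the action.
	cmd := exec.Command("rc-service", service, verb)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}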

we are currently in code freeze for 1.14, but the https://github.com/kubernetes/kubernetes/pull/73101 PR seems like a clean merge to me. it's blocked on pkg/util/initsystem/ OWNERS and hopefully we can merge it fairly soon.

^ pinged people about the above PR.

@oz123 @btrepp et al,
LMK if merging https://github.com/kubernetes/kubernetes/pull/73101 unblocks alpine users.

@neolit123 thanks for taking care. I'm going to continue the work now. This is awesome!

@oz123 could you please outline what else has to be done?

@neolit123 one needs to check that failures on OpenRC don't output the message:

If you are on a systemd-powered system, you can try ....

see for example kubeletFailTempl.

Also, kubelet flags are written to places in /etc/ which don't make sense for OpenRC and are only relevant for systemd, so this needs to be modular too.
I don't think I can submit a large merge request to solve all the issues at once; I intend to split the work into about 2 to 3 more PRs. So the issue should be re-opened, or we can track this in another issue.

ok, thanks for refreshing my memory. i'm starting to remember what had to be done.

Also, kubelet flags are written to in /etc/ places which don't make sense for OpenRC and are only relevant for systemd. So this needs to be modular too ...

preferably the changes should be well abstracted at runtime and not very intrusive to the already existing ways of handling the init system. that's mainly due to systemd being the main target.

i'd start with a utility function that detects openrc, possibly under cmd/kubeadm/app/util/initsystem.
it should have a unit test and be a no-op on windows.

just ping me on any PRs.

@neolit123 detection of the init system is already done properly in
cmd/kubeadm/app/phases/kubelet/kubelet.go, so I don't think there is a need for another file.

hm, it will abstract the service start/stop process, but kubeadm would still read/write some files such as the dynamic drop-in file under /etc. the rest of the code would need to know about openrc too:
e.g.
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/flags.go#L120

The kubelet drop-in file for systemd
https://kubernetes.io/docs/setup/independent/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
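
One hypothetical way to make that modular is to have the write path ask the detected init system where the kubelet flags file belongs, instead of hard-coding the systemd location (the package and function below are illustrative only, not actual kubeadm API):

package kubelet

// KubeletEnvFilePath is a hypothetical hook: each init system reports
// where the generated kubelet flags file should live.
func KubeletEnvFilePath(initSystemName string) string {
	switch initSystemName {
	case "openrc":
		// OpenRC services source per-service configuration from /etc/conf.d.
		return "/etc/conf.d/kubelet"
	default:
		// The systemd drop-in unit sources this file (see the doc link above).
		return "/var/lib/kubelet/kubeadm-flags.env"
	}
}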

Exactly - I said above that kubeadm writes stuff on the file system in places which are relevant only for systemd.
Ok then, I will do the work needed as you suggested in cmd/kubeadm/app/util/initsystem.

https://github.com/oz123/kubernetes/blob/2a40ef473f906b6a165690480dc000b9e5560258/pkg/util/initsystem/initsystem.go
^ if this included a method to return the enum type of the detected init system it would have been even cleaner, but as you saw, merging PRs there might take a while...
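
For illustration, such a method could look like the following (type and method names are hypothetical, not merged code):

package initsystem

// InitSystemType is a hypothetical enum identifying the detected init system.
type InitSystemType int

const (
	InitSystemUnknown InitSystemType = iota
	InitSystemSystemd
	InitSystemOpenRC
	InitSystemWindows
)

// OpenRCInitSystem mirrors the implementor type sketched earlier in this thread.
type OpenRCInitSystem struct{}

// Type lets callers branch on the detected init system without string
// comparisons, e.g. when deciding where to write kubelet flag files.
func (o OpenRCInitSystem) Type() InitSystemType {
	return InitSystemOpenRC
}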

Should this be marked lifecycle/active or unmarked help wanted? It doesn't seem ready for anyone to pick up, since it's already being worked on.

marked as active, as @oz123 mentioned that he can look at follow-up changes.

Since the above PR mentions partial support, what is currently missing for full support, @oz123?

@mrueg what's missing is almost everything we discussed in this thread. I currently lack the time to complete the work; if someone would like to sponsor it, feel free to contact me. If another person wants to take over this work, I am also fine with that.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/lifecycle frozen

this is a pending openrc issue: https://github.com/kubernetes/kubeadm/issues/1986

unable to run on alpine as well :(

I am using k8s on Alpine (a single cluster running on x86_64, armv7 and aarch64). As a workaround, when joining a node to a cluster I manually restart the kubelet when it fails; it seems to only need to be done once.

I managed to start a cluster the other day with some help from @neolit123.

Furthermore, to complete this I need some cooperation from @fcolista to fix the Alpine side of things. I already contacted you but got no response.

Otherwise everything from kubeadm side works fine now and this issue can be closed once my PR is merged.

@xphoniex I'm available to help.
I'd prefer that your patch is applied upstream... at the moment the Alpine version is stuck at 1.17.3, since 1.18 does not build with Go 1.13.
Let me know what kind of help/cooperation you need from my side.
Thanks!

@xphoniex they have been merged
.: Francesco

we can close this too now that https://github.com/kubernetes/kubernetes/pull/90892 is merged, @neolit123, yes?

i think this is the final remaining item:
https://github.com/kubernetes/kubeadm/issues/1295#issuecomment-491446853

Alpine already uses /etc/ for services, so we kept the config files there too.

We only had to update the flags in kubelet.confd and kubelet.initd in the kubernetes package to let OpenRC know where the rest of the config files were; you can see the diff here.

Notice, for example, that we set --cni-bin-dir=/usr/share/cni-plugins/bin as per Francesco's suggestion, whereas on other distros we expect the binaries to be in /opt/cni/bin.

understood, this is great news and i'm going to close this ticket (finally).
/close

a couple of FYI WRT service files:

  • an update to the 10-kubeadm.conf file is imminent at this point, yet it's not clear when; maybe in +3, maybe in +5 releases.
    the kubelet is removing all its flags in favor of using configuration file values via --config. when this happens we are going to stop sourcing kubeadm-flags.env and /etc/default/kubelet in 10-kubeadm.conf, and kubeadm will stop generating kubeadm-flags.env at runtime.

  • dockershim, which is the CRI implementation for docker, is moving out of the kubelet source code into a separate git repository and a separate service, so docker users will have to run it separately before starting the kubelet service. it's unclear what the userbase for docker on alpine is, but overall docker usage among kubeadm users is 70% as per a survey we did a couple of years ago.

@neolit123: Closing this issue.

In response to this:

understood, this is great news and i'm going to close this ticket (finally).
/close

a couple of FYI WRT service files:

  • an update to the 10-kubeadm.conf file is imminent at this point, yet it's not clear when; maybe in +3, maybe in +5 releases.
    the kubelet is removing all its flags in favor of using configuration file values via --config. when this happens we are going to stop sourcing kubeadm-flags.env and /etc/default/kubelet in 10-kubeadm.conf, and kubeadm will stop generating kubeadm-flags.env at runtime.

  • dockershim, which is the CRI implementation for docker, is moving out of the kubelet source code into a separate git repository and a separate service, so docker users will have to run it separately before starting the kubelet service. it's unclear what the userbase for docker on alpine is, but overall docker usage among kubeadm users is 70% as per a survey we did a couple of years ago.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
