kubeadm assumes docker in its pre-flight checks

Created on 31 May 2017 · 12 comments · Source: kubernetes/kubeadm

kubeadm assumes docker in its pre-flight checks, but we're currently working on integrating CRI-O in kubeadm init. I'd like to have an option to choose which container runtime to check against.
Any ideas? /cc @luxas

priority/backlog

Most helpful comment

Please add cri-o to the list.

All 12 comments

@runcom This is very hard, I know. I've been thinking a bit about how to deal with this, but haven't proposed anything concrete yet.

What I'd like to see is a way to do lifecycle tasks _in CRI_, and just let kubeadm delegate to that (auto-detect or not TBD). I don't see us building in specific assumptions for every possible provider, docker is still an exception generally in Kubernetes. I guess that will change with time as other solutions mature.

We assume docker in kubeadm reset as well.

Any movement on this? CRI-O is getting more mature, and we would like to point people to use kubeadm on it.

The check is here: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/reset.go#L106 if folks want to propose an alternative to vet, PRs are welcome; just tag me for review.

I think to make this work, we'd need to support rkt as an alternative container runtime. We'd probably need a few new config flags:

.ContainerRuntime=docker|rkt
.ContainerRuntimeExtraArgs=<optional>

ContainerRuntimeExtraArgs would allow users to set additional flags like --rkt-api-endpoint, --rkt-path, and --rkt-stage1-image on the kubelet.

We'd also need to change the preflight and teardown validation checks, as Tim mentioned, to take the different CRI binaries into account.
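As a concrete illustration, the proposed fields might look something like this in the kubeadm config file. Note these fields are only the proposal above; they do not exist in kubeadm today, and the names and values are hypothetical:

```yaml
# Hypothetical kubeadm configuration sketch. ContainerRuntime and
# ContainerRuntimeExtraArgs are the fields proposed in this comment;
# they are NOT part of any released kubeadm API.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
containerRuntime: rkt
containerRuntimeExtraArgs:
  rkt-api-endpoint: "localhost:15441"   # example value, illustrative only
  rkt-path: "/usr/bin/rkt"
```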

What do folks think? Would this be a nice-to-have for 1.9?

Please add cri-o to the list.

@rhatdan Yep, will do.

I'm torn. I don't want to start building in "support" for all possible CRI runtimes that might exist out there.
That is just bloating the kubeadm code. The kubeadm CLI binary doesn't touch the kubelet args as of v1.8.
_Might_ change with v1.9.

Note that this is all preflight checks. If docker is not present on the host, we should just skip the docker preflight checks.

The docker-check is here: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/preflight/checks.go#L645

The "remove all running containers" code is here: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/reset.go#L108-L116

We'd have to figure out a generic way to perform these two tasks.

In order for me to integrate more with cri-o, cri-containerd, etc., I'd require continuously running e2e tests. We have that for docker.
Also, I think the respective teams wanting this feature should step up and have someone work with the kubeadm team to make this happen; that's kind of what @timothysc was saying above: "tag us for review", not "we'll write this code for you" ;)

Fair enough?

Works for me. We could hopefully work on that in the future. But for now I would be happy if kubeadm did not fail when docker is not installed and running.

Does it really fail right now? It's a warning, right? https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/preflight/checks.go#L100

I'm happy to make the warning-if-not-exist behavior tunable per-service, so kubeadm won't warn if the docker service doesn't exist, but it will if the kubelet service isn't present.

That is like a five-line change I'm definitely happy with, and it could even be backported to v1.8.x.
Likewise, we could say "You don't seem to use Docker; please tear down running containers (if any) using your container runtime of choice" instead of the current message: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/reset.go#L115
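As a sketch, a tunable per-service check could look like the following. The type and field names here are illustrative, not kubeadm's actual preflight types, and in practice whether a service exists would be queried from the init system rather than passed in:

```go
package main

import "fmt"

// ServiceCheck sketches how the preflight service check could be made
// tunable per service: the kubelet service should warn when missing,
// while docker could be configured not to. Illustrative names only.
type ServiceCheck struct {
	Service        string
	WarnIfNotExist bool // if false, a missing service is silently ignored
}

// Check returns a warning when the service is missing and this check is
// configured to warn about it. serviceExists is a stand-in for an init
// system query (e.g. systemd) in a real implementation.
func (c ServiceCheck) Check(serviceExists bool) (warnings []string) {
	if !serviceExists && c.WarnIfNotExist {
		warnings = append(warnings, fmt.Sprintf("%s service does not exist", c.Service))
	}
	return warnings
}

func main() {
	docker := ServiceCheck{Service: "docker", WarnIfNotExist: false}
	kubelet := ServiceCheck{Service: "kubelet", WarnIfNotExist: true}

	fmt.Println(len(docker.Check(false)))  // docker missing: no warning -> 0
	fmt.Println(len(kubelet.Check(false))) // kubelet missing: one warning -> 1
}
```

With this shape, the docker check becomes a no-op on docker-less hosts while the kubelet check keeps warning, which matches the behavior discussed above.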

@luxas if the kubelet is available and can return information about its current parameters (like the CRI socket location) via its local API, could kubeadm then use the native CRI API to list/remove all containers? That way it would not be docker-specific, but would work with any CRI-compliant runtime.
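A runtime-agnostic reset step along these lines could be sketched as follows. The interface and the fake runtime are purely illustrative stand-ins for a real CRI client (the actual CRI API is gRPC-based and richer); none of these names exist in kubeadm:

```go
package main

import "fmt"

// ContainerRuntime is a minimal, illustrative abstraction over what the
// CRI RuntimeService offers for this use case.
type ContainerRuntime interface {
	ListContainers() ([]string, error)
	RemoveContainer(id string) error
}

// resetContainers sketches a runtime-agnostic "kubeadm reset" step: list
// and remove all containers through whatever runtime is configured,
// instead of shelling out to docker.
func resetContainers(rt ContainerRuntime) ([]string, error) {
	ids, err := rt.ListContainers()
	if err != nil {
		return nil, err
	}
	removed := []string{}
	for _, id := range ids {
		if err := rt.RemoveContainer(id); err != nil {
			return removed, err
		}
		removed = append(removed, id)
	}
	return removed, nil
}

// fakeRuntime stands in for a CRI-backed implementation for demonstration.
type fakeRuntime struct{ containers []string }

func (f *fakeRuntime) ListContainers() ([]string, error) {
	// Return a copy so callers hold a stable snapshot.
	return append([]string(nil), f.containers...), nil
}

func (f *fakeRuntime) RemoveContainer(id string) error {
	kept := f.containers[:0]
	for _, c := range f.containers {
		if c != id {
			kept = append(kept, c)
		}
	}
	f.containers = kept
	return nil
}

func main() {
	rt := &fakeRuntime{containers: []string{"a", "b"}}
	removed, err := resetContainers(rt)
	fmt.Println(len(removed), err, len(rt.containers)) // prints: 2 <nil> 0
}
```

Plugging in an implementation backed by the CRI socket reported by the kubelet would make reset work the same way for docker, CRI-O, or any other compliant runtime.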

@rhatdan @runcom Does any of you have spare cycles to improve the situation here?

I'll start with https://github.com/kubernetes/kubeadm/issues/508 then, go from there, and fix up all the dockerisms :)
