What happened:
I'm trying to create clusters using kind 0.5.1 with different node images, using the image list from https://github.com/kubernetes-sigs/kind/releases/tag/v0.5.0, as there is no such list for 0.5.1. This works fine for kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157 and kindest/node:v1.14.6@sha256:464a43f5cf6ad442f100b0ca881a3acae37af069d5f96849c1d06ced2870888d, but fails for the other images.
The output of the command line is:
> kind create cluster --name kind-11 --image kindest/node:v1.11.10@sha256:bb22258625199ba5e47fb17a8a8a7601e536cd03456b42c1ee32672302b1f909 --retain
Creating cluster "kind-11" ...
 ✓ Ensuring node image (kindest/node:v1.11.10) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✗ Starting control-plane 🕹️
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
What you expected to happen:
That the clusters get created cleanly.
How to reproduce it (as minimally and precisely as possible):
kind create cluster --name kind-11 --image kindest/node:v1.11.10@sha256:bb22258625199ba5e47fb17a8a8a7601e536cd03456b42c1ee32672302b1f909 --retain
kind create cluster --name kind-12 --image kindest/node:v1.12.10@sha256:e43003c6714cc5a9ba7cf1137df3a3b52ada5c3f2c77f8c94a4d73c82b64f6f3 --retain
kind create cluster --name kind-13 --image kindest/node:v1.13.10@sha256:2f5f882a6d0527a2284d29042f3a6a07402e1699d792d0d5a9b9a48ef155fa2a --retain
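Note: since --retain keeps the node containers around after a failed create for debugging, each attempt presumably needs to be cleaned up before retrying, e.g.:
kind delete cluster --name kind-11
kind delete cluster --name kind-12
kind delete cluster --name kind-13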
Anything else we need to know?:
The logs for a failed cluster creation:
kind-13.zip
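(Presumably gathered with kind's log export, e.g. kind export logs --name kind-13, which should also work against a cluster created with --retain after a failed init.)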
Environment:
kind version: v0.5.1
kubectl version: v1.15.3
docker info: 17.05.0-ce
/etc/os-release: RHEL 7.6
/assign
FWIW on a linux box (~debian testing):
$ kind create cluster --name kind-11 --image kindest/node:v1.11.10@sha256:bb22258625199ba5e47fb17a8a8a7601e536cd03456b42c1ee32672302b1f909 --retain
Creating cluster "kind-11" ...
 ✓ Ensuring node image (kindest/node:v1.11.10) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind-11")"
kubectl cluster-info
This looks like https://github.com/kubernetes/kubernetes/issues/43856 (from the logs you uploaded); I'm not sure what changed in 1.14+ yet, though.
same result on RHEL 7.7
@Hendrik-H have you tried using a newer version of Docker?
From the logs it seems you are using Server Version: 17.05.0-ce
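If it helps to cross-check locally, the server version can be read directly with something like:
docker version --format '{{.Server.Version}}'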
no, I can only test with the version that I get with RHEL
Kubernetes 1.13 is out of support upstream now, and I'm pretty sure this Docker version isn't supported upstream either. I don't have RHEL, but hopefully the next RHEL release will ship something newer, or we'll figure out podman.
Given that upstream supported Kubernetes versions work, I don't think we're likely to pursue this versus other work.