Kind: node-labels not being applied on the control-plane

Created on 10 Jul 2020 · 8 comments · Source: kubernetes-sigs/kind

What happened:
When using node-labels (together with extraPortMappings) to set up ingress-nginx, the targeted control-plane node is not being labelled.

What you expected to happen:

I expected to see the label ingress-ready=true on one of my control-plane nodes.

How to reproduce it (as minimally and precisely as possible):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
  kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
  - containerPort: 30443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
- role: worker
- role: worker
docker ps | grep 443
b177a58b6f6c        kindest/node:v1.18.2           "/usr/local/bin/entr…"   9 minutes ago       Up 8 minutes        127.0.0.1:41895->6443/tcp, 0.0.0.0:80->30080/tcp, 0.0.0.0:443->30443/tcp   hprudent-control-plane3
Name:               hprudent-control-plane3
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hprudent-control-plane3
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 10 Jul 2020 17:28:52 +0200
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  hprudent-control-plane3
  AcquireTime:     <unset>
  RenewTime:       Fri, 10 Jul 2020 17:34:48 +0200
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 10 Jul 2020 17:34:32 +0200   Fri, 10 Jul 2020 17:28:52 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 10 Jul 2020 17:34:32 +0200   Fri, 10 Jul 2020 17:28:52 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 10 Jul 2020 17:34:32 +0200   Fri, 10 Jul 2020 17:28:52 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 10 Jul 2020 17:34:32 +0200   Fri, 10 Jul 2020 17:29:32 +0200   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.3
  Hostname:    hprudent-control-plane3
Capacity:
  cpu:                6
  ephemeral-storage:  1442953720Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             20501776Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  1442953720Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             20501776Ki
  pods:               110
System Info:
  Machine ID:                 b8136a6528854b4485a85e973f2dd8b0
  System UUID:                d80db587-2ef1-47fd-b7e7-e53352b92746
  Boot ID:                    e2e50ea9-0e00-4e1a-9c2a-47077d6fc5f4
  Kernel Version:             5.4.0-40-generic
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.3.3-14-g449e9269
  Kubelet Version:            v1.18.2
  Kube-Proxy Version:         v1.18.2
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-hprudent-control-plane3                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
  kube-system                 kindnet-vp4r9                                      100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m5s
  kube-system                 kube-apiserver-hprudent-control-plane3             250m (4%)     0 (0%)      0 (0%)           0 (0%)         5m3s
  kube-system                 kube-controller-manager-hprudent-control-plane3    200m (3%)     0 (0%)      0 (0%)           0 (0%)         5m12s
  kube-system                 kube-proxy-hjlwj                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
  kube-system                 kube-scheduler-hprudent-control-plane3             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m52s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (10%)  100m (1%)
  memory             50Mi (0%)   50Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason    Age    From                                 Message
  ----    ------    ----   ----                                 -------
  Normal  Starting  5m40s  kube-proxy, hprudent-control-plane3  Starting kube-proxy.
kubectl get nodes --show-labels
NAME                      STATUS   ROLES    AGE     VERSION   LABELS
hprudent-control-plane    Ready    master   11m     v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane,kubernetes.io/os=linux,node-role.kubernetes.io/master=
hprudent-control-plane2   Ready    master   10m     v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
hprudent-control-plane3   Ready    master   10m     v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane3,kubernetes.io/os=linux,node-role.kubernetes.io/master=
hprudent-worker           Ready    <none>   9m41s   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-worker,kubernetes.io/os=linux
hprudent-worker2          Ready    <none>   9m41s   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-worker2,kubernetes.io/os=linux
hprudent-worker3          Ready    <none>   9m45s   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-worker3,kubernetes.io/os=linux
hprudent-worker4          Ready    <none>   9m44s   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-worker4,kubernetes.io/os=linux

Anything else we need to know?:

No

Environment:

  • kind version: (use kind version):
kind version 0.8.1
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
Client:
 Debug Mode: false

Server:
 Containers: 9
  Running: 8
  Paused: 0
  Stopped: 1
 Images: 3
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 
 runc version: 
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-40-generic
 Operating System: Ubuntu 20.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 19.55GiB
 Name: vmi415900.contaboserver.net
 ID: B2K5:O54T:ASVO:4DCJ:RHBZ:EY6R:IBG5:QWCP:CK5S:AQR5:5SSX:XX26
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
kind/support

All 8 comments

/assign @amwat

Aside: that's a lot of workers. Note that unlike a "real" cluster these all share the same resources / kernel, so typically you don't need more than one node, except perhaps up to 3 nodes to test rolling behavior of Kubernetes internals ...

And actually the problem here is that node #3 is not going to be the one doing "init"; it will be joining.

Make this your first node, or switch to join configuration.
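
For what it's worth, the join-side variant would look roughly like the sketch below: it keeps the patch on the third control-plane but uses JoinConfiguration, which is what kubeadm applies on nodes that join an existing cluster rather than running init. This is untested against this exact setup and just follows the same kubeletExtraArgs pattern:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
  kubeadmConfigPatches:
    # JoinConfiguration applies to every node except the first control-plane,
    # which is the one that runs kubeadm init
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
  - containerPort: 30443
    hostPort: 443
    protocol: TCP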

@BenTheElder is there a working YAML sample?

Moving it to the first node fixed it.

Such information should be in the documentation to avoid tickets like this one, as it mentions "a" node, not the "first" node.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
  - containerPort: 30443
    hostPort: 443
    protocol: TCP
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
- role: worker
kubectl get nodes --show-labels
NAME                      STATUS     ROLES    AGE     VERSION   LABELS
hprudent-control-plane    Ready      master   2m41s   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress-ready=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane,kubernetes.io/os=linux,node-role.kubernetes.io/master=
hprudent-control-plane2   Ready      master   2m      v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
hprudent-control-plane3   Ready      master   62s     v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hprudent-control-plane3,kubernetes.io/os=linux,node-role.kubernetes.io/master=

That's fair, though this technique does work on any single node (it can't be the same on all of them because of the port forward); it's just that if that node happens to be the first node (#1 control plane) it has to be InitConfiguration, and the guide is written with end-users in mind.

The reason we support multi-node is to cover some internal Kubernetes testing needs; otherwise we pretty much expect a single node for the reasons outlined above.

I don't see where the documentation says this:

as it mentions "a" node, not the "first" node.

The bit about InitConfiguration / JoinConfiguration is kubeadm's APIs leaking through.

In the future, node labels will be abstracted into a first-class kind option.
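
For reference, newer kind releases (v0.12.0 and later, if I recall correctly) expose this as a per-node labels field in the v1alpha4 config. A minimal sketch, assuming a kind version that supports that field:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # per-node labels field; not available in the kind v0.8.1 used in this issue
  labels:
    ingress-ready: "true"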

FYI it is documented. https://kind.sigs.k8s.io/docs/user/configuration/#kubeadm-config-patches
