
kubectl label node ${node} node-role.kubernetes.io/worker=worker
I know that this command works, but is there any way to specify this label as a parameter in the k3s agent start command?
@colben you may use k3s agent --kublet-arg node-labels=node-role.kubernetes.io/worker=worker if you wish for the role label to be present at node registration or you may use k3s agent --node-label node-role.kubernetes.io/worker=worker if you wish the role label to be present at registration as well as applied each time the kublet starts up (potentially reverting a change to the node's role label applied at runtime and recorded by the cluster).
Looks like this doesn't work; it produces a fatal error like:
F0114 10:32:04.072486 11355 server.go:186] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/worker]
--node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)
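The restriction in that error can be sketched as a small check. This is a simplified approximation based only on the allowed prefixes and the specifically allowed set quoted in the error message above, not on the actual kubelet source:

```shell
#!/bin/sh
# Sketch of the kubelet's self-labeling restriction for --node-labels.
# Assumption: the allowed prefixes/set below mirror the error message,
# not the real kubelet implementation.
is_allowed_self_label() {
  case "$1" in
    # allowed prefixes
    kubelet.kubernetes.io/*|node.kubernetes.io/*) return 0 ;;
    # specifically allowed set
    beta.kubernetes.io/arch|beta.kubernetes.io/instance-type|beta.kubernetes.io/os) return 0 ;;
    failure-domain.beta.kubernetes.io/region|failure-domain.beta.kubernetes.io/zone) return 0 ;;
    kubernetes.io/arch|kubernetes.io/hostname|kubernetes.io/os) return 0 ;;
    topology.kubernetes.io/region|topology.kubernetes.io/zone) return 0 ;;
    # anything else in the kubernetes.io or k8s.io namespaces is rejected
    *kubernetes.io/*|*k8s.io/*) return 1 ;;
    # labels outside those namespaces are unrestricted
    *) return 0 ;;
  esac
}

is_allowed_self_label "node-role.kubernetes.io/worker" && echo allowed || echo rejected  # prints "rejected"
```

So `node-role.kubernetes.io/worker` is rejected because it sits in the `kubernetes.io` namespace without matching an allowed prefix or the allowed set.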
Love to apply magic labels. Thanks for double-checking me @erikwilson !
you may use k3s agent --kublet-arg node-labels=node-role.kubernetes.io/worker=worker if you wish for the role label to be present at node registration or you may use k3s agent --node-label node-role.kubernetes.io/worker=worker if you wish the role label to be present at registration as well as applied each time the kublet starts up (potentially reverting a change to the node's role label applied at runtime and recorded by the cluster).
Typo here: kublet should be kubelet. But even with that fixed, it still has problems.
The --node-label='node-role.kubernetes.io/worker=worker' option doesn't work either:
Oct 09 11:09:09 worker k3s[6927]: --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)
Oct 09 11:09:09 worker k3s[6927]: F1009 11:09:09.488727 6927 server.go:187] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/worker]
Yes, setting labels in the node-role.kubernetes.io namespace through the kubelet has been restricted upstream, so nodes can no longer assign their own role at registration.
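Given that restriction, the practical pattern is to pass only labels outside the restricted namespaces at startup, and apply the role label out-of-band once the node has joined. A sketch (the `my.example.com/role` key is an arbitrary example chosen for illustration, not a k3s convention):

```shell
# Labels outside the kubernetes.io / k8s.io namespaces can be set
# at registration time via the k3s agent:
k3s agent --node-label my.example.com/role=worker

# The node-role.kubernetes.io label itself must be applied with kubectl
# after the node joins the cluster:
kubectl label node ${node} node-role.kubernetes.io/worker=worker
```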