Flannel: pod cidr not assigned

Created on 19 May 2017 · 12 Comments · Source: coreos/flannel

Seeing something strange: if I delete the node from k8s and reboot it, flannel fails to fully start and gets stuck in a loop:

E0519 00:37:28.550351 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned
E0519 00:37:29.551107 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned
E0519 00:37:30.551851 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned
E0519 00:37:31.552629 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned
E0519 00:37:32.553437 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned
E0519 00:37:33.554245 1 network.go:102] failed to register network: failed to acquire lease: node "k8s-test-2.novalocal" pod cidr not assigned

I don't see the expected flannel annotations being added to the node either. All the other nodes in the system seem to work, though. This is with the newest self-hosted flannel in k8s 1.6.3.

kind/support

Most helpful comment

Try the below; it may be useful.
Edit /etc/kubernetes/manifests/kube-controller-manager.yaml and, under command, add
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
then reload the kubelet.
My situation: I updated Kubernetes 1.7.1 to 1.7.4, and the CIDR parameters in /etc/kubernetes/manifests were lost.

All 12 comments

What do you mean by

if I delete the node from k8s and reboot it

Flannel needs to fetch the podCidr from the node before it can start.
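As a quick check (node name taken from the error above; adjust for your cluster), something like this shows whether the node has a podCIDR and whether the flannel annotations were written back:

kubectl get node k8s-test-2.novalocal -o jsonpath='{.spec.podCIDR}'
kubectl get node k8s-test-2.novalocal -o jsonpath='{.metadata.annotations}'

If the first command prints nothing, the controller-manager never allocated a range for that node, and flannel will keep retrying with the error shown above.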

For some reason that one node isn't working. I ran:
kubectl delete node k8s-test-2.novalocal

then rebooted the node to ensure it wasn't carrying any state in k8s, so it would re-register and I could see whether it would get a new pod network configured to fix the issue. It didn't help.

Doing a diff between a working node and the non-working one shows:
-Annotations:    node.alpha.kubernetes.io/ttl=0
+                kubernetes.io/hostname=k8s-test-4.novalocal
+Annotations:    flannel.alpha.coreos.com/backend-data={"VtepMAC":"e2:a9:8b:50:dc:eb"}
+                flannel.alpha.coreos.com/backend-type=vxlan
+                flannel.alpha.coreos.com/kube-subnet-manager=true
+                flannel.alpha.coreos.com/public-ip=172.20.207.11
+                node.alpha.kubernetes.io/ttl=0
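(For reference, a comparison like this can be produced by dumping each node and diffing the output; node names as used in this thread, just a sketch:

kubectl describe node k8s-test-2.novalocal > broken.txt
kubectl describe node k8s-test-4.novalocal > working.txt
diff broken.txt working.txt)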

Flannel annotations are not being added to the broken node. I'm not sure if that's relevant or not.

How does flannel acquire an IP range when using the kube subnet manager?

I'm facing the same issue with Kubernetes 1.6.4 (kubeadm). Applied the files below:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# kubectl -n kube-system logs -c kube-flannel kube-flannel-ds-zj00j
I0526 22:48:02.412713       1 kube.go:111] Waiting 10m0s for node controller to sync
I0526 22:48:02.412795       1 kube.go:315] starting kube subnet manager
I0526 22:48:03.413179       1 kube.go:118] Node controller sync successful
I0526 22:48:03.413240       1 main.go:132] Installing signal handlers
I0526 22:48:03.413374       1 manager.go:136] Determining IP address of default interface
I0526 22:48:03.414338       1 manager.go:149] Using interface with name eth0 and address 10.10.10.10
I0526 22:48:03.414379       1 manager.go:166] Defaulting external address to interface address (10.10.10.10)
E0526 22:48:03.469315       1 network.go:102] failed to register network: failed to acquire lease: node "host.example.com" pod cidr not assigned

@kfox1111 The node needs to have a podCidr. Can you check if it does - kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
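For comparison, on a cluster where allocation is working that command prints one CIDR per node; with the 10.244.0.0/16 cluster CIDR and the default /24 node mask, the output looks something like:

10.244.0.0/24 10.244.1.0/24 10.244.2.0/24

A missing or empty entry means the controller-manager never assigned that node a range.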

@gtirloni Did you see this note in the kubeadm docs?

There are pod network implementations where the master also plays a role in allocating a set of network address space for each node. When using flannel as the pod network (described in step 3), specify --pod-network-cidr=10.244.0.0/16. This is not required for any other networks besides Flannel.

Yes. Not all nodes missed their allocations, only some of them. That really seems like a bug to me.

👍 You'll need to raise that with the kubeadm team.

If some nodes get assigned a pod cidr but others don't, is that a Kubernetes issue or a kubeadm issue? I would think k8s itself, maybe?

Or does flannel ask the API server to allocate a pod cidr?

Try the below; it may be useful.
Edit /etc/kubernetes/manifests/kube-controller-manager.yaml and, under command, add
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
then reload the kubelet.
My situation: I updated Kubernetes 1.7.1 to 1.7.4, and the CIDR parameters in /etc/kubernetes/manifests were lost.
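For illustration, the relevant part of kube-controller-manager.yaml ends up looking roughly like this (a sketch; the other flags already in your manifest stay as they are):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16

Since this is a static pod manifest, the kubelet should pick up the change and restart the controller-manager once the file is saved (restarting the kubelet, as suggested above, also works).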


thanks

kubeadm init --pod-network-cidr=10.244.0.0/16
