I tested kubeadm on CentOS 7, did exactly what the document stated, and everything worked as expected.
Then, when I created a pod, I got the following error:
1m 1m 4 {kubelet server.name} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "appname-a94fb6130669dde66c7ebe8cd697c498463fa721-5rg6i_default" with SetupNetworkError: "Failed to setup network for pod \"appname-a94fb6130669dde66c7ebe8cd697c498463fa721-5rg6i_default(45543fa1-b1fe-11e6-af4d-5254002465aa)\" using network plugins \"cni\": cni config unintialized; Skipping pod"
Creating /etc/cni/net.d on the master and the worker nodes fixed the problem.
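For reference, the workaround described above amounts to creating the directory by hand on every machine. This is only a stopgap; as noted later in the thread, a CNI network add-on is normally what creates and populates it:

```shell
# Stopgap only: create the CNI config directory on the master and each node.
# A network add-on DaemonSet (weave, flannel, ...) normally creates this.
sudo mkdir -p /etc/cni/net.d
```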
I'm experiencing the same on ubuntu xenial.
@adiri what do you mean by "creating /etc/cni/net.d"? Just create an empty directory?
kubeadm doesn't create this; a network DaemonSet should. Are you running weave or flannel in a DaemonSet?
@mikedanese Yes, I applied the weave daemonset (kubectl apply -f https://git.io/weave-kube)
I also tried kubeadm init --pod-network-cidr=10.244.0.0/16 but same problem.
Then this might be a problem with weave-kube cc @errordeveloper @lukemarsden
I'm also experiencing this with a fresh Digital Ocean 4GB 16.04 VM (single node cluster). I followed http://kubernetes.io/docs/getting-started-guides/kubeadm/ and didn't install a network daemonset, as it wasn't mentioned as required.
kube-dns and kube-dashboard are the pods that fail to come up.
I hope that helps.
Edit: I see a network daemonset is required, my mistake.
Second edit: Starting from scratch, after kubeadm init, I installed flannel with kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After this, kube-dns still fails, presumably because it didn't retry. Is it a bug to try to set up kube-dns (which relies on a network DaemonSet) before there is a network DaemonSet?
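In case it helps others hitting this: once the network add-on is running, deleting the failed kube-dns pod makes the controller recreate it, which usually lets it come up. The label selector below is the standard one from the default kube-dns manifest:

```shell
# After the network DaemonSet is up, recreate kube-dns so it retries pod setup
kubectl delete pod --namespace=kube-system -l k8s-app=kube-dns
```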
On Ubuntu 16.04.1 LTS, with a master and 3 nodes on an internal static network (192.168.2.99 to .102), each also having an IP address directly on the Internet, I installed Kubernetes (1.5.1), then initialized with
kubeadm init --api-advertise-addresses=192.168.2.99
to link the cluster on the internal network and not expose it.
I then installed my weave-kube by typing
kubectl apply -f https://git.io/weave-kube
kubectl get pods --namespace=kube-system shows that everything seems to be OK, but once I successfully add the nodes, I see regular crashes of the weave-net-xxxx pods.
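To dig into why the weave-net pods restart, something like the following should help (the pod name weave-net-xxxx is a placeholder from your own `kubectl get pods` output, and the container name `weave` is assumed from the weave-kube manifest):

```shell
# Find the crashing pod and see which node it runs on
kubectl get pods --namespace=kube-system -o wide

# Events often show why the container keeps restarting
kubectl describe pod weave-net-xxxx --namespace=kube-system

# Logs from the previously crashed container instance
kubectl logs weave-net-xxxx --namespace=kube-system -c weave --previous
```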
When I deploy some demo application, I have the same message as above. (Error syncing pod, skipping: failed to "SetupNetwork" ).
When I check the logs of the proxy pod, kubectl logs kube-proxy-g7qh1 --namespace=kube-system
I get the following info: proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
The test for an empty clusterCIDR in pkg/proxy/iptables/proxier.go seems rather recent, so has kubeadm been adapted to ensure that this flag is properly set? I can't find it in the templates (.json), and I don't know where to add the flag so that the proxy actually picks it up...
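For anyone wanting to set it manually in the meantime: kube-proxy takes the pod network range via its --cluster-cidr flag, and kubeadm deploys kube-proxy as a DaemonSet in kube-system, so one workaround is to edit that DaemonSet and add the flag (the CIDR value should match whatever you passed to --pod-network-cidr):

```shell
# Open the kube-proxy DaemonSet for editing, then add to the container's command:
#   - --cluster-cidr=10.244.0.0/16
kubectl edit daemonset kube-proxy --namespace=kube-system
```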
@damaspi I've opened #102 to address your issue, as it's not related to this one.
kubeadm should not create the /etc/cni/net.d directory; that's a task for the CNI DaemonSet plugins.
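For context, the network add-on's DaemonSet is what drops a config file into that directory. As an illustrative example, a flannel deployment of that era typically leaves something like this behind (contents shown for illustration only):

```shell
# Example of what a network add-on writes into the CNI config directory:
cat /etc/cni/net.d/10-flannel.conf
# {
#   "name": "cbr0",
#   "type": "flannel",
#   "delegate": { "isDefaultGateway": true }
# }
```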
And the other issue that's faced here is handled in #102
I am having similar problems but _only_ when I use an internal IP address for kubeadm init --api-advertise (I have two interfaces, one internal one external). Is this related to #102?
@funnydevnull I can't say, please open a new issue with more details and we'll see