BUG REPORT
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm64"}
Environment:
Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm64"}
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial
Kernel (e.g. uname -a): Linux pine64-master 3.10.104-2-pine64-longsleep #113 SMP PREEMPT Thu Dec 15 21:46:07 CET 2016 aarch64 aarch64 aarch64 GNU/Linux
I am trying to install a Kubernetes cluster at version 1.8.2 using kubeadm on a Pine64 cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
But after installing the flannel v0.7.1 network add-on with:
curl -sSL https://rawgit.com/coreos/flannel/v0.7.1/Documentation/kube-flannel-rbac.yml | kubectl create -f -
curl -sSL https://rawgit.com/coreos/flannel/v0.7.1/Documentation/kube-flannel.yml | sed "s/amd64/arm64/g" | kubectl create -f -
the kube-dns pod stays in "ContainerCreating" status when it should be "Running".
ubuntu@pine64-master:/etc$ kubectl get all --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system ds/kube-flannel-ds 1 1 1 1 1 beta.kubernetes.io/arch=arm64 11m
kube-system ds/kube-proxy 1 1 1 1 1 <none> 16m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 0 16m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-596cf7c484 1 1 0 16m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 0 16m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system ds/kube-flannel-ds 1 1 1 1 1 beta.kubernetes.io/arch=arm64 11m
kube-system ds/kube-proxy 1 1 1 1 1 <none> 16m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-596cf7c484 1 1 0 16m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system po/etcd-pine64-master 1/1 Running 0 10m
kube-system po/kube-apiserver-pine64-master 1/1 Running 0 10m
kube-system po/kube-controller-manager-pine64-master 1/1 Running 0 10m
kube-system po/kube-dns-596cf7c484-mlqt2 0/3 ContainerCreating 0 16m
kube-system po/kube-flannel-ds-jqvpm 2/2 Running 0 11m
kube-system po/kube-proxy-24rw7 1/1 Running 0 16m
kube-system po/kube-scheduler-pine64-master 1/1 Running 0 10m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 16m
ubuntu@pine64-master:/etc$ kubectl get node
NAME STATUS ROLES AGE VERSION
pine64-master Ready master 32m v1.8.2
journalctl -xeu kubelet
Nov 06 09:31:46 pine64-master kubelet[6267]: E1106 09:31:46.600317 6267 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 4; ignoring extra CPUs
Nov 06 09:31:46 pine64-master kubelet[6267]: E1106 09:31:46.972808 6267 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 4; ignoring extra CPUs
Nov 06 09:31:47 pine64-master kubelet[6267]: W1106 09:31:47.030082 6267 pod_container_deletor.go:77] Container "ab4544c41e2b92176a05603d5bfd704c915acaec3c92762450cd793541e2bb1c" not found in pod's containers
Nov 06 09:31:47 pine64-master kubelet[6267]: W1106 09:31:47.342174 6267 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "ab4544c41e2b92176a05603d5bfd704c915acaec3c92762450cd793541e2bb1c"
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.294558 6267 cni.go:301] Error adding network: open /run/flannel/subnet.env: no such file or directory
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.294641 6267 cni.go:250] Error while adding to cni network: open /run/flannel/subnet.env: no such file or directory
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.557486 6267 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-596cf7c484-mlqt2_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.557669 6267 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-596cf7c484-mlqt2_kube-system(f30d5fd4-c2d1-11e7-a109-928cf36e6a0e)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-596cf7c484-mlqt2_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.557753 6267 kuberuntime_manager.go:632] createPodSandbox for pod "kube-dns-596cf7c484-mlqt2_kube-system(f30d5fd4-c2d1-11e7-a109-928cf36e6a0e)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-596cf7c484-mlqt2_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.558032 6267 pod_workers.go:182] Error syncing pod f30d5fd4-c2d1-11e7-a109-928cf36e6a0e ("kube-dns-596cf7c484-mlqt2_kube-system(f30d5fd4-c2d1-11e7-a109-928cf36e6a0e)"), skipping: failed to "CreatePodSandbox" for "kube-dns-596cf7c484-mlqt2_kube-system(f30d5fd4-c2d1-11e7-a109-928cf36e6a0e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-596cf7c484-mlqt2_kube-system(f30d5fd4-c2d1-11e7-a109-928cf36e6a0e)\" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod \"kube-dns-596cf7c484-mlqt2_kube-system\" network: open /run/flannel/subnet.env: no such file or directory"
Nov 06 09:31:48 pine64-master kubelet[6267]: E1106 09:31:48.902686 6267 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 4; ignoring extra CPUs
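The recurring error above is that /run/flannel/subnet.env does not exist, which normally means the flannel daemon on the node never wrote its subnet lease. A quick way to confirm (these diagnostic commands are my addition, not from the original report; they assume the stock app=flannel label and kube-flannel container name from the upstream manifest):
# Does the subnet lease file exist on the node?
cat /run/flannel/subnet.env
# What does the flannel container itself report?
kubectl -n kube-system logs -l app=flannel -c kube-flannel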
ubuntu@pine64-master:/etc$ kubectl -n kube-system describe pod kube-dns-596cf7c484-mlqt2
Name: kube-dns-596cf7c484-mlqt2
Namespace: kube-system
Node: pine64-master/10.9.0.114
Start Time: Mon, 06 Nov 2017 09:13:53 +0000
Labels: k8s-app=kube-dns
pod-template-hash=1527937040
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-596cf7c484" ,"uid":"f300eafc-c2d1-11e7-a109-928cf36e...
Status: Pending
IP:
Created By: ReplicaSet/kube-dns-596cf7c484
Controlled By: ReplicaSet/kube-dns-596cf7c484
Containers:
kubedns:
Container ID:
Image: gcr.io/google_containers/k8s-dns-kube-dns-arm64:1.14.5
Image ID:
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-dir=/kube-dns-config
--v=2
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Environment:
PROMETHEUS_PORT: 10055
Mounts:
/kube-dns-config from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-j6w6b (ro)
dnsmasq:
Container ID:
Image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm64:1.14.5
Image ID:
Ports: 53/UDP, 53/TCP
Args:
-v=2
-logtostderr
-configDir=/etc/k8s/dns/dnsmasq-nanny
-restartDnsmasq=true
--
-k
--cache-size=1000
--log-facility=-
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 150m
memory: 20Mi
Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-j6w6b (ro)
sidecar:
Container ID:
Image: gcr.io/google_containers/k8s-dns-sidecar-arm64:1.14.5
Image ID:
Port: 10054/TCP
Args:
--v=2
--logtostderr
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
memory: 20Mi
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-j6w6b (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
kube-dns-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-dns
Optional: true
kube-dns-token-j6w6b:
Type: Secret (a volume populated by a Secret)
SecretName: kube-dns-token-j6w6b
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 21m (x22 over 26m) default-scheduler No nodes are available that match all of the predicates: NodeNotReady (1).
Normal Scheduled 20m default-scheduler Successfully assigned kube-dns-596cf7c484-mlqt2 to pine64-master
Normal SuccessfulMountVolume 20m kubelet, pine64-master MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal SuccessfulMountVolume 20m kubelet, pine64-master MountVolume.SetUp succeeded for volume "kube-dns-token-j6w6b"
Warning FailedSync 19m (x9 over 20m) kubelet, pine64-master Error syncing pod
Normal SandboxChanged 5m (x335 over 20m) kubelet, pine64-master Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 15s (x447 over 20m) kubelet, pine64-master Failed create pod sandbox.
I'm running into the exact same issue: Ubuntu 16.04, Kubernetes 1.8.4.
I also hit the same issue: CentOS 7.4, Kubernetes 1.8.4, kubeadm 1.8.4.
journalctl -xeu kubelet
Dec 01 23:46:47 vmnode1 kubelet[4284]: W1201 23:46:47.051360 4284 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kube
Dec 01 23:46:47 vmnode1 kubelet[4284]: W1201 23:46:47.053507 4284 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "d6a
Dec 01 23:46:47 vmnode1 kubelet[4284]: E1201 23:46:47.054018 4284 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 01 23:46:47 vmnode1 kubelet[4284]: E1201 23:46:47.055911 4284 remote_runtime.go:115] StopPodSandbox "d6abc3654d407770193a57b7e9131b231315f6587b4a3b71dd9fe8b75cc1e51f" from runtime serv
Dec 01 23:46:47 vmnode1 kubelet[4284]: E1201 23:46:47.056040 4284 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "d6abc3654d407770193a57b7e9131b231315f6587b4a3b71dd9fe8b75cc1
Dec 01 23:46:47 vmnode1 kubelet[4284]: E1201 23:46:47.056128 4284 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "e4669e69-d6a5-11e7-b7df-0050563c
Dec 01 23:46:47 vmnode1 kubelet[4284]: E1201 23:46:47.056239 4284 pod_workers.go:182] Error syncing pod e4669e69-d6a5-11e7-b7df-0050563c50ac ("kube-dns-545bc4bfd4-l24jf_kube-system(e4669e6
Dec 01 23:46:54 vmnode1 kubelet[4284]: W1201 23:46:54.848066 4284 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
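The "failed to find plugin \"portmap\"" error above means the reference CNI plugins are missing from the kubelet's plugin search path. A sketch of a possible fix, assuming the upstream containernetworking plugins bundle; the v0.6.0 release and amd64 architecture here are assumptions to match to your cluster:
# Create the plugin directory and unpack the reference CNI plugins
# (the bundle includes portmap alongside the other plugins) into it.
sudo mkdir -p /opt/cni/bin
curl -sSL https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz | sudo tar -xz -C /opt/cni/bin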
Actually, I am pretty sure this is a flannel issue. I started up a cluster with weave and it did not encounter the sandbox issue.
I replaced flannel with calico and it finally ran successfully.
Only flannel and weave-net are available for arm64, so I can't use calico (see the official documentation: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
I've tried weave-net, but the pod failed with "CrashLoopBackOff" status. It seems that only flannel v0.7.1 works, but with it I hit the issue I described above.
I finally found the problem: it seems that vxlan isn't supported on the Pine64 kernel...
When installing the flannel network add-on, I replace "vxlan" with "udp" like this:
curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml | sed "s/amd64/arm64/g" | sed "s/vxlan/udp/g" | kubectl create -f -
And now I have Kubernetes running on the Pine64!
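A quick way to confirm whether a kernel supports the vxlan backend before choosing one (my addition, not part of the original fix; the config locations vary by distribution and kernel build):
# If the module loads, vxlan should be usable as a flannel backend.
sudo modprobe vxlan && lsmod | grep vxlan
# On kernels that expose their build config, check for it there instead.
zgrep VXLAN /proc/config.gz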
I'm closing this issue.
I tried kube-router and weave-net; both gave me the same issue.
After switching to flannel, the pods are finally working!
Thanks!
I am getting the same problem with flannel, so I removed flannel and installed weave. This worked for the master node, but the other nodes still exhibited this error and wouldn't start. So I removed Kubernetes, reinstalled everything, and installed only the weave pod. This caused the CoreDNS pods to fail with the same error. Why does CoreDNS need flannel? Should I just use flannel and deal with all its issues, because CoreDNS uses it?
I had a similar problem: my Raspberry Pi nodes were stuck in status NotReady, and CoreDNS couldn't reach the Ready state (it stayed in ContainerCreating).
I was using the flannel YAML from the master branch, but its configuration was for the amd64 architecture.
I solved my problem with sed, replacing all instances of amd64 with arm and switching vxlan to host-gw:
curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | sed "s/vxlan/host-gw/" > kube-flannel.yaml
After that I applied the configuration with kubectl apply, and it works! (The apply step is sketched below.)
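For completeness, the apply step would be the following (command reconstructed from the description above, using the file name the curl output was redirected to):
kubectl apply -f kube-flannel.yaml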
Hope that helps someone!
The flannel manifest from master is not recommended for kubeadm because it might include a breaking change.
A multi-arch flannel manifest pinned to a SHA that is known to work is already recommended in the latest docs here:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
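For reference, the docs-recommended install has this shape, where <SHA> is a placeholder for the pinned commit listed on the page above (deliberately left unfilled here):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/<SHA>/Documentation/kube-flannel.yml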