Kubespray: tiller pod cannot run

Created on 18 May 2018 · 3 comments · Source: kubernetes-sigs/kubespray

The tiller pod cannot run: it stays in Pending status.

pods

# kubectl get pods --all-namespaces -owide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   calico-node-cpr27                       1/1       Running   8          1d        192.168.88.12   k8s-m2.me
kube-system   calico-node-w42rq                       1/1       Running   2          2d        192.168.88.11   k8s-m1.me
kube-system   kube-apiserver-k8s-m1.me                1/1       Running   0          11m       192.168.88.11   k8s-m1.me
kube-system   kube-apiserver-k8s-m2.me                1/1       Running   0          11m       192.168.88.12   k8s-m2.me
kube-system   kube-controller-manager-k8s-m1.me       1/1       Running   0          11m       192.168.88.11   k8s-m1.me
kube-system   kube-controller-manager-k8s-m2.me       1/1       Running   0          11m       192.168.88.12   k8s-m2.me
kube-system   kube-dns-7bd4d5fbb6-ljbx5               3/3       Running   0          9m        10.233.78.175   k8s-m1.me
kube-system   kube-dns-7bd4d5fbb6-pbgd6               3/3       Running   0          10m       10.233.78.174   k8s-m1.me
kube-system   kube-proxy-k8s-m1.me                    1/1       Running   2          2d        192.168.88.11   k8s-m1.me
kube-system   kube-proxy-k8s-m2.me                    1/1       Running   2          7h        192.168.88.12   k8s-m2.me
kube-system   kube-scheduler-k8s-m1.me                1/1       Running   0          11m       192.168.88.11   k8s-m1.me
kube-system   kube-scheduler-k8s-m2.me                1/1       Running   0          11m       192.168.88.12   k8s-m2.me
kube-system   kubedns-autoscaler-679b8b455-g9zss      1/1       Running   0          1h        10.233.78.165   k8s-m1.me
kube-system   kubernetes-dashboard-55fdfd74b4-knzvf   1/1       Running   0          1h        10.233.78.167   k8s-m1.me
kube-system   tiller-deploy-75b7d95f5c-trchl          0/1       Pending   0          2d        <none>          <none>

describe of the tiller-deploy-75b7d95f5c-trchl pod (note the Warning events)

# kubectl describe pods tiller-deploy-75b7d95f5c-trchl --namespace kube-system
Name:           tiller-deploy-75b7d95f5c-trchl
Namespace:      kube-system
Node:           <none>
Labels:         app=helm
                name=tiller
                pod-template-hash=3163851917
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/tiller-deploy-75b7d95f5c
Containers:
  tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.8.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-zlj6x (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tiller-token-zlj6x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tiller-token-zlj6x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  59m (x5 over 59m)    default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
  Warning  FailedScheduling  38m                  default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable.
  Warning  FailedScheduling  28m (x100 over 59m)  default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
  Warning  FailedScheduling  14m (x36 over 24m)   default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
  Warning  FailedScheduling  1m (x36 over 12m)    default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
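
For reference, the taint and cordon state behind the "had taints that the pod didn't tolerate" / "were unschedulable" messages can be listed for every node with plain kubectl (nothing Kubespray-specific is assumed here):

# kubectl describe nodes | grep -E '^(Name|Taints|Unschedulable):'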

tiller-deploy service

# kubectl describe services tiller-deploy --namespace kube-system
Name:              tiller-deploy
Namespace:         kube-system
Labels:            app=helm
                   name=tiller
Annotations:       <none>
Selector:          app=helm,name=tiller
Type:              ClusterIP
IP:                10.233.24.102
Port:              tiller  44134/TCP
TargetPort:        tiller/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
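
The empty Endpoints line is a consequence of the Pending pod rather than a separate problem: the service selects app=helm,name=tiller and no matching pod is Running yet. Once tiller is scheduled, it should appear under:

# kubectl get endpoints tiller-deploy --namespace kube-system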

nodes

# kubectl get nodes -owide
NAME        STATUS                     ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
k8s-m1.me   Ready                      master    2d        v1.10.2   <none>        Ubuntu 17.10   4.13.0-21-generic   docker://17.12.1-ce
k8s-m2.me   Ready,SchedulingDisabled   master    1d        v1.10.2   <none>        Ubuntu 17.10   4.13.0-21-generic   docker://17.12.1-ce

describe of node k8s-m2.me

# kubectl describe node k8s-m2.me
Name:               k8s-m2.me
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-m2.me
                    node-role.kubernetes.io/master=true
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Wed, 16 May 2018 21:15:49 +0300
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      true
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Fri, 18 May 2018 15:55:33 +0300   Fri, 18 May 2018 15:14:39 +0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Fri, 18 May 2018 15:55:33 +0300   Fri, 18 May 2018 15:14:39 +0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 18 May 2018 15:55:33 +0300   Fri, 18 May 2018 15:14:39 +0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 18 May 2018 15:55:33 +0300   Wed, 16 May 2018 21:15:49 +0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 18 May 2018 15:55:33 +0300   Fri, 18 May 2018 15:14:39 +0300   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.88.12
  Hostname:    k8s-m2.me
Capacity:
 cpu:                8
 ephemeral-storage:  1904958832Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             16376696Ki
 pods:               110
Allocatable:
 cpu:                7800m
 ephemeral-storage:  1755610056665
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             15774296Ki
 pods:               110
System Info:
 Machine ID:                 2b1ea6a957434b129f4172af72c92ac8
 System UUID:                00000000-0000-0000-0000-4CCC6A60FCC8
 Boot ID:                    321ffbd2-540c-4827-9830-3abe46d1e844
 Kernel Version:             4.13.0-21-generic
 OS Image:                   Ubuntu 17.10
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.10.2
 Kube-Proxy Version:         v1.10.2
ExternalID:                  k8s-m2.me
Non-terminated Pods:         (5 in total)
  Namespace                  Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                 ------------  ----------  ---------------  -------------
  kube-system                calico-node-cpr27                    150m (1%)     300m (3%)   64M (0%)         500M (3%)
  kube-system                kube-apiserver-k8s-m2.me             100m (1%)     800m (10%)  256M (1%)        2G (12%)
  kube-system                kube-controller-manager-k8s-m2.me    100m (1%)     250m (3%)   100M (0%)        512M (3%)
  kube-system                kube-proxy-k8s-m2.me                 150m (1%)     500m (6%)   64M (0%)         2G (12%)
  kube-system                kube-scheduler-k8s-m2.me             80m (1%)      250m (3%)   170M (1%)        512M (3%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits   Memory Requests  Memory Limits
  ------------  ----------   ---------------  -------------
  580m (7%)     2100m (26%)  654M (4%)        5524M (34%)
Events:
  Type    Reason                   Age                From                   Message
  ----    ------                   ----               ----                   -------
  Normal  Starting                 41m                kubelet, k8s-m2.me     Starting kubelet.
  Normal  NodeHasSufficientDisk    41m (x6 over 41m)  kubelet, k8s-m2.me     Node k8s-m2.me status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  41m (x6 over 41m)  kubelet, k8s-m2.me     Node k8s-m2.me status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    41m (x6 over 41m)  kubelet, k8s-m2.me     Node k8s-m2.me status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     41m (x6 over 41m)  kubelet, k8s-m2.me     Node k8s-m2.me status is now: NodeHasSufficientPID
  Normal  Starting                 41m                kube-proxy, k8s-m2.me  Starting kube-proxy.
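
The describe above shows the two conditions blocking scheduling on this node: the node-role.kubernetes.io/master:NoSchedule taint and Unschedulable: true (the node is cordoned). Assuming this master is meant to run workloads, the cordon part can be cleared with a single command; removing the master taint is covered in the comments below:

# kubectl uncordon k8s-m2.me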

hosts

k8s-m1.me ansible_host=192.168.88.11 ansible_become=yes
k8s-m2.me ansible_host=192.168.88.12 ansible_become=yes
k8s-m3.me ansible_host=192.168.88.10 ansible_become=yes #bootstrap_os=linux_mint

[kube-master]
k8s-m1.me
k8s-m2.me
#k8s-m3.me

[etcd]
k8s-m1.me
#k8s-m2.me
#k8s-m3.me

[kube-node]
# k8s-s2.me

[kube-ingress]
# node2
# node3

[k8s-cluster:children]
kube-master
kube-node
kube-ingress

Most helpful comment

Allow the masters to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-

All 3 comments

I fixed it by adding all my nodes to the kube-node group (in my_inventory/inventory/hosts) and running:
$ ansible-playbook -i my_inventory/inventory remove-node.yml -b -vvv
and then again:
$ ansible-playbook -i my_inventory/inventory cluster.yml -b -vvv
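
For illustration, a minimal sketch of the inventory change described above, assuming both masters should also act as workers (host names taken from the inventory posted earlier in this issue):

[kube-node]
k8s-m1.me
k8s-m2.me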

Allow the masters to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
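
The trailing "-" removes the taint. If only one master should accept workloads, the same command can target a single node instead of --all, for example:

kubectl taint nodes k8s-m1.me node-role.kubernetes.io/master-

Note that k8s-m2.me also shows SchedulingDisabled above, so it additionally needs kubectl uncordon k8s-m2.me before pods can land there.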

@IvanBiv
Can you share the files remove-node.yml and cluster.yml?
Thank you
