Azure-docs: Istio pilot pods in pending state

Created on 23 May 2019 · 11 comments · Source: MicrosoftDocs/azure-docs

I have deployed a single-node AKS cluster and installed Istio on it. After the install, when I look at the running pods, the pilot and policy pods are in "Pending" status.

NAME READY STATUS RESTARTS AGE
istio-citadel-7f699dc8c8-fpjw4 1/1 Running 0 103m
istio-galley-649bc8cd97-n88kt 1/1 Running 0 103m
istio-ingressgateway-65dfbd566-8hnrj 0/1 Running 0 103m
istio-init-crd-10-t6pg5 0/1 Completed 0 104m
istio-init-crd-11-fldx8 0/1 Completed 0 104m
istio-pilot-958dd8cc4-z5mnp 0/2 Pending 0 103m
istio-policy-86b4b7cf9-th4xn 0/2 Pending 0 103m
istio-sidecar-injector-d48786c5c-m84jr 1/1 Running 0 103m
istio-telemetry-7f6996fdcc-8wtrs 2/2 Running 2 103m
prometheus-67599bf55b-k9kfr 1/1 Running 0 103m




All 11 comments

@swagkulkarni please provide us with the link to the doc you are following so we can better assist.

Here is the doc that I am following:
https://docs.microsoft.com/en-us/azure/aks/istio-install

Thanks for that! @jakaruna-MSFT will take a look shortly and provide any updates.

@swagkulkarni Can you describe one of the pending pods and post the events here?
What's the overall capacity (CPU and memory) of your cluster?
Also, let me know how many nodes you have.

@swagkulkarni is there any update?

Hi @jakaruna-MSFT - here is the output of the describe command for the pod in Pending state

swagat@Azure:~$ kubectl describe pods istio-pilot-958dd8cc4-z5mnp -n istio-system
Name: istio-pilot-958dd8cc4-z5mnp
Namespace: istio-system
Priority: 0
PriorityClassName:
Node:
Labels: app=pilot
chart=pilot
heritage=Tiller
istio=pilot
pod-template-hash=958dd8cc4
release=istio
Annotations: sidecar.istio.io/inject: false
Status: Pending
IP:
Controlled By: ReplicaSet/istio-pilot-958dd8cc4
Containers:
discovery:
Image: docker.io/istio/pilot:1.1.3
Ports: 8080/TCP, 15010/TCP
Host Ports: 0/TCP, 0/TCP
Args:
discovery
--monitoringAddr=:15014
--domain
cluster.local
--keepaliveMaxServerConnectionAge
30m
Requests:
cpu: 500m
memory: 2Gi
Readiness: http-get http://:8080/ready delay=5s timeout=5s period=30s #success=1 #failure=3
Environment:
POD_NAME: istio-pilot-958dd8cc4-z5mnp (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
GODEBUG: gctrace=1
PILOT_PUSH_THROTTLE: 100
PILOT_TRACE_SAMPLING: 1
PILOT_DISABLE_XDS_MARSHALING_TO_ANY: 1
KUBERNETES_PORT_443_TCP_ADDR: myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io
KUBERNETES_PORT: tcp://myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io:443
KUBERNETES_SERVICE_HOST: myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io
Mounts:
/etc/certs from istio-certs (ro)
/etc/istio/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-b87pp (ro)
istio-proxy:
Image: docker.io/istio/proxyv2:1.1.3
Ports: 15003/TCP, 15005/TCP, 15007/TCP, 15011/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
proxy
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
istio-pilot
--templateFile
/etc/istio/proxy/envoy_pilot.yaml.tmpl
--controlPlaneAuthPolicy
MUTUAL_TLS
Limits:
cpu: 2
memory: 128Mi
Requests:
cpu: 100m
memory: 128Mi
Environment:
POD_NAME: istio-pilot-958dd8cc4-z5mnp (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
KUBERNETES_PORT_443_TCP_ADDR: myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io
KUBERNETES_PORT: tcp://myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io:443
KUBERNETES_SERVICE_HOST: myaksclust-myresourcegroup-4e6886-75e7fef6.hcp.eastus.azmk8s.io
Mounts:
/etc/certs from istio-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-b87pp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio
Optional: false
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.istio-pilot-service-account
Optional: true
istio-pilot-service-account-token-b87pp:
Type: Secret (a volume populated by a Secret)
SecretName: istio-pilot-service-account-token-b87pp
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x6373 over 3d10h) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
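
The same information can be surfaced directly from the command line. A sketch (the pod name is the one from the describe output above; substitute your own):

```shell
# List any pods in the istio-system namespace still waiting to be scheduled
kubectl get pods -n istio-system --field-selector=status.phase=Pending

# Show the per-container CPU/memory requests of the pending pilot pod
kubectl get pod istio-pilot-958dd8cc4-z5mnp -n istio-system \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}'
```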

@jakaruna-MSFT - I have a single-node cluster with the following stats

Total cores - 2
Total memory - 7 GB

Note that this is the default behaviour when I followed the instructions on the Microsoft page to set up the AKS cluster.

@swagkulkarni You don't have enough capacity in the cluster.
Please notice the warning message:
0/1 nodes are available: 1 Insufficient cpu. This error says that you have a one-node cluster, that node is already running at full capacity, and it cannot provide CPU for the additional Istio pods.

You can scale your cluster out to 3 nodes. This doc will help you scale AKS.

You can also use kubectl describe node <nodename> to find the total and used capacity of that node.
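
For example, assuming the default names from the AKS quickstart (myResourceGroup, myAKSCluster — substitute your own), scaling out and checking capacity might look like:

```shell
# Scale the default node pool to 3 nodes
# (resource group and cluster name are illustrative)
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3

# Inspect what is already requested on a node vs. its allocatable capacity
NODE_NAME=aks-nodepool1-12345678-0   # substitute your node's name
kubectl describe node "$NODE_NAME" | grep -A 8 "Allocated resources"
```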

I had the same problem. istio-pilot requests 2 GB of RAM, and often, if you use small nodes (I used Standard_B2ms as the default), the scheduler cannot find enough free RAM on any node. Add a pool with a larger VM size and/or scale up the current pool so you have a node with at least 2 GB of free RAM, and pilot should be able to run.
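
In CLI terms, that suggestion might look like the following (the pool name and VM size are illustrative, not from this thread):

```shell
# Add a second pool of larger nodes so pilot's 2Gi request can be satisfied
# (resource group, cluster, pool name and VM size are illustrative)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name largepool \
  --node-vm-size Standard_DS3_v2 \
  --node-count 1
```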

@swagkulkarni
I will close this out for now. If you need additional help please let me know and we can reopen and continue.

try this:

kubectl taint nodes --all node-role.kubernetes.io/master-

This allows deployments on the master.

Istio pilot worked after executing this command.
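
Note that on AKS the control plane is managed and worker nodes normally carry no master taint, so this workaround mainly applies to self-managed single-node clusters. It may be worth checking whether any taint is actually blocking scheduling first; a sketch:

```shell
# Check whether any node taints could be blocking scheduling
kubectl describe nodes | grep -i taints

# Remove the master taint from all nodes that have it
# (the trailing "-" means "remove this taint")
kubectl taint nodes --all node-role.kubernetes.io/master-
```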

