Cilium: Deploying cilium demo on minikube

Created on 19 Nov 2020 · 3 Comments · Source: cilium/cilium

I followed the instructions on the official website for enabling Cilium on a minikube cluster. After running kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.9.0/install/kubernetes/quick-install.yaml, I found that one pod, cilium-operator-5d8498fc44-5rccz, remains in Pending status.

Then I used kubectl -n kube-system describe pod cilium-operator-5d8498fc44-5rccz to check the events, which say:

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  18s (x12 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.

It seems to be caused by some pod anti-affinity config in quick-install.yaml; are there any clues about it?

Thanks.

Labels: kind/community-report, kind/question

All 3 comments


@hazelnutsgz that's expected behavior: the operator is a Deployment set up with 2 replicas, and pod anti-affinity allows only one of them to run per host. Since minikube is a single-node cluster, only 1 pod of the operator Deployment can run.
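For context, the "didn't match pod anti-affinity rules" event comes from a required podAntiAffinity rule on the operator Deployment. A sketch of what such a stanza typically looks like (illustrative only, not copied verbatim from quick-install.yaml; the label selector shown is an assumption):

```yaml
# Illustrative sketch of a required pod anti-affinity rule that
# prevents two operator pods from being scheduled onto the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          io.cilium/app: operator
      topologyKey: kubernetes.io/hostname
```

With this rule and replicas: 2, a one-node cluster can only ever satisfy the scheduling constraint for one of the two pods; the other stays Pending.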

I wonder if we should set the default to 1 and then describe in a separate guide how users can enable HA in their cluster (including configuring the number of replicas). That way the default quick-install.yaml and development clusters like minikube won't hit this unexpected behaviour. Any thoughts on that @aanm / @fristonio?
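Concretely, the change being discussed is just the replica count on the operator Deployment; defaulting to 1 would be a one-field difference in the manifest (fragment below is a sketch of the relevant Deployment field, not a patch from the repo):

```yaml
# Sketch: the operator Deployment's replica count. The thread states the
# shipped default is 2; on a single-node cluster like minikube, 1 avoids
# a permanently Pending pod.
spec:
  replicas: 1
```

A user hitting this today can get the same effect without editing the manifest by scaling the Deployment down, e.g. kubectl -n kube-system scale deployment cilium-operator --replicas=1.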

@joestringer I thought of something similar, but slightly different: we would keep the default at 2 but ship the quick install with 1.
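For users installing via the Helm chart rather than quick-install.yaml, the replica count is exposed as a chart value (a sketch assuming the operator.replicas value present in recent Cilium chart versions; check the values reference for your chart version):

```yaml
# values.yaml sketch for a single-node / minikube install
operator:
  replicas: 1
```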

