eksctl: Bug: creating a nodegroup with kubeletExtraConfig

Created on 6 May 2020 · 3 Comments · Source: weaveworks/eksctl

What happened?
I'm unable to create a new nodegroup with my custom config when enabling feature gates. However, everything works correctly if I remove kubeletExtraConfig from my config file.

What you expected to happen?
I expected the nodegroup to be created with the specified feature gates enabled.

How to reproduce it?
Here is my cluster config:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: some-name
  region: us-east-2

nodeGroups:
  - name: ng-5
    instanceType: t3.large
    desiredCapacity: 2
    iam:
      instanceProfileARN: someInstanceProfile
      instanceRoleARN: instanceInstanceRole
    securityGroups:
      attachIDs: [some security group]
    targetGroupARNs:
      - aws-target-group
    kubeletExtraConfig:
      featureGates:
        StartupProbe: true
        RotateKubeletServerCertificate: true
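
For reference, eksctl renders kubeletExtraConfig into the kubelet configuration file it writes during node bootstrap (as I understand it), so the nodegroup above should end up with a kubelet config fragment roughly like this (the surrounding file layout is my assumption; only the featureGates entries come from the config above):

# Sketch of the KubeletConfiguration fragment the nodegroup above should
# produce on each node. The file layout is an assumption; the featureGates
# keys are taken from the cluster config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  StartupProbe: true
  RotateKubeletServerCertificate: true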

Anything else we need to know?
Nothing special.

Versions
Please paste in the output of these commands:

$ eksctl version
0.17.0

$ kubectl version --output=yaml

clientVersion:
  buildDate: "2020-04-08T17:38:50Z"
  compiler: gc
  gitCommit: 7879fc12a63337efff607952a323df90cdc7a335
  gitTreeState: clean
  gitVersion: v1.18.1
  goVersion: go1.13.9
  major: "1"
  minor: "18"
  platform: linux/amd64
serverVersion:
  buildDate: "2020-03-27T21:51:36Z"
  compiler: gc
  gitCommit: af3caf6136cd355f467083651cc1010a499f59b1
  gitTreeState: clean
  gitVersion: v1.15.11-eks-af3caf
  goVersion: go1.12.17
  major: "1"
  minor: 15+
  platform: linux/amd64

Logs
Here are the logs of eksctl create nodegroup -f cluster.yaml when I try to create the new nodegroup (ng-5) with kubeletExtraConfig:

[ℹ️]  eksctl version 0.17.0
[ℹ️]  using region us-east-2
[ℹ️]  will use version 1.15 for new nodegroup(s) based on control plane version
[ℹ️]  2 nodegroup(s) that already exist (ng-1,ng-4) will be excluded
[ℹ️]  nodegroup "ng-5" will use "-" [AmazonLinux2/1.15]
[ℹ️]  1 nodegroup (ng-5) was included (based on the include/exclude rules)
[ℹ️]  combined exclude rules: ng-1,ng-4
[ℹ️]  1 nodegroup (ng-4) was excluded (based on the include/exclude rules)
[ℹ️]  will create a CloudFormation stack for each of 1 nodegroups in cluster "-"
[ℹ️]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "ng-5" } } }
[ℹ️]  checking cluster stack for missing resources
[ℹ️]  cluster stack is missing resources for Fargate
[ℹ️]  adding missing resources to cluster stack
[ℹ️]  re-building cluster stack "eksctl-cluster"
[✔]  all resources in cluster stack "eksctl-cluster" are up-to-date
[ℹ️]  building nodegroup stack "eksctl-nodegroup-ng-5"
[ℹ️]  --nodes-min=2 was set automatically for nodegroup ng-5
[ℹ️]  --nodes-max=2 was set automatically for nodegroup ng-5
[ℹ️]  deploying stack "eksctl-nodegroup-ng-5"
[ℹ️]  adding identity "eks-node-group identity" to auth ConfigMap
[ℹ️]  nodegroup "ng-5" has 0 node(s)
[ℹ️]  waiting for at least 2 node(s) to become ready in "ng-5"

More verbosity just shows AWS credentials and other DEBUG statements, but no errors appear, and eksctl simply hangs waiting for at least 2 nodes to become ready. It hangs like that for 25 minutes and then times out.
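
One way to see what is actually failing (assuming SSH access to one of the stuck ng-5 instances; these steps are not from the original report) is to inspect the kubelet unit directly. On Kubernetes 1.15 an unrecognized feature gate such as StartupProbe should make the kubelet refuse to start, which would explain why the nodes never register:

# Hypothetical debugging steps on one of the ng-5 instances.
$ journalctl -u kubelet --no-pager | tail -n 50
# If the feature gate is the problem, expect an error along the lines of
# "unrecognized feature gate: StartupProbe" and a unit stuck in a restart loop:
$ systemctl status kubelet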

Here are the logs when I create a new nodegroup (ng-1) without kubeletExtraConfig, with all other settings the same:

[ℹ️]  eksctl version 0.17.0
[ℹ️]  using region us-east-2
[ℹ️]  will use version 1.15 for new nodegroup(s) based on control plane version
[ℹ️]  2 nodegroup(s) that already exist (ng-4,ng-5) will be excluded
[ℹ️]  nodegroup "ng-1" will use "-" [AmazonLinux2/1.15]
[ℹ️]  1 nodegroup (ng-1) was included (based on the include/exclude rules)
[ℹ️]  combined exclude rules: ng-4,ng-5
[ℹ️]  1 nodegroup (ng-4) was excluded (based on the include/exclude rules)
[ℹ️]  will create a CloudFormation stack for each of 1 nodegroups in cluster "-"
[ℹ️]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "ng-1" } } }
[ℹ️]  checking cluster stack for missing resources
[ℹ️]  cluster stack is missing resources for Fargate
[ℹ️]  adding missing resources to cluster stack
[ℹ️]  re-building cluster stack "eksctl-cluster"
[✔]  all resources in cluster stack "eksctl-cluster" are up-to-date
[ℹ️]  building nodegroup stack "eksctl-nodegroup-ng-1"
[ℹ️]  --nodes-min=2 was set automatically for nodegroup ng-1
[ℹ️]  --nodes-max=2 was set automatically for nodegroup ng-1
[ℹ️]  deploying stack "eksctl-nodegroup-ng-1"
[ℹ️]  adding identity "eks-nodegroup-identity" to auth ConfigMap
[ℹ️]  nodegroup "ng-1" has 2 node(s)
[ℹ️]  node "ip-...-.compute.internal" is not ready
[ℹ️]  node "ip-...-.compute.internal" is not ready
[ℹ️]  waiting for at least 2 node(s) to become ready in "ng-1"
[ℹ️]  nodegroup "ng-1" has 2 node(s)
[ℹ️]  node "ip-...-.compute.internal" is ready
[ℹ️]  node "ip-...-.compute.internal" is ready
[✔]  created 1 nodegroup(s) in cluster "-"
[✔]  created 0 managed nodegroup(s) in cluster "-"
[ℹ️]  checking security group configuration for all nodegroups
[ℹ️]  all nodegroups have up-to-date configuration
Labels: awaiting more information, kind/bug, priority/important-longterm

All 3 comments

+1, any updates here?

Hi @Raduan77 (cc @Jihadik), thank you for reporting this.

I see you are using Kubernetes 1.15. I think the StartupProbe feature gate is not supported until 1.16. Can you try again with 1.16? (Please note you will need eksctl 0.19.0.)
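
A minimal sketch of the config change (assuming the control plane is upgraded to 1.16 first; new nodegroups follow the control-plane version):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: some-name
  region: us-east-2
  version: "1.16"   # new nodegroups pick up the control-plane version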

Hey, @martina-if, thank you very much. The problem was fixed after updating to Kubernetes 1.16 :)
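
For anyone landing here later, one way to confirm the gates actually took effect (a sketch; the label selector is an assumption based on the labels eksctl normally applies to nodes) is to read the kubelet's live configuration through the API server:

# Pick one node from the new nodegroup.
$ NODE=$(kubectl get nodes -l alpha.eksctl.io/nodegroup-name=ng-5 \
    -o jsonpath='{.items[0].metadata.name}')
# Dump the kubelet's running config and check that the gates are set.
$ kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" \
    | grep -o '"featureGates":{[^}]*}'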
