Eksctl: MapPublicIpOnLaunch preventing managedNodeGroup on private subnet from launching

Created on 26 Jun 2020 · 5 comments · Source: weaveworks/eksctl

What happened?

#!/usr/bin/env eksctl create nodegroup -f
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
vpc:
  id: "vpc1"
  clusterEndpoints:
    publicAccess:  true
    privateAccess: false
  subnets:
    private:
      us-east-1a:
          id: "private-1"
      us-east-1b:
          id: "private-2"
      us-east-1c:
          id: "private-3"
    public:
      us-east-1a:
          id: "dmz-1"
      us-east-1b:
          id: "dmz-2"
      us-east-1c:
          id: "dmz-3"

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

managedNodeGroups:
  - name: mng-01
    instanceType: r5.2xlarge
    privateNetworking: true
    minSize: 2
    desiredCapacity: 2
    maxSize: 3
    availabilityZones:
      - us-east-1c
    volumeSize: 100
    ssh:
      allow: false
    labels: {role: worker}
    iam:
      withAddonPolicies:
        externalDNS: true
        ebs: true
        albIngress: true

returns

[✖]  found mis-configured subnets ["dmz-3"]. Expected public subnets with property "MapPublicIpOnLaunch" enabled. Without it new nodes won't get an IP assigned

What you expected to happen?
I expected no error message, and for eksctl to launch the nodegroup using the ClusterConfig.

Anything else we need to know?
This cluster config uses a pre-existing VPC/subnet setup, and I am able to use the above config to create a new cluster (with nodegroups) without enabling MapPublicIpOnLaunch on my public subnets.

I am using a shared VPC setup, and the plan is to have the nodegroups run on private subnets. The only reason I included the public subnets is to support EKS communication via public load balancers.
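For anyone hitting the same error who wants to confirm how their subnets are actually configured, the attribute can be read directly from the EC2 API. Below is a minimal sketch using aws-sdk-go; the region and subnet IDs are placeholders, not values from my setup:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := ec2.New(sess)

    // Placeholder subnet IDs: substitute the public subnets from the ClusterConfig.
    out, err := svc.DescribeSubnets(&ec2.DescribeSubnetsInput{
        SubnetIds: aws.StringSlice([]string{"subnet-aaaaaaaa", "subnet-bbbbbbbb"}),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, s := range out.Subnets {
        fmt.Printf("%s MapPublicIpOnLaunch=%v\n",
            aws.StringValue(s.SubnetId), aws.BoolValue(s.MapPublicIpOnLaunch))
    }
}

Enabling the attribute (via ec2.ModifySubnetAttribute) would make the error go away, but that is exactly what I want to avoid on a shared VPC where the nodegroups are meant to stay on private subnets.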

Versions
Please paste in the output of these commands:
$ eksctl version
0.20.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

kind/bug

All 5 comments

Hello, I am going to mention a few people from some related issues and pull requests. Please let me know if I can help gather any more required information. Thank you for your time!

https://github.com/weaveworks/eksctl/issues/2250 - A related issue; possibly other interested users.
@shaikatz
@michaelbeaumont

https://github.com/weaveworks/eksctl/pull/2002 - Check for MapPublicIpOnLaunch on eksctl create nodegroup
@martina-if
@cPu1

https://github.com/weaveworks/eksctl/pull/1791 - Private Networking on Managed Nodegroups
@sayboras

Hi @mrenteria, thank you for reporting this.

I've tried to reproduce this but I didn't get that error, because when one creates a nodegroup with privateNetworking enabled, this check is skipped.


eksctl create nodegroup -f cluster.yaml --include private-2

$ cat cluster.yaml
...
  - name: private-2
    instanceType: t3.medium
    desiredCapacity: 1
    privateNetworking: true
...
$ eksctl create nodegroup --config-file=cluster.yaml --include=private-2 #gosetup
[ℹ]  eksctl version 0.24.0-dev
[ℹ]  using region eu-north-1
[ℹ]  will use version 1.15 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "ng-1" present in the given config, but missing in the cluster
[ℹ]  nodegroup "ng-2" present in the given config, but missing in the cluster
[ℹ]  nodegroup "private-2" present in the given config, but missing in the cluster
[ℹ]  nodegroup "ng-5" present in the cluster, but missing from the given config
[ℹ]  4 existing nodegroup(s) (ng-3,ng-4,ng-5,private-1) will be excluded
[ℹ]  nodegroup "private-2" will use "ami-0d6711173ab77aaf9" [AmazonLinux2/1.15]
[ℹ]  combined include rules: private-2
[ℹ]  1 nodegroup (private-2) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for each of 1 nodegroups in cluster "martina-shared-vpc"
[ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "private-2" } } }
[ℹ]  checking cluster stack for missing resources
[ℹ]  cluster stack has all required resources
[ℹ]  building nodegroup stack "eksctl-martina-shared-vpc-nodegroup-private-2"
[ℹ]  --nodes-min=1 was set automatically for nodegroup private-2
[ℹ]  --nodes-max=1 was set automatically for nodegroup private-2
[ℹ]  deploying stack "eksctl-martina-shared-vpc-nodegroup-private-2"

^ As you can see in these logs, I don't get that error.

Can you post all the logs that appear when you run the command? Also, do you have more nodegroups defined in that config file?

Hello @martina-if,

Thank you for taking a look. Please correct me if I'm wrong, but that check applies only to unmanaged nodegroups. A few lines below, where managedNodeGroups are validated, this guard is missing:

// Guard in the unmanaged-nodegroup loop: nodegroups with
// privateNetworking enabled skip the MapPublicIpOnLaunch check.
if ng.PrivateNetworking {
    continue
}

https://github.com/weaveworks/eksctl/blob/master/pkg/vpc/vpc.go#L258-L263

It looks like the privateNetworking flag for managedNodeGroups was merged into master after the pull request that added the MapPublicIpOnLaunch check, which would explain why the managed-nodegroup check is missing.
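To make the shape of the fix concrete, here is a self-contained sketch of the validation with the guard in place. This is illustrative only: the nodeGroup struct and the validatePublicSubnets helper are stand-ins, not the actual code in pkg/vpc/vpc.go:

package main

import "fmt"

// Stand-in for the relevant eksctl nodegroup config fields (illustrative).
type nodeGroup struct {
    Name              string
    PrivateNetworking bool
}

// validatePublicSubnets mimics the check: a nodegroup that is not private
// requires public subnets with MapPublicIpOnLaunch enabled, while a private
// nodegroup never receives a public IP and can skip the requirement.
func validatePublicSubnets(groups []nodeGroup, misconfigured []string) error {
    for _, ng := range groups {
        if ng.PrivateNetworking {
            // The missing guard: skip the requirement for private nodegroups.
            continue
        }
        if len(misconfigured) > 0 {
            return fmt.Errorf("found mis-configured subnets %v for nodegroup %q", misconfigured, ng.Name)
        }
    }
    return nil
}

func main() {
    groups := []nodeGroup{{Name: "mng-2", PrivateNetworking: true}}
    // With the guard, a fully private managed nodegroup passes even though
    // the public subnets lack MapPublicIpOnLaunch; this prints <nil>.
    fmt.Println(validatePublicSubnets(groups, []string{"public-1", "public-2"}))
}

Conceptually the fix just applies this same guard in the loop over managedNodeGroups; before the change, only the unmanaged loop had it.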

#2387 - I opened a pull request, built the binary, and tested it. Adding the if statement to continue on the privateNetworking flag fixed the issue for my use case. I marked it WIP because I plan to document my before-and-after configs and to read the contributing guidelines more closely to make sure I followed them correctly.

1.) Create a cluster with this config:

#!/usr/bin/env eksctl create cluster --write-kubeconfig=false -f
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
vpc:
  id: "vpc-1"
  clusterEndpoints:
    publicAccess:  true
    privateAccess: false
  subnets:
    private:
      us-east-1a:
          id: "private-1"
      us-east-1b:
          id: "private-2"
    public:
      us-east-1a:
          id: "public-1"
      us-east-1b:
          id: "public-2"

managedNodeGroups:
  - name: mng-1
    instanceType: t2.micro
    privateNetworking: true
    availabilityZones: ["us-east-1a", "us-east-1b"]
    volumeSize: 100

2.) Rename the managedNodeGroup:

#!/usr/bin/env eksctl create cluster --write-kubeconfig=false -f
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
vpc:
  id: "vpc-1"
  clusterEndpoints:
    publicAccess:  true
    privateAccess: false
  subnets:
    private:
      us-east-1a:
          id: "private-1"
      us-east-1b:
          id: "private-2"
    public:
      us-east-1a:
          id: "public-1"
      us-east-1b:
          id: "public-2"

managedNodeGroups:
  - name: mng-2
    instanceType: t2.micro
    privateNetworking: true
    availabilityZones: ["us-east-1a", "us-east-1b"]
    volumeSize: 100

3.) Try to create the new nodegroup from the same config.

➜  eksctl git:(master) ✗ eksctl create nodegroup -f cluster.yaml 
[ℹ]  eksctl version 0.24.0-dev+146c6cba.2020-06-30T17:22:58Z
[ℹ]  using region us-east-1
[ℹ]  will use version 1.16 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "mng-2" present in the given config, but missing in the cluster
[ℹ]  nodegroup "mng-1" present in the cluster, but missing from the given config
[ℹ]  1 existing nodegroup(s) (mng-1) will be excluded
[✖]  found mis-configured subnets ["public-1" "public-2"]. Expected public subnets with property "MapPublicIpOnLaunch" enabled. Without it new nodes won't get an IP assigned

4.) Apply the change from PR #2387.

5.) Try again to create the new nodegroup from the same config.

➜  eksctl git:(master) ✗ eksctl create nodegroup -f cluster.yaml 
[ℹ]  eksctl version 0.24.0-dev+146c6cba.2020-06-30T19:30:39Z
[ℹ]  using region us-east-1
[ℹ]  will use version 1.16 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "mng-2" present in the given config, but missing in the cluster
[ℹ]  nodegroup "mng-1" present in the cluster, but missing from the given config
[ℹ]  1 existing nodegroup(s) (mng-1) will be excluded
[ℹ]  1 nodegroup (mng-2) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "example-cluster"
[ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "mng-2" } } }
[ℹ]  checking cluster stack for missing resources
[ℹ]  cluster stack has all required resources
[ℹ]  building managed nodegroup stack "eksctl-example-cluster-nodegroup-mng-2"
[ℹ]  deploying stack "eksctl-example-cluster-nodegroup-mng-2"
[ℹ]  no tasks
[✔]  created 0 nodegroup(s) in cluster "example-cluster"
[ℹ]  nodegroup "mng-2" has 2 node(s)
[ℹ]  node "ip-10-0-0-110.ec2.internal" is ready
[ℹ]  node "ip-10-0-0-49.ec2.internal" is ready
[ℹ]  waiting for at least 1 node(s) to become ready in "mng-2"
[ℹ]  nodegroup "mng-2" has 2 node(s)
[ℹ]  node "ip-10-0-0-110.ec2.internal" is ready
[ℹ]  node "ip-10-0-0-49.ec2.internal" is ready
[✔]  created 1 managed nodegroup(s) in cluster "mng-2"
[ℹ]  checking security group configuration for all nodegroups
[ℹ]  all nodegroups have up-to-date configuration