Eksctl: Add Additional security groups to Cluster

Created on 11 Feb 2020 · 11 comments · Source: weaveworks/eksctl

Why do you want this feature?
My cluster endpoint is a private API server endpoint, accessible only from within the VPC, and I want to add additional security groups to the cluster during cluster setup so that I can administer the cluster and EKS through my bastion host. I've enabled private endpoint access but am stuck because I can't assign more security groups to the cluster to allow access from the jump box.

What feature/behavior/change do you want?
Add additional security groups to the cluster control plane.

kind/feature priority/important-longterm

Most helpful comment

Same problem here. I would like to automate the cluster creation with a GitLab runner running inside the same VPC, but when eksctl verifies that the control plane is available, the request times out since the cluster SG does not allow inbound access from the VPC CIDR.

All 11 comments

# An example of ClusterConfig object using an existing VPC:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: SP-NP-EKS
  region: us-east-1
  securityGroups:
      withShared: true
      withLocal: true
      attachIDs: ['sg-05c1f719382ab9279', 'sg-03305b8b8df813ede']
vpc:
  id: vpc-0e08dbb476ac7febc
  cidr: "10.0.0.0/16"
  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown
    private:
      us-east-1c:
        id: "subnet-034b18d67e4348ac7"
        cidr: "10.0.2.0/24"

      us-east-1b:
        id: "subnet-00b7afab42694ebfd"
        cidr: "10.0.1.0/24"
    public:
      us-east-1a:
        id: "subnet-0905b9d19d87a0369"
        cidr: "10.0.0.0/24"

      us-east-1d:
        id: "subnet-0e2b092075fe64174"
        cidr: "10.0.3.0/24"
nodeGroups:
  - name: Frontend-ng
    instanceType: m5.xlarge
    desiredCapacity: 2
    maxSize: 10
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonDocDBReadOnlyAccess
      withAddonPolicies:
        autoScaler: true
    amiFamily: AmazonLinux2
    labels:
      nodegroup-type: Frontend
      instance-type: onDemand
    ssh:
      publicKeyName: EKSTESTING
    privateNetworking: true
    securityGroups:
      withShared: true
      withLocal: true
      attachIDs: ['sg-05c1f719382ab9279', 'sg-03305b8b8df813ede']
    preBootstrapCommands:
         - "#!/bin/bash"
         - "aws s3 cp s3://sp-cf-templates-us-east-1/NPRD/bootstrap/BootStrap.sh /root/BootStrap.sh"
         - "cat -v /root/BootStrap.sh | sed -e 's/\\^M//g' > /root/BootStrap1.sh"
         - "mv -f /root/BootStrap1.sh /root/BootStrap.sh"
         - "chmod +x /root/BootStrap.sh"
         - "sh /root/BootStrap.sh SP-NP-EKS-NG >> /root/log.txt 2>&1"

:+1:

Having the same case; the eksctl docs state:

You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your connected network.

But it looks like there is no way to do that with eksctl or in cluster.yaml.

Yeah, you are correct. I am adding the rules to the cluster's security groups once it is provisioned. I am looking for a solution where I can add the rules during cluster creation by assigning existing security groups to the control plane. It is possible when I provision the cluster using a CloudFormation template, but not with eksctl.

@shashidharrao actually I found a way to do that, but you have to create the SG before creating the cluster.

If you specify that SG ID in your cluster.yaml under the `vpc.securityGroup` option, eksctl will not create its own SG to control access to the control plane, but will use the one you specified.

In the AWS Console you will see that SG listed as "Additional security groups".
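To illustrate the workaround described above, here is a minimal sketch of a cluster.yaml using `vpc.securityGroup`. The cluster name and SG ID are placeholders; the SG must already exist in the target VPC before running `eksctl create cluster -f cluster.yaml`:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster          # placeholder name
  region: us-east-1

vpc:
  id: vpc-0e08dbb476ac7febc       # existing VPC
  # Pre-created SG; eksctl attaches it to the control plane
  # instead of creating its own SG for control-plane access.
  securityGroup: sg-05c1f719382ab9279
```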

+1 for the end result requested here. I would also like the possibility of appending a list of CIDRs from the config yaml to the auto-created control-plane SG, e.g.:

  • 10.0.0.0/8
  • 192.168.1.0/24

and then those CIDRs are simply appended to the created SG. There is no need to manage a separate group then, and everything is done in code.

Same problem here. I would like to automate the cluster creation with a GitLab runner running inside the same VPC, but when eksctl verifies that the control plane is available, the request times out since the cluster SG does not allow inbound access from the VPC CIDR.

vpc.securityGroup worked fine for me, but I still needed to create one inbound rule to access the API endpoint from my VPC after cluster setup.

Hi! We're also facing this problem. After much reading, we thought EKS's private access would allow us to do this.
Their docs state:

Kubernetes API requests within your cluster's VPC (such as worker node to control plane communication) use the private VPC endpoint.

Then we found ourselves with this issue. We worked around it by using the AWS CLI to add an inbound rule with our VPC CIDR to the additional security group. But we think this would be better handled as part of eksctl.
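The AWS CLI workaround mentioned above can be sketched as a single `authorize-security-group-ingress` call. The SG ID and CIDR below are placeholders; substitute the cluster's additional security group and your VPC CIDR:

```
# Allow HTTPS (443) to the private API endpoint from inside the VPC.
# sg-0123456789abcdef0 and 10.0.0.0/16 are example values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.0.0.0/16
```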

This issue should address https://github.com/weaveworks/eksctl/issues/2627 as well

For anyone who wants to automate this:

You can only add an additional security group by passing the YAML config file, not via command-line flags; if you want to automate custom parameters, you can use sed replacement on a template .yaml file.

See here for an example: https://github.com/weaveworks/eksctl/issues/2961
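The sed-templating approach mentioned above can be sketched as follows. The template file name, placeholder tokens, cluster name, and SG ID are all hypothetical examples, not from the linked issue:

```shell
#!/bin/sh
# Create a hypothetical template with placeholder tokens.
cat > cluster.tpl.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: __CLUSTER_NAME__
  region: us-east-1
vpc:
  securityGroup: __SG_ID__
EOF

# Substitute real values into a concrete cluster.yaml.
sed -e 's/__CLUSTER_NAME__/my-cluster/' \
    -e 's/__SG_ID__/sg-0123456789abcdef0/' \
    cluster.tpl.yaml > cluster.yaml

cat cluster.yaml
# The generated file could then be fed to:
#   eksctl create cluster -f cluster.yaml   (requires AWS credentials)
```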
