Eksctl: Cluster creation fails if VPC Endpoint Route already exists in the RouteTable that is associated with a Subnet

Created on 6 Aug 2020 · 6 comments · Source: weaveworks/eksctl

Notice you're getting a completely different error with 0.24 than the one in your first comment.
First error on master:

AWS::EC2::VPCEndpoint/VPCEndpointS3: CREATE_FAILED – "route table rtb-09ec7cd1e8effc6cf already has a route with destination-prefix-list-id pl-68a54001 (Service: AmazonEC2; Status Code: 400; Error Code: RouteAlreadyExists; Request ID: 9815caa0-c444-46fc-9116-624b749e477a)"

I could replicate the above error, too. I validated it on master.

This error occurs when the S3 VPC Endpoint Route already exists in the RouteTable that is associated with the Subnet.
My fix was not for this error.

_Originally posted by @hiraken-w in https://github.com/weaveworks/eksctl/issues/2473#issuecomment-669691737_
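A quick way to confirm the conflict is to look up the route table associated with the subnet eksctl will use and check for an existing route to the S3 prefix list. This is a sketch using standard AWS CLI calls; the subnet and prefix-list IDs are taken from this thread and should be swapped for your own:

# Find the routes in the route table associated with the subnet
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0d7b1446 \
  --query 'RouteTables[].Routes[]'

# If the output contains a route whose DestinationPrefixListId is the
# S3 prefix list for your region (pl-68a54001 in us-west-2), eksctl's
# VPCEndpointS3 resource will fail with RouteAlreadyExists.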

kind/bug priority/important-longterm

All 6 comments

Hi

For me it still fails with the error about the S3 endpoint route table creation.
Our VPC already has all the necessary VPC endpoints, created with a separate template; we are not expecting eksctl to create them.

Here's my cluster YAML.
I tried eksctl versions 0.26, 0.27, and 0.28; same error.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ml-dev-eks-cluster
  region: us-west-2
  version: '1.17'

privateCluster:
  enabled: true

vpc:
  id: "vpc-38c13640"
  cidr: "172.20.0.0/18"
  subnets:
    private:
      us-west-2a:
        id: "subnet-0d7b1446"
        cidr: "172.20.2.0/24"
      us-west-2b:
        id: "subnet-b4cf8dcd"
        cidr: "172.20.18.0/24"

[ℹ] eksctl version 0.28.0-rc.0
[ℹ] using region us-west-2
[✔] using existing VPC (vpc-38c13640) and subnets (private:[subnet-0d7b1446 subnet-b4cf8dcd] public:[])
[!] custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
[ℹ] nodegroup "m5a-xlarge-ng-1" will use "ami-064270ad2910f51a2" [AmazonLinux2/1.17]
[ℹ] using EC2 key pair "dev-keypair"
[ℹ] using Kubernetes version 1.17
[ℹ] creating EKS cluster "ml-dev-eks-cluster" in "us-west-2" region with un-managed nodes
[ℹ] 1 nodegroup (m5a-xlarge-ng-1) was included (based on the include/exclude rules)
[ℹ] will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=ml-dev-eks-cluster'
[ℹ] Kubernetes API endpoint access will use provided values {publicAccess=true, privateAccess=true} for cluster "ml-dev-eks-cluster" in "us-west-2"
[ℹ] 2 sequential tasks: { create cluster control plane "ml-dev-eks-cluster", 2 sequential sub-tasks: { 3 sequential sub-tasks: { tag cluster, update CloudWatch logging configuration, update cluster VPC endpoint access configuration }, create nodegroup "m5a-xlarge-ng-1" } }
[ℹ] building cluster stack "eksctl-ml-dev-eks-cluster-cluster"
[ℹ] deploying stack "eksctl-ml-dev-eks-cluster-cluster"
[✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-ml-dev-eks-cluster-cluster"
[ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
[✖] AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: CREATE_FAILED – "Resource creation cancelled"
[✖] AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: CREATE_FAILED – "Resource creation cancelled"
[✖] AWS::EC2::VPCEndpoint/VPCEndpointS3: CREATE_FAILED – "route table rtb-1e944065 already has a route with destination-prefix-list-id pl-68a54001 (Service: AmazonEC2; Status Code: 400; Error Code: RouteAlreadyExists; Request ID: b46e5452-d6ee-44a3-bc84-0a0ed756d589; Proxy: null)"
[!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=ml-dev-eks-cluster'
[✖] waiting for CloudFormation stack "eksctl-ml-dev-eks-cluster-cluster": ResourceNotReady: failed waiting for successful resource state

Thanks
Lucky

I think you are creating a private cluster. To get started, remove the private setting and get comfortable first; add it back later. (You can still put your resources in private subnets.)

privateCluster:
  enabled: true

For private clusters, the CloudFormation stack adds a lot of VPC endpoints; we have to manually add our private subnets to those VPC endpoints.
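For example, an existing Gateway endpoint can be attached to additional route tables, and an Interface endpoint to additional subnets, with modify-vpc-endpoint. The endpoint IDs below are placeholders; the route table and subnet IDs are the ones from this thread:

# Gateway endpoint (e.g. S3): attach another route table
aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-0123456789abcdef0 \
  --add-route-table-ids rtb-09ec7cd1e8effc6cf

# Interface endpoint (e.g. ecr.api): attach another private subnet
aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-0123456789abcdef1 \
  --add-subnet-ids subnet-0d7b1446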

Hi,

I can reproduce this error when I try to create a private cluster on a shared VPC where the route tables already exist:
AWS::EC2::VPCEndpoint/VPCEndpointS3: CREATE_FAILED – "route table rtb-xxxxxx already has a route with destination-prefix-list-id pl-23ad484a (Service: AmazonEC2; Status Code: 400; Error Code: RouteAlreadyExists)"

@StealthyDev Yes, I am creating a private cluster because we would like to use that feature of eksctl. Creating the cluster as public first and then changing it to private is an alternative, and it still works as a workaround for now.
@antoinediev Ours is not a shared VPC, but the subnet I am launching the cluster into does have a route table attached, and that route table already has an S3 endpoint gateway route.
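For reference, the workaround sequence is roughly the following (a sketch, assuming privateCluster has been removed from cluster.yaml first; cluster name and region are the ones from my config above):

# 1. Create the cluster without the privateCluster section in cluster.yaml
eksctl create cluster -f cluster.yaml

# 2. Afterwards, flip the API endpoint to private-only
#    (depending on your eksctl version, --approve may be needed to apply)
eksctl utils update-cluster-endpoints --name=ml-dev-eks-cluster \
  --private-access=true --public-access=false --region=us-west-2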

Can reproduce with eksctl 0.35.0 by trying to spin up an additional private EKS cluster (privateCluster: enabled: true).
Essentially this means we cannot reuse the same cluster definition file indefinitely.

Same issue. @lkr2des, how are you using the workaround? I'm able to create the public cluster, and I then try to update it to a private one afterwards by updating cluster.yaml, but I still hit the same issue.

This worked: eksctl utils update-cluster-endpoints --name=test-cluster --private-access=true --public-access=false --region, but it's not really what I want. I suppose it will cause a sort of drift with the CloudFormation stack.
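If you're worried about drift between the live cluster and the eksctl stack, CloudFormation drift detection can at least surface it. A sketch, assuming eksctl's usual eksctl-<cluster>-cluster stack naming:

# Kick off drift detection on the cluster stack, then inspect the results
aws cloudformation detect-stack-drift --stack-name eksctl-test-cluster-cluster
aws cloudformation describe-stack-resource-drifts --stack-name eksctl-test-cluster-cluster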
