Eksctl: No eksctl-managed CloudFormation stacks found error with AWS QuickStart template

Created on 12 Mar 2020 · 24 comments · Source: weaveworks/eksctl

Used the AWS QuickStart CloudFormation template to create an EKS cluster. Getting a "no eksctl-managed CloudFormation stacks found" error when creating the ALB Ingress. Other users have reported the same here -> https://github.com/weaveworks/eksctl/issues/877

The underlying issue is that the AWS QuickStart uses a cluster name that differs from the CloudFormation stack name. It would be good to know of a workaround if possible, as this is a blocker.

Here’s the command with log trace ->
$ eksctl -v 4 create iamserviceaccount --cluster=EKS-MpOSL3HQwQes --name=alb-ingress-controller --namespace=kube-system --attach-policy-arn=arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy
2020-03-12T00:56:19-04:00 [ℹ] eksctl version 0.15.0-rc.2
2020-03-12T00:56:19-04:00 [ℹ] using region us-east-1
2020-03-12T00:56:19-04:00 [▶] role ARN for the current session is "arn:aws:iam::6d7737431711:user/User.[email protected]"
2020-03-12T00:56:19-04:00 [▶] cluster = {
Arn: "arn:aws:eks:us-east-1:6d7737431711:cluster/EKS-MpOSL3HQwQes",
CertificateAuthority: {
Data: "LS0tLS1CRUdJTFxxxxxxxxxxxx"
},
CreatedAt: 2020-03-11 03:39:12 +0000 UTC,
Endpoint: "https://5C2379BF39A50281F6E0E31C94AFA5C2.gr7.us-east-1.eks.amazonaws.com",
Identity: {
Oidc: {
Issuer: "https://oidc.eks.us-east-1.amazonaws.com/id/5C2379BF39A50281F6E0E31C94AFA5C2"
}
},
Logging: {
ClusterLogging: [{
Enabled: false,
Types: [
"api",
"audit",
"authenticator",
"controllerManager",
"scheduler"
]
}]
},
Name: "EKS-MpOSL3HQwQes",
PlatformVersion: "eks.9",
ResourcesVpcConfig: {
ClusterSecurityGroupId: "sg-xxxx",
EndpointPrivateAccess: false,
EndpointPublicAccess: true,
PublicAccessCidrs: ["0.0.0.0/0"],
SecurityGroupIds: ["sg-xxxx"],
SubnetIds: [
"subnet-xxxx",
"subnet-xxxx",
"subnet-xxxx",
"subnet-xxxx",
"subnet-xxxx",
"subnet-xxxx"
],
VpcId: "vpc-xxxx"
},
RoleArn: "arn:aws:iam::6d7737431711:role/my-org-eks-test-7-EKSStack-UONAI-ControlPlaneRole-M02EQ8WCINDD",
Status: "ACTIVE",
Tags: {

},
Version: "1.14"
}
Error: no eksctl-managed CloudFormation stacks found for "EKS-MpOSL3HQwQes"

I did a quick search and the error is defined here:
pkg/cfn/manager/api.go: return fmt.Errorf("no eksctl-managed CloudFormation stacks found for %q", c.spec.Metadata.Name)

It's because of the convention used to derive the cluster stack name, i.e.
pkg/cfn/manager/cluster.go: return "eksctl-" + c.spec.Metadata.Name + "-cluster"

It would be good if the full stack name can be passed as an argument instead.
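For anyone checking their own setup, the naming convention above can be reproduced in a couple of lines of shell. The cluster name here is the one from the log trace; the commented-out aws call is only a suggestion for verifying whether a stack by that name actually exists.

```shell
# Reproduce the stack name eksctl expects, per pkg/cfn/manager/cluster.go:
# "eksctl-" + cluster name + "-cluster"
CLUSTER_NAME="EKS-MpOSL3HQwQes"
STACK_NAME="eksctl-${CLUSTER_NAME}-cluster"
printf '%s\n' "$STACK_NAME"

# Optional check (requires AWS credentials); this fails for
# QuickStart-created clusters because no such stack exists:
# aws cloudformation describe-stacks --stack-name "$STACK_NAME"
```

For a QuickStart-created cluster there is no stack with this name, which is exactly why the error above is raised.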

Labels: wontfix, kind/bug

Most helpful comment

A lot of people use Terraform as a standard, and you are saying to all the Terraform users that they are out of luck. Do you realize the number of customers you are saying you do not care about, and that your tool is useless to them? Terraform is the leading IaC tool. What good is a tool if more than half of your potential user base cannot use it because of a design decision? I think just closing this and ignoring a large customer base is short-sighted and foolish.

All 24 comments

We have released a temporary fix here -> https://github.com/coderiseio/eksctl/releases/tag/0.15.0-1930

Notes:
The pre-built binaries are in dist.zip
Set the CLUSTER_STACK_NAME environment variable before running the command
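As a rough sketch of how that patched build would be invoked (the stack name below is a placeholder; CLUSTER_STACK_NAME is the variable the release notes above mention, not a flag in upstream eksctl):

```shell
# Placeholder; substitute the real CloudFormation stack name of your cluster.
export CLUSTER_STACK_NAME="my-quickstart-eks-stack"
printf '%s\n' "$CLUSTER_STACK_NAME"

# Then run the patched binary from dist.zip, e.g.:
# ./eksctl create iamserviceaccount --cluster=EKS-MpOSL3HQwQes \
#   --name=alb-ingress-controller --namespace=kube-system \
#   --attach-policy-arn=arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy
```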

I have tried to execute the fixed version but I get the error you can see below

env |grep CLUSTER_STACK_NAME
CLUSTER_STACK_NAME=XXX-XXX

./eksctl create nodegroup --config-file=../../../nodegroupSpot1.yaml
[ℹ] eksctl version 0.16.0-dev+82cc8e7e.2020-03-16T12:57:04Z
[ℹ] using region eu-west-1
[ℹ] will use version 1.15 for new nodegroup(s) based on control plane version
cluster stack from ENV in ListStacks XXXXXX
Stacks: [{
CreationTime: 2020-03-11 15:15:46.775 +0000 UTC,
Description: "Amazon EKS XXXXX Template",
DisableRollback: false,
DriftInformation: {
StackDriftStatus: "NOT_CHECKED"
},
EnableTerminationProtection: false,
Outputs: [{
Description: "Security group for the cluster control plane communication with worker nodes",
OutputKey: "SecurityGroups",
OutputValue: "sg-0069908026f93c21f"
},{
Description: "The VPC Id",
OutputKey: "VpcId",
OutputValue: "vpc-03a36bae4e4e82082"
},{
Description: "Subnets IDs in the VPC",
OutputKey: "SubnetIds",
OutputValue: "subnet-0bfca4c8f0f311672,subnet-0d928f85b47caf26b,subnet-0f127dfc0356686e4,subnet-08b02e969f44ddea4"
}],
Parameters: [{
ParameterKey: "VpcBlock",
ParameterValue: "10.8.0.0/16"
}],
RollbackConfiguration: {

},
StackId: "arn:aws:cloudformation:eu-west-1:869279755764:stack/XXXXXX/XXXXXXXX-63ab-11ea-baf6-02c18823f600",
StackName: "XXXXX",
StackStatus: "CREATE_COMPLETE"
}]
Error: getting VPC configuration for cluster "XXXXX": no eksctl-managed CloudFormation stacks found for "XXXXXX"

Hi @coderiseio, thank you for reporting this. At the moment, eksctl does not support clusters that were not created by eksctl, so this operation is unfortunately not supported.

@joseacampos was the cluster you are using created with eksctl or with another tool?

I'm facing the same issue @martina-if, my cluster was made by Terraform. Is there any workaround?

@martina-if I think it's bad practice to hardcode a naming prefix/suffix in code. Users should be allowed to pass the full name as an argument.

@krismorte Unfortunately there is no workaround for this at the moment.

@dev-coderise I think you are right. This was a design choice made a long time ago. The main reason is that eksctl relies on certain assumptions in order to work, and because of these assumptions it can only manage clusters created with eksctl. It's a known limitation, and there are no plans in the near future to change this, as it would potentially require a big rewrite of the core functionality.

A lot of people use Terraform as a standard, and you are saying to all the Terraform users that they are out of luck. Do you realize the number of customers you are saying you do not care about, and that your tool is useless to them? Terraform is the leading IaC tool. What good is a tool if more than half of your potential user base cannot use it because of a design decision? I think just closing this and ignoring a large customer base is short-sighted and foolish.

@dev-coderise, it's not just the prefix/suffix used to name resources that's limiting eksctl to supporting eksctl-created clusters only. Some commands in eksctl do not call the AWS APIs for the resource in question directly, but rather rely on updating CloudFormation stacks, so the commands require the cluster resources to be created in a certain way (CloudFormation with a certain structure).
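One way to see which stacks eksctl would actually recognise is to filter on its ownership tag. This is a sketch based on the alpha.eksctl.io/cluster-name tag mentioned later in this thread; the aws call assumes credentials and a region are configured, which is why it is left commented out here.

```shell
# JMESPath query selecting only stacks carrying the eksctl ownership tag.
QUERY="Stacks[?Tags[?Key=='alpha.eksctl.io/cluster-name']].StackName"
printf '%s\n' "$QUERY"

# aws cloudformation describe-stacks --query "$QUERY" --output text
```

A cluster created by Terraform, the console, or a plain CloudFormation template produces no stacks matching this query, so eksctl concludes there is nothing it manages.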

@modevops I think it was not @martina-if 's intention to ignore this issue. We do realise a lot of users are on Terraform, and have had requests to support eksctl commands for clusters created using other means (like Terraform). There's a separate issue to track this: https://github.com/weaveworks/eksctl/issues/2174. To help understand your use case better, what types of operations would you like eksctl to support for clusters created outside of eksctl?

Today I ran into this. I'm using Terraform as well, but the instructions for deploying an ALB Ingress Controller assume you've used eksctl. This command gives me an error:

eksctl create iamserviceaccount \
       --cluster=$CLUSTER_NAME \
       --namespace=kube-system \
       --name=alb-ingress-controller \
       --attach-policy-arn=$POLICY_ARN \
       --override-existing-serviceaccounts \
       --approve

If eksctl is the official EKS CLI then I think it should work for things like this. Otherwise I think AWS should stop using it in their docs, since those docs then don't work for anyone who isn't using eksctl.

Just hit this today. Our company is one month into the journey of adopting EKS. Terraform is our IaC platform of choice, and I have to say this is very discouraging. Nowhere in the docs I've read so far does it say that I had to create my cluster with eksctl; otherwise I'd have taken that into consideration. Most of the EKS documentation on AWS promotes the use of eksctl, but the limitation is never highlighted. If this is a design choice that forces users to redefine their technology stack and infrastructure management, I would have expected it to be communicated upfront.

Same here. Created an EKS cluster via Console.

Ran into exactly the same issue: used Terraform to create the cluster, then tried to use eksctl to deploy the ALB ingress controller, which seems to be the "official" way:
https://aws.amazon.com/blogs/opensource/kubernetes-ingress-aws-alb-ingress-controller/
https://www.eksworkshop.com/beginner/130_exposing-service/ingress_controller_alb/

If this is incompatible, the docs SHOULD clearly state so at the beginning, or even before users create any EKS clusters.

So nothing will be done about this? Is the eksctl CLI not friendly with Terraform / infrastructure as code?

I didn't even use Terraform; I used AWS's own CloudFormation to create my EKS cluster, and I'm having the same issue. Why even let users create a cluster with CloudFormation if you don't plan to support it?

Same issue here, using Spotinst's CloudFormation.
I understand that a rewrite would be difficult to implement, but I think the documentation should warn about this limitation beforehand.

fully agree with many of the above points; AWS documentation takes you down a path that is just incompatible with most use cases... I somehow doubt most of the EKS clusters in the world are created with eksctl.

Well, even following the official installation guide using eksctl doesn't seem to work at the moment. I am currently doing so to try to sort out which resources I need to create in Terraform to mimic the outcome of using eksctl, but I'm running into issues just following the main documentation guide.

Of course, it should be noted that the new aws load balancer controller was just announced 3 days ago, so there are probably many things in flux right now.

Ran into the same issue as above comments.

To all who landed here from googling:

If you have created the EKS cluster through the AWS console, DO NOT FOLLOW this official documentation starting from step 4.

Instead, follow this guide to create an IAM policy and Kubernetes service account.

yeah, to all of this... the rbac-role.yaml is what set up the service account for alb-ingress-controller as early as last week, but even step 6 in @hcho1989's workaround for that guide links to a moved page

here it is

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
      - pods/status
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
...

I am trying to adapt this to use the new instructions for aws-load-balancer-controller.

Without the rbac-role applied you get:

kubectl annotate serviceaccount -n kube-system aws-load-balancer-controller eks.amazonaws.com/role-arn=arn:aws:iam::000000000000:role/aws-load-balancer-controller-EKSCluster-c2YVz4RffA5S
Error from server (NotFound): serviceaccounts "aws-load-balancer-controller" not found

I was wondering if there was a Helm chart (package) for this simple rbac-role after seeing the serviceAccount.create= option for eks/aws-load-balancer-controller

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
        --set clusterName=$(call get-resource-by-name,$(EKS_CLUSTER_STACKNAME),EKSCluster) \
        --set serviceAccount.create=false \
        --set serviceAccount.name=aws-load-balancer-controller \
        --set autoDiscoverAwsRegion=true \
        --set autoDiscoverAwsVpcID=true \
        -n kube-system

bit messy

these helm commands appear to work

    kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

    helm repo add eks https://aws.github.io/eks-charts

        helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
        --set clusterName=<cluster_name> \
        --set serviceAccount.create=true \
        --set serviceAccount.name=aws-load-balancer-controller \
        --set autoDiscoverAwsRegion=true \
        --set autoDiscoverAwsVpcID=true \
        -n kube-system



kubectl get deployment -n kube-system aws-load-balancer-controller -o yaml

although I could not figure out how to use the --set serviceAccount.annotations flag...

        --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::000000000000:role/<role_name>"

https://stackoverflow.com/questions/59632924/how-to-set-annotations-for-a-helm-install
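For what it's worth, Helm's --set syntax requires literal dots inside a key to be escaped with backslashes, and the backslashes themselves need protecting from the shell (single quotes do that). A sketch of how the argument could be built; the role ARN is a placeholder:

```shell
# Placeholder ARN; substitute the IAM role created for the controller.
ROLE_ARN="arn:aws:iam::000000000000:role/my-lb-controller-role"

# Single quotes keep the backslashes literal so Helm sees
# "eks.amazonaws.com/role-arn" as one annotation key, not nested keys.
SET_ARG='serviceAccount.annotations.eks\.amazonaws\.com/role-arn='"$ROLE_ARN"
printf '%s\n' "$SET_ARG"

# helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
#   --set-string "$SET_ARG" \
#   ... (other flags as in the command above)
```

Using --set-string rather than --set avoids Helm trying to coerce the ARN's numbers into other types.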

Found this (Refer to To create your service account with the AWS Management Console section ) https://github.com/awsdocs/amazon-eks-user-guide/blob/master/doc_source/create-service-account-iam-policy-and-role.md

you can get the eksctl-equivalent steps: eksctl create iamserviceaccount --cluster= --namespace=kube-system --name=aws-load-balancer-controller --attach-policy-arn=****** --override-existing-serviceaccounts --approve

Thanks a lot @smakintel, this really did help! Pushed a note at the following link with full steps in case it helps someone else: https://blog.sallah-kokaina.com/aws-alb-load-balancer-controller-no-eksctl-managed-cloudformation-stacks-found-error-ckhj5lwrt00w37ys145db8fbq

I've been creating clusters through the aws cloudformation create-stack command; after that, eksctl delete cluster wouldn't work either.
Looking at the eksctl source code, it turns out that eksctl looks for a particular tag (alpha.eksctl.io/cluster-name) on the stack, and if it's missing, eksctl fails.
I created a dummy cluster with eksctl and checked the tags of the resulting CloudFormation stack.
Adding these tags during cluster stack creation with aws cloudformation create-stack fixed the problem: eksctl delete cluster eventually DID find the cluster and worked.
For reference, here is the command I used to create the cluster with the proper tags:

aws cloudformation create-stack \
    --stack-name $STACK_NAME \
    --tags \
        Key=alpha.eksctl.io/cluster-name,Value=$CLUSTER_NAME \
        Key=eksctl.cluster.k8s.io/v1alpha1/cluster-name,Value=$CLUSTER_NAME \
        Key=alpha.eksctl.io/eksctl-version,Value=0.35.0 \

If the stack was created without the tags, one can fix it similarly by running aws cloudformation update-stack.
I'm not sure, though, whether this fixes eksctl commands other than eksctl delete cluster.
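Retro-tagging an existing stack along those lines might look roughly like this. The stack and cluster names are placeholders, and update-stack generally also needs the original template and parameters (here --use-previous-template), so treat this as a sketch, not a recipe:

```shell
# Placeholders; substitute your own names.
CLUSTER_NAME="my-cluster"
STACK_NAME="my-cluster-stack"

# The tag eksctl searches for when locating its stacks.
TAG="Key=alpha.eksctl.io/cluster-name,Value=${CLUSTER_NAME}"
printf '%s\n' "$TAG"

# aws cloudformation update-stack \
#   --stack-name "$STACK_NAME" \
#   --use-previous-template \
#   --tags "$TAG" \
#          "Key=eksctl.cluster.k8s.io/v1alpha1/cluster-name,Value=${CLUSTER_NAME}" \
#          "Key=alpha.eksctl.io/eksctl-version,Value=0.35.0"
```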
