What happened?
I'm trying to view the nodegroups so I can upgrade them to a newer Kubernetes version. I get this error:
Error: getting nodegroup stack summaries: failed to find a nodegroup tag (alpha.eksctl.io/nodegroup-name)
What you expected to happen?
I expected to see the nodegroups listed, like it used to show them.
How to reproduce it?
Run eksctl get nodegroup --cluster production against an old EKS cluster using the latest eksctl.
Anything else we need to know?
I created the cluster long ago with eksctl and now need to upgrade it before May 11th because AWS is deprecating older versions of Kubernetes on EKS.
It also might be related to this open issue: https://github.com/weaveworks/eksctl/issues/1578
Versions
Please paste in the output of these commands:
$ eksctl version
0.18.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.10-eks-aae39f", GitCommit:"aae39f4697508697bf16c0de4a5687d464f4da81", GitTreeState:"clean", BuildDate:"2019-12-23T08:19:12Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Logs
```sh
eksctl get nodegroup --cluster production
Error: getting nodegroup stack summaries: failed to find a nodegroup tag (alpha.eksctl.io/nodegroup-name)
```
@montanaflynn since it is not May 11 yet, the current eksctl ought to still support 1.12 clusters. What version did you find still worked?
The AWS upgrade procedure says "This procedure assumes ... that your eksctl version is at least 0.19.0", but that version seems incompatible with 1.12 clusters created by older eksctl versions.
@whereisaaron for one cluster I didn't try to find a version that worked; I just deleted the EC2 Auto Scaling group manually and used the AWS console to create managed nodegroups going forward. For the other cluster I did upgrade the control plane to 1.13, so I have until June 30th to upgrade:
> eksctl update cluster --name production --approve
[ℹ] eksctl version 0.18.0
[ℹ] using region us-east-1
[ℹ] will upgrade cluster "production" control plane from current version "1.12" to "1.13"
[✔] cluster "production" control plane has been upgraded to version "1.13"
[ℹ] you will need to follow the upgrade procedure for all of nodegroups and add-ons
[ℹ] re-building cluster stack "eksctl-production-cluster"
[ℹ] updating stack to add new resources [] and outputs [FeatureNATMode]
[ℹ] checking security group configuration for all nodegroups
[ℹ] all nodegroups have up-to-date configuration
and got the same error:
> eksctl get nodegroup --cluster production
Error: getting nodegroup stack summaries: failed to find a nodegroup tag (alpha.eksctl.io/nodegroup-name)
It's odd because the label does exist on the nodes:
> kubectl describe node ip-xxx.xxx.xxx.xxx.ec2.internal
Name: ip-xxx.xxx.xxx.xxx.ec2.internal
Roles: <none>
Labels: alpha.eksctl.io/cluster-name=staging
alpha.eksctl.io/instance-id=i-xxxxxxxx
alpha.eksctl.io/nodegroup-name=ng-xxxxx
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.medium
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1f
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-xxx.xxx.xxx.xxx.ec2.internal
kubernetes.io/os=linux
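For reference, a quick way to check that label across every node at once is kubectl's label-column flag (nothing eksctl-specific here):
```sh
# Show the eksctl nodegroup-name label as an extra column for all nodes
kubectl get nodes -L alpha.eksctl.io/nodegroup-name
```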
@montanaflynn I tested with a nodegroup that definitely has the tag on the instances and the k8s label alpha.eksctl.io/nodegroup-name, but I still get the error from eksctl 0.19.0.
I wonder if it is because there is a nodegroup scaled to zero. It seems strange. Still, you upgraded successfully with 0.18.0 by the look of it?
Hit me up on Slack @whereisaaron if you want to compare notes.
So I can confirm having a nodegroup scaled to zero is one trigger for this error. Before 0.18.0 it gave the error for update cluster and get nodegroups.
$ eksctl update cluster --name foo
[ℹ] eksctl version 0.18.0
[ℹ] using region ap-southeast-2
[ℹ] (plan) would upgrade cluster "foo" control plane from current version "1.12" to "1.13"
[ℹ] re-building cluster stack "eksctl-foo-cluster"
[ℹ] (plan) updating stack to add new resources [] and outputs []
[ℹ] checking security group configuration for all nodegroups
[✖] failed checking nodegroups
%!!(MISSING)(EXTRA string=failed to find a nodegroup tag (alpha.eksctl.io/nodegroup-name))
[!] no changes were applied, run again with '--approve' to apply the changes
After deleting the nodegroups that were scaled to zero, the error disappeared.
$ eksctl update cluster --name foo
[ℹ] eksctl version 0.18.0
[ℹ] using region ap-southeast-2
[ℹ] (plan) would upgrade cluster "foo" control plane from current version "1.12" to "1.13"
[ℹ] re-building cluster stack "eksctl-foo-cluster"
[ℹ] (plan) updating stack to add new resources [] and outputs []
[ℹ] checking security group configuration for all nodegroups
[ℹ] all nodegroups have up-to-date configuration
[!] no changes were applied, run again with '--approve' to apply the changes
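In case it is useful to anyone else, one way to spot Auto Scaling groups scaled to zero before deleting them (this assumes the AWS CLI is configured for the cluster's account and region):
```sh
# List Auto Scaling groups whose desired capacity is currently zero
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[?DesiredCapacity==`0`].AutoScalingGroupName' \
  --output text
```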
@whereisaaron I didn't get an error when I updated the cluster, only when getting the nodegroups.
I do have some nodegroups scaled to zero so I'll see if scaling them to 1 resolves the error.
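I'm assuming something like this is the way to do that (the nodegroup name is a placeholder):
```sh
# Scale a nodegroup up from zero to one node
eksctl scale nodegroup --cluster=production --name=ng-xxxxx --nodes=1
```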
Thanks for your help! What Slack are you on?
I think I found you on the Kubernetes Slack?

After deleting the nodegroup that was scaled down to zero, both the upgrade and get nodegroup now work without error. The upgrade took about 25 minutes.
$ eksctl update cluster --name foo --approve
[ℹ] eksctl version 0.18.0
[ℹ] using region ap-southeast-2
[ℹ] will upgrade cluster "cave" control plane from current version "1.12" to "1.13"
[✔] cluster "foo" control plane has been upgraded to version "1.13"
[ℹ] you will need to follow the upgrade procedure for all of nodegroups and add-ons
[ℹ] re-building cluster stack "eksctl-foo-cluster"
[ℹ] updating stack to add new resources [] and outputs []
[ℹ] nothing to update
[ℹ] checking security group configuration for all nodegroups
[ℹ] all nodegroups have up-to-date configuration
Weirdly I didn't have an issue with updating the cluster:
https://github.com/weaveworks/eksctl/issues/2148#issuecomment-625832350
How did you go about deleting the nodegroup? I would like to delete it using eksctl instead of from the AWS console, where I might miss something. The reason I wanted to get the nodegroups was so I could delete them after creating new ones and moving everything over to them.
And yeah that's me on the kubernetes slack.
Possibly since you ran with --approve it didn't try that nodegroup check?
I deleted the nodegroup using eksctl 0.7.0 since that is what I used to create it. It may not matter if it is just draining the nodes and deleting the CF stack.
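For the record, it was just the usual delete command (nodegroup name is a placeholder); eksctl drains the nodes and then deletes the nodegroup's CloudFormation stack:
```sh
# Drain the nodes, then delete the nodegroup's CloudFormation stack
eksctl delete nodegroup --cluster=production ng-xxxxx
```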
Thanks a lot @whereisaaron!
I also realized that when we added the code for managed nodegroups we forgot to check for legacy tags in pkg/cfn/manager/nodegroup.go:GetNodeGroupType(), so let's keep this issue open and fix that.
@montanaflynn out of curiosity did you also check if the tag existed in the nodegroup stack itself in CF? This is what eksctl is expecting.
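One way to check, assuming the default eksctl stack naming (the stack name below is a placeholder):
```sh
# Print the tags on the nodegroup's CloudFormation stack
aws cloudformation describe-stacks \
  --stack-name eksctl-production-nodegroup-ng-xxxxx \
  --query 'Stacks[0].Tags'
```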
@martina-if the tag does exist on the CloudFormation nodegroup stack as well.
I built a 1.14 EKS cluster with eksctl version 0.18.0.
I am now using eksctl version 0.20.0.
This command is failing:
➜ ~ eksctl get nodegroups --cluster=staging-kube
Error: getting nodegroup stack summaries: getting nodegroup stacks: no eksctl-managed CloudFormation stacks found for "staging-kube"
The cluster was created with eksctl:

Do I need to downgrade my eksctl?
Update: Downgrading to 0.18.0 did not work. Maybe I created my cluster with another version. Or is the eksctl version even the problem here?
Hi @mellonbrook, you shouldn't need to. I think the problem is the lack of region. Please try specifying it like this:
eksctl get nodegroups --cluster=staging-kube --region=<region>
I have a 1.11 EKS cluster. When I tried to get nodegroups using eksctl 0.17.0 with the region specified, I hit a similar issue:
getting nodegroup stack summaries: failed to find a nodegroup tag (alpha.eksctl.io/nodegroup-name)
I also tried downgrading to older eksctl versions, but that didn't work either. Any thoughts?
Hi @sijie, the fix for that problem was released in eksctl 0.21.0. Can you try that or a higher version?
Also, I am not sure you will be able to manage that cluster with eksctl, since 1.11 and 1.12 were already deprecated :-/
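If it helps, a specific release can be pulled straight from the GitHub releases page; something along these lines should work for Linux/amd64 (please double-check the exact tag and asset name on the release page, this is from memory):
```sh
# Download eksctl 0.21.0 and put it on the PATH (tag and asset name assumed, verify on the releases page)
curl --silent --location \
  "https://github.com/weaveworks/eksctl/releases/download/0.21.0/eksctl_Linux_amd64.tar.gz" \
  | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
```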
eksctl version
0.26.0
eksctl get cluster
NAME REGION
demo-cl eu-central-1
eksctl get nodegroup --cluster=demo-cl
Error: getting nodegroup stack summaries: getting nodegroup stacks: no eksctl-managed CloudFormation stacks found for "demo-cl"
Hi @raztud can you tell us what version of eksctl you used to create the cluster?
Also, what happens if you run:
eksctl get nodegroup --cluster=demo-cl --region eu-central-1
?
When there are no nodegroups you should get "No nodegroups found". If you see the error instead, I think it's because eksctl is not finding the cluster; it might be using another region by default.
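A quick way to see which region the AWS CLI, and therefore eksctl unless overridden with --region, defaults to:
```sh
# Print the default region for the current AWS CLI profile
aws configure get region
```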
I haven't been able to reproduce this issue on a cluster created with eksctl 0.16.0 and retrieved with 0.28.0-dev:
logs
$ eksctl create cluster martina-016
[ℹ] eksctl version 0.16.0
[ℹ] using region eu-north-1
[ℹ] setting availability zones to [eu-north-1c eu-north-1a eu-north-1b]
[ℹ] subnets for eu-north-1c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for eu-north-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for eu-north-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-51be17cd" will use "ami-011c9f6c87115ef00" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "martina-016" in "eu-north-1" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-north-1 --cluster=martina-016'
[ℹ] CloudWatch logging will not be enabled for cluster "martina-016" in "eu-north-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=eu-north-1 --cluster=martina-016'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "martina-016" in "eu-north-1"
[ℹ] 2 sequential tasks: { create cluster control plane "martina-016", create nodegroup "ng-51be17cd" }
[ℹ] building cluster stack "eksctl-martina-016-cluster"
[ℹ] deploying stack "eksctl-martina-016-cluster"
[ℹ] building nodegroup stack "eksctl-martina-016-nodegroup-ng-51be17cd"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-51be17cd
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-51be17cd
[ℹ] deploying stack "eksctl-martina-016-nodegroup-ng-51be17cd"
[✔] all EKS cluster resources for "martina-016" have been created
[✔] saved kubeconfig as "/home/martina/.kube/config"
[ℹ] adding identity "arn:aws:iam::123:role/eksctl-martina-016-nodegroup-ng-5-NodeInstanceRole-VF63QAJC83AY" to auth ConfigMap
[ℹ] nodegroup "ng-51be17cd" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-51be17cd"
[ℹ] nodegroup "ng-51be17cd" has 2 node(s)
[ℹ] node "ip-192-168-55-37.eu-north-1.compute.internal" is ready
[ℹ] node "ip-192-168-80-35.eu-north-1.compute.internal" is ready
[ℹ] kubectl command should work with "/home/martina/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "martina-016" in "eu-north-1" region is ready
$ eksctl version
0.28.0-dev+f19da6e8.2020-09-10T10:16:32Z
$ eksctl get cluster
NAME REGION
martina-016 eu-north-1
$ eksctl get nodegroup --cluster martina-016
CLUSTER NODEGROUP CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID
martina-016 ng-51be17cd 2020-09-10T07:57:59Z 2 2 0 m5.large ami-011c9f6c87115ef00
$ eksctl delete nodegroup ng-51be17cd --cluster martina-016
[ℹ] eksctl version 0.28.0-dev+f19da6e8.2020-09-10T10:16:32Z
[ℹ] using region eu-north-1
[ℹ] 1 nodegroup (ng-51be17cd) was included (based on the include/exclude rules)
[ℹ] will drain 1 nodegroup(s) in cluster "martina-016"
[ℹ] cordon node "ip-192-168-55-37.eu-north-1.compute.internal"
[ℹ] cordon node "ip-192-168-80-35.eu-north-1.compute.internal"
[!] ignoring DaemonSet-managed Pods: kube-system/aws-node-qkjnk, kube-system/kube-proxy-zdxs8
[!] ignoring DaemonSet-managed Pods: kube-system/aws-node-wmh7g, kube-system/kube-proxy-xkbn2
[!] ignoring DaemonSet-managed Pods: kube-system/aws-node-wmh7g, kube-system/kube-proxy-xkbn2
[✔] drained nodes: [ip-192-168-55-37.eu-north-1.compute.internal ip-192-168-80-35.eu-north-1.compute.internal]
[ℹ] will delete 1 nodegroups from cluster "martina-016"
[!] retryable error (RequestError: send request failed
caused by: Post "https://cloudformation.eu-north-1.amazonaws.com/": EOF) from cloudformation/ListStacks - will retry after delay of 56.988494ms
[ℹ] 1 task: { delete nodegroup "ng-51be17cd" [async] }
[ℹ] will delete stack "eksctl-martina-016-nodegroup-ng-51be17cd"
[ℹ] will delete 1 nodegroups from auth ConfigMap in cluster "martina-016"
[ℹ] removing identity "arn:aws:iam::123:role/eksctl-martina-016-nodegroup-ng-5-NodeInstanceRole-VF63QAJC83AY" from auth ConfigMap (username = "system:node:{{EC2PrivateDNSName}}", groups = ["system:bootstrappers" "system:nodes"])
[✔] deleted 1 nodegroup(s) from cluster "martina-016"
$ eksctl get nodegroup --cluster martina-016
No nodegroups found