What happened?
eksctl create nodegroup --cluster=kube-test --name=kube-nodes --nodes=1 --full-ecr-access --node-type t3.medium --node-ami=ami-09c3eb35bb3be46a4 --region=ap-south-1 --node-private-networking --node-ami-family=Ubuntu1804
I got the following error, and the stack was rolled back.
[✖] AWS::AutoScaling::AutoScalingGroup/NodeGroup: CREATE_FAILED – "API: autoscaling:CreateAutoScalingGroup You are not authorized to use launch template: eksctl-kube-test-nodegroup-kube-nodes"
What you expected to happen?
All the necessary EC2 IAM roles have been assigned...
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:GetRole",
        "iam:GetInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:ListInstanceProfiles",
        "iam:AddRoleToInstanceProfile",
        "autoscaling:CreateLaunchConfiguration",
        "iam:ListInstanceProfilesForRole",
        "iam:PassRole",
        "iam:DetachRolePolicy",
        "iam:DeleteRolePolicy",
        "autoscaling:UpdateAutoScalingGroup",
        "ec2:DeleteInternetGateway",
        "iam:GetRolePolicy",
        "autoscaling:CreateAutoScalingGroup"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:internet-gateway/*",
        "arn:aws:iam::899249751508:instance-profile/eksctl-*",
        "arn:aws:iam::899249751508:role/eksctl-*",
        "arn:aws:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*",
        "arn:aws:autoscaling:*:*:launchConfiguration:*:launchConfigurationName/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingNotificationTypes",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DeleteSubnet",
        "autoscaling:DescribeScalingProcessTypes",
        "autoscaling:DescribePolicies",
        "ec2:AttachInternetGateway",
        "ec2:DeleteRouteTable",
        "ec2:AssociateRouteTable",
        "ec2:DescribeInternetGateways",
        "autoscaling:DescribeAdjustmentTypes",
        "ec2:GetLaunchTemplateData",
        "autoscaling:DescribeAutoScalingGroups",
        "ec2:CreateRoute",
        "ec2:CreateInternetGateway",
        "ec2:RevokeSecurityGroupEgress",
        "autoscaling:UpdateAutoScalingGroup",
        "ec2:DeleteInternetGateway",
        "autoscaling:DescribeNotificationConfigurations",
        "ec2:DescribeRouteTables",
        "ec2:DescribeLaunchTemplates",
        "ec2:CreateTags",
        "autoscaling:DescribeTags",
        "ec2:CreateRouteTable",
        "cloudformation:*",
        "autoscaling:DescribeMetricCollectionTypes",
        "ec2:DetachInternetGateway",
        "autoscaling:DescribeLoadBalancers",
        "ec2:DisassociateRouteTable",
        "autoscaling-plans:*",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DeleteNatGateway",
        "autoscaling:DeleteAutoScalingGroup",
        "ec2:DeleteVpc",
        "eks:*",
        "ec2:CreateSubnet",
        "ec2:DescribeSubnets",
        "autoscaling:CreateAutoScalingGroup",
        "autoscaling:DescribeAutoScalingInstances",
        "ec2:DescribeAddresses",
        "autoscaling:DescribeTerminationPolicyTypes",
        "ec2:DeleteTags",
        "ec2:CreateNatGateway",
        "autoscaling:DescribeLaunchConfigurations",
        "ec2:CreateVpc",
        "ec2:DescribeVpcAttribute",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DescribeAccountLimits",
        "ec2:CreateSecurityGroup",
        "autoscaling:DescribeScheduledActions",
        "autoscaling:DescribeLoadBalancerTargetGroups",
        "ec2:ModifyVpcAttribute",
        "autoscaling:DescribeLifecycleHookTypes",
        "ec2:ReleaseAddress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:DescribeTags",
        "ec2:DeleteRoute",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribeNatGateways",
        "autoscaling:DescribeLifecycleHooks",
        "ec2:AllocateAddress",
        "ec2:DescribeSecurityGroups",
        "autoscaling:CreateLaunchConfiguration",
        "ec2:DescribeImages",
        "ec2:CreateLaunchTemplate",
        "autoscaling:DeleteLaunchConfiguration",
        "ec2:DescribeVpcs",
        "ec2:DeleteSecurityGroup"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "autoscaling:CreateLaunchConfiguration",
        "ec2:DeleteLaunchTemplate",
        "ec2:ModifyLaunchTemplate",
        "ec2:DeleteLaunchTemplateVersions",
        "ec2:CreateLaunchTemplateVersion"
      ],
      "Resource": [
        "arn:aws:autoscaling:*:*:launchConfiguration:*:launchConfigurationName/*",
        "arn:aws:ec2:*:*:launch-template/*"
      ]
    },
    {
      "Sid": "VisualEditor3",
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": [
        "arn:aws:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*",
        "arn:aws:autoscaling:*:*:launchConfiguration:*:launchConfigurationName/*"
      ]
    }
  ]
}
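As an aside, one quick way to sanity-check a policy document like the one above is to walk its Allow statements and compare the granted actions against a candidate list. The sketch below does an exact-string comparison only (wildcards like "autoscaling:*" are not expanded), and the "required for launch templates" set is an illustrative guess drawn from later comments in this thread, not an authoritative list.

```python
import json

# Actions that launch-template-based nodegroups commonly need; this set is
# an assumption based on fixes reported later in this thread.
REQUIRED_FOR_LAUNCH_TEMPLATES = {
    "ec2:RunInstances",
    "ec2:CreateLaunchTemplate",
    "ec2:CreateLaunchTemplateVersion",
}

def granted_actions(policy: dict) -> set:
    """Collect every Action string from the policy's Allow statements.

    Handles both list-valued and string-valued Action fields, but does NOT
    expand wildcards such as "autoscaling:*" -- exact matching only.
    """
    actions = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        action = stmt.get("Action", [])
        if isinstance(action, str):
            action = [action]
        actions.update(action)
    return actions

def missing_actions(policy: dict) -> set:
    return REQUIRED_FOR_LAUNCH_TEMPLATES - granted_actions(policy)

# A trimmed-down, hypothetical policy: it grants launch *configuration*
# actions but none of the launch *template* ones.
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["autoscaling:CreateLaunchConfiguration",
                "autoscaling:CreateAutoScalingGroup"],
     "Resource": "*"}
  ]
}""")

print(sorted(missing_actions(policy)))
# -> ['ec2:CreateLaunchTemplate', 'ec2:CreateLaunchTemplateVersion', 'ec2:RunInstances']
```

Running this against the full policy above would still report ec2:RunInstances as missing, which matches one of the fixes suggested further down.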
$ eksctl version
version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.33"}
$ uname -a
Linux ip-172-21-114-125 4.15.0-1039-aws #41-Ubuntu SMP Wed May 8 10:43:54 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Logs
2019-06-03T11:49:26Z [ℹ] using region ap-south-1
2019-06-03T11:49:27Z [▶] role ARN for the current session is "arn:aws:sts::899249751508:assumed-role/dml-common-eks-jump-host/i-0a8110b3ebe6aa07a"
2019-06-03T11:49:27Z [▶] VPC CIDR (192.168.0.0/16) was divided into 8 subnets [192.168.0.0/19 192.168.32.0/19 192.168.64.0/19 192.168.96.0/19 192.168.128.0/19 192.168.160.0/19 192.168.192.0/19 192.168.224.0/19]
2019-06-03T11:49:27Z [ℹ] subnets for ap-south-1b - public:192.168.0.0/19 private:192.168.64.0/19
2019-06-03T11:49:27Z [ℹ] subnets for ap-south-1a - public:192.168.32.0/19 private:192.168.96.0/19
2019-06-03T11:49:27Z [ℹ] nodegroup "test-k8s-nodes" will use "ami-09c3eb35bb3be46a4" [Ubuntu1804/1.12]
2019-06-03T11:49:27Z [ℹ] creating EKS cluster "kube-test-2" in "ap-south-1" region
2019-06-03T11:49:27Z [▶] cfg.json = \
{
  "kind": "ClusterConfig",
  "apiVersion": "eksctl.io/v1alpha5",
  "metadata": {
    "name": "kube-test-2",
    "region": "ap-south-1",
    "version": "1.12"
  },
  "iam": {},
  "vpc": {
    "cidr": "192.168.0.0/16",
    "subnets": {
      "private": {
        "ap-south-1a": {
          "cidr": "192.168.96.0/19"
        },
        "ap-south-1b": {
          "cidr": "192.168.64.0/19"
        }
      },
      "public": {
        "ap-south-1a": {
          "cidr": "192.168.32.0/19"
        },
        "ap-south-1b": {
          "cidr": "192.168.0.0/19"
        }
      }
    },
    "autoAllocateIPv6": false
  },
  "nodeGroups": [
    {
      "name": "test-k8s-nodes",
      "ami": "ami-09c3eb35bb3be46a4",
      "amiFamily": "Ubuntu1804",
      "instanceType": "t3.medium",
      "privateNetworking": true,
      "securityGroups": {
        "withShared": true,
        "withLocal": true
      },
      "desiredCapacity": 1,
      "volumeSize": 0,
      "volumeType": "gp2",
      "volumeName": "/dev/xvda",
      "labels": {
        "alpha.eksctl.io/cluster-name": "kube-test-2",
        "alpha.eksctl.io/nodegroup-name": "test-k8s-nodes"
      },
      "ssh": {
        "allow": false
      },
      "iam": {
        "withAddonPolicies": {
          "imageBuilder": true,
          "autoScaler": true,
          "externalDNS": false,
          "appMesh": false,
          "ebs": false,
          "fsx": false,
          "efs": false,
          "albIngress": false,
          "xRay": false
        }
      }
    }
  ],
  "availabilityZones": [
    "ap-south-1b",
    "ap-south-1a"
  ]
}
2019-06-03T11:49:27Z [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2019-06-03T11:49:27Z [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --name=kube-test-2'
2019-06-03T11:49:27Z [ℹ] 2 sequential tasks: { create cluster control plane "kube-test-2", create nodegroup "test-k8s-nodes" }
2019-06-03T11:49:27Z [▶] started task: create cluster control plane "kube-test-2"
2019-06-03T11:49:27Z [ℹ] building cluster stack "eksctl-kube-test-2-cluster"
2019-06-03T11:49:27Z [▶] CreateStackInput = {
Capabilities: ["CAPABILITY_IAM"],
StackName: "eksctl-kube-test-2-cluster",
Tags: [{
Key: "alpha.eksctl.io/cluster-name",
Value: "kube-test-2"
},{
Key: "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
Value: "kube-test-2"
}],
TemplateBody: "{\"AWSTemplateFormatVersion\":\"2010-09-09\",\"Description\":\"EKS cluster (dedicated VPC: true, dedicated IAM: true) [created and managed by eksctl]\",\"Resources\":{\"ClusterSharedNodeSecurityGroup\":{\"Type\":\"AWS::EC2::SecurityGroup\",\"Properties\":{\"GroupDescription\":\"Communication between all nodes in the cluster\",\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/ClusterSharedNodeSecurityGroup\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"ControlPlane\":{\"Type\":\"AWS::EKS::Cluster\",\"Properties\":{\"Name\":\"kube-test-2\",\"ResourcesVpcConfig\":{\"SecurityGroupIds\":[{\"Ref\":\"ControlPlaneSecurityGroup\"}],\"SubnetIds\":[{\"Ref\":\"SubnetPublicAPSOUTH1B\"},{\"Ref\":\"SubnetPublicAPSOUTH1A\"},{\"Ref\":\"SubnetPrivateAPSOUTH1B\"},{\"Ref\":\"SubnetPrivateAPSOUTH1A\"}]},\"RoleArn\":{\"Fn::GetAtt\":\"ServiceRole.Arn\"},\"Version\":\"1.12\"}},\"ControlPlaneSecurityGroup\":{\"Type\":\"AWS::EC2::SecurityGroup\",\"Properties\":{\"GroupDescription\":\"Communication between the control plane and worker nodegroups\",\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/ControlPlaneSecurityGroup\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"IngressInterNodeGroupSG\":{\"Type\":\"AWS::EC2::SecurityGroupIngress\",\"Properties\":{\"Description\":\"Allow nodes to communicate with each other (all 
ports)\",\"FromPort\":0,\"GroupId\":{\"Ref\":\"ClusterSharedNodeSecurityGroup\"},\"IpProtocol\":\"-1\",\"SourceSecurityGroupId\":{\"Ref\":\"ClusterSharedNodeSecurityGroup\"},\"ToPort\":65535}},\"InternetGateway\":{\"Type\":\"AWS::EC2::InternetGateway\",\"Properties\":{\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/InternetGateway\"}}]}},\"NATGateway\":{\"Type\":\"AWS::EC2::NatGateway\",\"Properties\":{\"AllocationId\":{\"Fn::GetAtt\":\"NATIP.AllocationId\"},\"SubnetId\":{\"Ref\":\"SubnetPublicAPSOUTH1B\"},\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/NATGateway\"}}]}},\"NATIP\":{\"Type\":\"AWS::EC2::EIP\",\"Properties\":{\"Domain\":\"vpc\"}},\"PolicyCloudWatchMetrics\":{\"Type\":\"AWS::IAM::Policy\",\"Properties\":{\"PolicyDocument\":{\"Statement\":[{\"Action\":[\"cloudwatch:PutMetricData\"],\"Effect\":\"Allow\",\"Resource\":\"\"}],\"Version\":\"2012-10-17\"},\"PolicyName\":{\"Fn::Sub\":\"${AWS::StackName}-PolicyCloudWatchMetrics\"},\"Roles\":[{\"Ref\":\"ServiceRole\"}]}},\"PolicyNLB\":{\"Type\":\"AWS::IAM::Policy\",\"Properties\":{\"PolicyDocument\":{\"Statement\":[{\"Action\":[\"elasticloadbalancing:\",\"ec2:CreateSecurityGroup\",\"ec2:Describe\"],\"Effect\":\"Allow\",\"Resource\":\"\"}],\"Version\":\"2012-10-17\"},\"PolicyName\":{\"Fn::Sub\":\"${AWS::StackName}-PolicyNLB\"},\"Roles\":[{\"Ref\":\"ServiceRole\"}]}},\"PrivateRouteTable\":{\"Type\":\"AWS::EC2::RouteTable\",\"Properties\":{\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/PrivateRouteTable\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"PrivateSubnetRoute\":{\"Type\":\"AWS::EC2::Route\",\"Properties\":{\"DestinationCidrBlock\":\"0.0.0.0/0\",\"NatGatewayId\":{\"Ref\":\"NATGateway\"},\"RouteTableId\":{\"Ref\":\"PrivateRouteTable\"}}},\"PublicRouteTable\":{\"Type\":\"AWS::EC2::RouteTable\",\"Properties\":{\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/PublicRouteTable\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"PublicSubnetRoute\
":{\"Type\":\"AWS::EC2::Route\",\"Properties\":{\"DestinationCidrBlock\":\"0.0.0.0/0\",\"GatewayId\":{\"Ref\":\"InternetGateway\"},\"RouteTableId\":{\"Ref\":\"PublicRouteTable\"}}},\"RouteTableAssociationPrivateAPSOUTH1A\":{\"Type\":\"AWS::EC2::SubnetRouteTableAssociation\",\"Properties\":{\"RouteTableId\":{\"Ref\":\"PrivateRouteTable\"},\"SubnetId\":{\"Ref\":\"SubnetPrivateAPSOUTH1A\"}}},\"RouteTableAssociationPrivateAPSOUTH1B\":{\"Type\":\"AWS::EC2::SubnetRouteTableAssociation\",\"Properties\":{\"RouteTableId\":{\"Ref\":\"PrivateRouteTable\"},\"SubnetId\":{\"Ref\":\"SubnetPrivateAPSOUTH1B\"}}},\"RouteTableAssociationPublicAPSOUTH1A\":{\"Type\":\"AWS::EC2::SubnetRouteTableAssociation\",\"Properties\":{\"RouteTableId\":{\"Ref\":\"PublicRouteTable\"},\"SubnetId\":{\"Ref\":\"SubnetPublicAPSOUTH1A\"}}},\"RouteTableAssociationPublicAPSOUTH1B\":{\"Type\":\"AWS::EC2::SubnetRouteTableAssociation\",\"Properties\":{\"RouteTableId\":{\"Ref\":\"PublicRouteTable\"},\"SubnetId\":{\"Ref\":\"SubnetPublicAPSOUTH1B\"}}},\"ServiceRole\":{\"Type\":\"AWS::IAM::Role\",\"Properties\":{\"AssumeRolePolicyDocument\":{\"Statement\":[{\"Action\":[\"sts:AssumeRole\"],\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"eks.amazonaws.com\"]}}],\"Version\":\"2012-10-17\"},\"ManagedPolicyArns\":[\"arn:aws:iam::aws:policy/AmazonEKSServicePolicy\",\"arn:aws:iam::aws:policy/AmazonEKSClusterPolicy\"]}},\"SubnetPrivateAPSOUTH1A\":{\"Type\":\"AWS::EC2::Subnet\",\"Properties\":{\"AvailabilityZone\":\"ap-south-1a\",\"CidrBlock\":\"192.168.96.0/19\",\"Tags\":[{\"Key\":\"kubernetes.io/role/internal-elb\",\"Value\":\"1\"},{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/SubnetPrivateAPSOUTH1A\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"SubnetPrivateAPSOUTH1B\":{\"Type\":\"AWS::EC2::Subnet\",\"Properties\":{\"AvailabilityZone\":\"ap-south-1b\",\"CidrBlock\":\"192.168.64.0/19\",\"Tags\":[{\"Key\":\"kubernetes.io/role/internal-elb\",\"Value\":\"1\"},{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackNam
e}/SubnetPrivateAPSOUTH1B\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"SubnetPublicAPSOUTH1A\":{\"Type\":\"AWS::EC2::Subnet\",\"Properties\":{\"AvailabilityZone\":\"ap-south-1a\",\"CidrBlock\":\"192.168.32.0/19\",\"Tags\":[{\"Key\":\"kubernetes.io/role/elb\",\"Value\":\"1\"},{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/SubnetPublicAPSOUTH1A\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"SubnetPublicAPSOUTH1B\":{\"Type\":\"AWS::EC2::Subnet\",\"Properties\":{\"AvailabilityZone\":\"ap-south-1b\",\"CidrBlock\":\"192.168.0.0/19\",\"Tags\":[{\"Key\":\"kubernetes.io/role/elb\",\"Value\":\"1\"},{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/SubnetPublicAPSOUTH1B\"}}],\"VpcId\":{\"Ref\":\"VPC\"}}},\"VPC\":{\"Type\":\"AWS::EC2::VPC\",\"Properties\":{\"CidrBlock\":\"192.168.0.0/16\",\"EnableDnsHostnames\":true,\"EnableDnsSupport\":true,\"Tags\":[{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/VPC\"}}]}},\"VPCGatewayAttachment\":{\"Type\":\"AWS::EC2::VPCGatewayAttachment\",\"Properties\":{\"InternetGatewayId\":{\"Ref\":\"InternetGateway\"},\"VpcId\":{\"Ref\":\"VPC\"}}}},\"Outputs\":{\"ARN\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::ARN\"}},\"Value\":{\"Fn::GetAtt\":\"ControlPlane.Arn\"}},\"CertificateAuthorityData\":{\"Value\":{\"Fn::GetAtt\":\"ControlPlane.CertificateAuthorityData\"}},\"ClusterStackName\":{\"Value\":{\"Ref\":\"AWS::StackName\"}},\"Endpoint\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::Endpoint\"}},\"Value\":{\"Fn::GetAtt\":\"ControlPlane.Endpoint\"}},\"SecurityGroup\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::SecurityGroup\"}},\"Value\":{\"Ref\":\"ControlPlaneSecurityGroup\"}},\"ServiceRoleARN\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::ServiceRoleARN\"}},\"Value\":{\"Fn::GetAtt\":\"ServiceRole.Arn\"}},\"SharedNodeSecurityGroup\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::SharedNodeSecurityGroup\"}},\"Value\":{\"Ref\":\"ClusterSharedNodeSecurityGroup\"}},\"SubnetsPrivat
e\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::SubnetsPrivate\"}},\"Value\":{\"Fn::Join\":[\",\",[{\"Ref\":\"SubnetPrivateAPSOUTH1B\"},{\"Ref\":\"SubnetPrivateAPSOUTH1A\"}]]}},\"SubnetsPublic\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::SubnetsPublic\"}},\"Value\":{\"Fn::Join\":[\",\",[{\"Ref\":\"SubnetPublicAPSOUTH1B\"},{\"Ref\":\"SubnetPublicAPSOUTH1A\"}]]}},\"VPC\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::VPC\"}},\"Value\":{\"Ref\":\"VPC\"}}}}"
}
2019-06-03T12:00:29Z [▶] done after 11m2.306584745s of waiting for CloudFormation stack "eksctl-kube-test-2-cluster" to reach "CREATE_COMPLETE" status
2019-06-03T12:00:29Z [▶] processing stack outputs
2019-06-03T12:00:30Z [▶] completed task: create cluster control plane "kube-test-2"
2019-06-03T12:00:30Z [▶] started task: create nodegroup "test-k8s-nodes"
2019-06-03T12:00:30Z [▶] waiting for 1 parallel tasks to complete
2019-06-03T12:00:30Z [▶] started task: create nodegroup "test-k8s-nodes"
2019-06-03T12:00:30Z [ℹ] building nodegroup stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes"
2019-06-03T12:00:30Z [▶] user-data = H4sIAAAAAAAA/6xYaXPiON5/n0+hJ9NVT3dljDFXgCpvjS+OcF8JZGsrJdvCFtiSI8mYkO3vvmUbCEn3zGa61i/olvT7X9L/zG9OQGNXcihZY+8qgs4Weog3AYmD4IrFxAnd5pUEJCDvIJMDbMsZgcwdhiPB5QgxCRMuIHGQbFMquGAwKsR2TERc4P5VwrBAT2scIJ4ycigRiIgm+Ld0BQAA2sPsybRa2qI/f5pa7e5oqMJI4jQWvqS
2019-06-03T12:00:30Z [ℹ] --nodes-min=1 was set automatically for nodegroup test-k8s-nodes
2019-06-03T12:00:30Z [ℹ] --nodes-max=1 was set automatically for nodegroup test-k8s-nodes
2019-06-03T12:00:30Z [▶] CreateStackInput = {
Capabilities: ["CAPABILITY_IAM"],
StackName: "eksctl-kube-test-2-nodegroup-test-k8s-nodes",
Tags: [
{
Key: "alpha.eksctl.io/cluster-name",
Value: "kube-test-2"
},
{
Key: "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
Value: "kube-test-2"
},
{
Key: "alpha.eksctl.io/nodegroup-name",
Value: "test-k8s-nodes"
},
{
Key: "eksctl.io/v1alpha2/nodegroup-name",
Value: "test-k8s-nodes"
}
],
TemplateBody: "{\"AWSTemplateFormatVersion\":\"2010-09-09\",\"Description\":\"EKS nodes (AMI family: Ubuntu1804, SSH access: false, private networking: true) [created and managed by eksctl]\",\"Resources\":{\"EgressInterCluster\":{\"Type\":\"AWS::EC2::SecurityGroupEgress\",\"Properties\":{\"Description\":\"Allow control plane to communicate with worker nodes in group test-k8s-nodes (kubelet and workload TCP ports)\",\"DestinationSecurityGroupId\":{\"Ref\":\"SG\"},\"FromPort\":1025,\"GroupId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SecurityGroup\"},\"IpProtocol\":\"tcp\",\"ToPort\":65535}},\"EgressInterClusterAPI\":{\"Type\":\"AWS::EC2::SecurityGroupEgress\",\"Properties\":{\"Description\":\"Allow control plane to communicate with worker nodes in group test-k8s-nodes (workloads using HTTPS port, commonly used with extension API servers)\",\"DestinationSecurityGroupId\":{\"Ref\":\"SG\"},\"FromPort\":443,\"GroupId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SecurityGroup\"},\"IpProtocol\":\"tcp\",\"ToPort\":443}},\"IngressInterCluster\":{\"Type\":\"AWS::EC2::SecurityGroupIngress\",\"Properties\":{\"Description\":\"Allow worker nodes in group test-k8s-nodes to communicate with control plane (kubelet and workload TCP ports)\",\"FromPort\":1025,\"GroupId\":{\"Ref\":\"SG\"},\"IpProtocol\":\"tcp\",\"SourceSecurityGroupId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SecurityGroup\"},\"ToPort\":65535}},\"IngressInterClusterAPI\":{\"Type\":\"AWS::EC2::SecurityGroupIngress\",\"Properties\":{\"Description\":\"Allow worker nodes in group test-k8s-nodes to communicate with control plane (workloads using HTTPS port, commonly used with extension API servers)\",\"FromPort\":443,\"GroupId\":{\"Ref\":\"SG\"},\"IpProtocol\":\"tcp\",\"SourceSecurityGroupId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SecurityGroup\"},\"ToPort\":443}},\"IngressInterClusterCP\":{\"Type\":\"AWS::EC2::SecurityGroupIngress\",\"Properties\":{\"Description\":\"Allow control 
plane to receive API requests from worker nodes in group test-k8s-nodes\",\"FromPort\":443,\"GroupId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SecurityGroup\"},\"IpProtocol\":\"tcp\",\"SourceSecurityGroupId\":{\"Ref\":\"SG\"},\"ToPort\":443}},\"NodeGroup\":{\"Type\":\"AWS::AutoScaling::AutoScalingGroup\",\"Properties\":{\"DesiredCapacity\":\"1\",\"LaunchTemplate\":{\"LaunchTemplateName\":{\"Fn::Sub\":\"${AWS::StackName}\"},\"Version\":{\"Fn::GetAtt\":\"NodeGroupLaunchTemplate.LatestVersionNumber\"}},\"MaxSize\":\"1\",\"MinSize\":\"1\",\"Tags\":[{\"Key\":\"Name\",\"PropagateAtLaunch\":\"true\",\"Value\":\"kube-test-2-test-k8s-nodes-Node\"},{\"Key\":\"kubernetes.io/cluster/kube-test-2\",\"PropagateAtLaunch\":\"true\",\"Value\":\"owned\"},{\"Key\":\"k8s.io/cluster-autoscaler/enabled\",\"PropagateAtLaunch\":\"true\",\"Value\":\"true\"},{\"Key\":\"k8s.io/cluster-autoscaler/kube-test-2\",\"PropagateAtLaunch\":\"true\",\"Value\":\"owned\"}],\"VPCZoneIdentifier\":{\"Fn::Split\":[\",\",{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SubnetsPrivate\"}]}},\"UpdatePolicy\":{\"AutoScalingRollingUpdate\":{\"MaxBatchSize\":\"1\",\"MinInstancesInService\":\"1\"}}},\"NodeGroupLaunchTemplate\":{\"Type\":\"AWS::EC2::LaunchTemplate\",\"Properties\":{\"LaunchTemplateData\":{\"IamInstanceProfile\":{\"Arn\":{\"Fn::GetAtt\":\"NodeInstanceProfile.Arn\"}},\"ImageId\":\"ami-09c3eb35bb3be46a4\",\"InstanceType\":\"t3.medium\",\"NetworkInterfaces\":[{\"AssociatePublicIpAddress\":false,\"DeviceIndex\":0,\"Groups\":[{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::SharedNodeSecurityGroup\"},{\"Ref\":\"SG\"}]}],\"UserData\":\"\"},\"LaunchTemplateName\":{\"Fn::Sub\":\"${AWS::StackName}\"}}},\"NodeInstanceProfile\":{\"Type\":\"AWS::IAM::InstanceProfile\",\"Properties\":{\"Path\":\"/\",\"Roles\":[{\"Ref\":\"NodeInstanceRole\"}]}},\"NodeInstanceRole\":{\"Type\":\"AWS::IAM::Role\",\"Properties\":{\"AssumeRolePolicyDocument\":{\"Statement\":[{\"Action\":[\"sts:AssumeRole\"],\"Effect\":\"A
llow\",\"Principal\":{\"Service\":[\"ec2.amazonaws.com\"]}}],\"Version\":\"2012-10-17\"},\"ManagedPolicyArns\":[\"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy\",\"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy\",\"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser\"],\"Path\":\"/\"}},\"PolicyAutoScaling\":{\"Type\":\"AWS::IAM::Policy\",\"Properties\":{\"PolicyDocument\":{\"Statement\":[{\"Action\":[\"autoscaling:DescribeAutoScalingGroups\",\"autoscaling:DescribeAutoScalingInstances\",\"autoscaling:DescribeLaunchConfigurations\",\"autoscaling:DescribeTags\",\"autoscaling:SetDesiredCapacity\",\"autoscaling:TerminateInstanceInAutoScalingGroup\",\"ec2:DescribeLaunchTemplateVersions\"],\"Effect\":\"Allow\",\"Resource\":\"*\"}],\"Version\":\"2012-10-17\"},\"PolicyName\":{\"Fn::Sub\":\"${AWS::StackName}-PolicyAutoScaling\"},\"Roles\":[{\"Ref\":\"NodeInstanceRole\"}]}},\"SG\":{\"Type\":\"AWS::EC2::SecurityGroup\",\"Properties\":{\"GroupDescription\":\"Communication between the control plane and worker nodes in group test-k8s-nodes\",\"Tags\":[{\"Key\":\"kubernetes.io/cluster/kube-test-2\",\"Value\":\"owned\"},{\"Key\":\"Name\",\"Value\":{\"Fn::Sub\":\"${AWS::StackName}/SG\"}}],\"VpcId\":{\"Fn::ImportValue\":\"eksctl-kube-test-2-cluster::VPC\"}}}},\"Outputs\":{\"FeatureLocalSecurityGroup\":{\"Value\":true},\"FeaturePrivateNetworking\":{\"Value\":true},\"FeatureSharedSecurityGroup\":{\"Value\":true},\"InstanceProfileARN\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::InstanceProfileARN\"}},\"Value\":{\"Fn::GetAtt\":\"NodeInstanceProfile.Arn\"}},\"InstanceRoleARN\":{\"Export\":{\"Name\":{\"Fn::Sub\":\"${AWS::StackName}::InstanceRoleARN\"}},\"Value\":{\"Fn::GetAtt\":\"NodeInstanceRole.Arn\"}}}}"
}
2019-06-03T12:00:30Z [ℹ] deploying stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes"
2019-06-03T12:00:30Z [▶] start waiting for CloudFormation stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes" to reach "CREATE_COMPLETE" status
2019-06-03T12:00:30Z [▶] waiting for CloudFormation stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes" to reach "CREATE_COMPLETE" status
[... same "waiting" message repeated every 15-20 seconds until 12:03:56 ...]
2019-06-03T12:03:56Z [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes"
2019-06-03T12:03:56Z [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2019-06-03T12:03:56Z [ℹ] AWS::CloudFormation::Stack/eksctl-kube-test-2-nodegroup-test-k8s-nodes: ROLLBACK_IN_PROGRESS – "The following resource(s) failed to create: [NodeGroup]. . Rollback requested by user."
2019-06-03T12:03:56Z [✖] AWS::AutoScaling::AutoScalingGroup/NodeGroup: CREATE_FAILED – "API: autoscaling:CreateAutoScalingGroup You are not authorized to use launch template: eksctl-kube-test-2-nodegroup-test-k8s-nodes"
2019-06-03T12:03:56Z [ℹ] AWS::AutoScaling::AutoScalingGroup/NodeGroup: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::LaunchTemplate/NodeGroupLaunchTemplate: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::LaunchTemplate/NodeGroupLaunchTemplate: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::LaunchTemplate/NodeGroupLaunchTemplate: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::IAM::InstanceProfile/NodeInstanceProfile: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Policy/PolicyAutoScaling: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Policy/PolicyAutoScaling: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::IAM::InstanceProfile/NodeInstanceProfile: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Policy/PolicyAutoScaling: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::IAM::InstanceProfile/NodeInstanceProfile: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Role/NodeInstanceRole: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterClusterAPI: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterCluster: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterCluster: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterAPI: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterCP: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterCluster: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterCluster: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterCluster: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterClusterAPI: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterAPI: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterCP: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterAPI: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupIngress/IngressInterClusterCP: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterCluster: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroupEgress/EgressInterClusterAPI: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroup/SG: CREATE_COMPLETE
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroup/SG: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Role/NodeInstanceRole: CREATE_IN_PROGRESS – "Resource creation Initiated"
2019-06-03T12:03:56Z [ℹ] AWS::IAM::Role/NodeInstanceRole: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::EC2::SecurityGroup/SG: CREATE_IN_PROGRESS
2019-06-03T12:03:56Z [ℹ] AWS::CloudFormation::Stack/eksctl-kube-test-2-nodegroup-test-k8s-nodes: CREATE_IN_PROGRESS – "User Initiated"
2019-06-03T12:03:56Z [ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2019-06-03T12:03:56Z [ℹ] to cleanup resources, run 'eksctl delete cluster --region=ap-south-1 --name=kube-test-2'
2019-06-03T12:03:56Z [✖] waiting for CloudFormation stack "eksctl-kube-test-2-nodegroup-test-k8s-nodes" to reach "CREATE_COMPLETE" status: ResourceNotReady: failed waiting for successful resource state
2019-06-03T12:03:56Z [✖] failed to create cluster "kube-test-2"
Hi @usopn, did you copy this role from #204 or define it yourself?
@errordeveloper I started with the role from #204, then kept facing issues and kept adding more permissions... finally I added everything that made sense...
I believe the role needs to be adjusted to support launch templates instead of launch configurations.
I can see you added some rules to cater for launch configurations, but I'm not able to visually evaluate the overall set of rules. I'd rather focus on a generalised solution for #204, as things evolve often and it's hard to keep an up-to-date list of rules.
Actually, this is not an IAM issue. The error message says 'not authorized to _use_ launch template': it's about _using_ it, not _creating_ it. You should see that the launch template actually gets created. The issue is with the launch template itself, and most of the time this comes down to instance types. Some regions/zones don't offer t2 and t3 instances, or some orgs don't allow people to use certain instance types. So please try using a different instance type.
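To make that availability check concrete, here is a small sketch. The response shape mimics what EC2's DescribeInstanceTypeOfferings call returns (e.g. via `aws ec2 describe-instance-type-offerings --location-type availability-zone`); the offerings data below is made up for illustration.

```python
# Check which target zones do NOT offer a given instance type.

def zones_offering(offerings, instance_type):
    """Return the set of zones where instance_type is offered."""
    return {
        o["Location"]
        for o in offerings
        if o["InstanceType"] == instance_type
    }

# Hypothetical offerings data, as would come back from
# DescribeInstanceTypeOfferings for the region.
sample_offerings = [
    {"InstanceType": "t3.medium", "Location": "ap-south-1a"},
    {"InstanceType": "t3.medium", "Location": "ap-south-1b"},
    {"InstanceType": "t2.nano",   "Location": "ap-south-1a"},
]

wanted = {"ap-south-1a", "ap-south-1b"}
missing = wanted - zones_offering(sample_offerings, "t2.nano")
print(missing)  # -> {'ap-south-1b'}
```

If `missing` is non-empty for the zones your nodegroup's subnets live in, the launch template will be created but the Auto Scaling group will fail to use it, which matches the error seen here.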
/cc @Jeffwan @mhausenblas
Actually, this is already covered by #792.
@errordeveloper i can confirm that instance type being used is indeed available in my region and my account. Still couldn't figure out the reason for this issue.
It is possible that the instance type is available, but your organisation disallows it.
I got the same message when trying to assign a custom role ARN to a nodegroup. Would you have an idea of the reason? Thanks.
Could you please provide more context? What config file did you use, and which commands did you run?
@errordeveloper
Sure.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: backstage-demo
  region: us-east-1
availabilityZones: ["us-east-1a", "us-east-1d"]
nodeGroups:
  - name: ng-1
    desiredCapacity: 2
    iam:
      instanceRoleARN: "arn:aws:iam::<dev-aws-account>:role/EKSWorkerNode-BaseRole"
      instanceProfileARN: "arn:aws:iam::<dev-aws-account>:instance-profile/EKSWorkerNode-BaseRole"
I ran
eksctl create nodegroup --config-file=cluster.yml --include=ng-1
Creating a nodegroup with eksctl create nodegroup creates the nodegroup fine
For additional context: Our organization uses a multi-account environment. To run this command I assume a role on the Dev AWS account. The ARNs specified above exist on this dev account as well.
@errordeveloper: Thanks a lot for mentioning the issue with the instance type here. I was trying to use t2.nano and it was throwing the same error as the original poster's. Switching to t2.micro worked without any issue. I wish the error message could have been a little better. :)
@GarbageYard the error is indeed very ambiguous.
@artemmikhalitsin If you haven't resolved the issue yet, does the error occur when you create this cluster in your dev account? Does it occur if you don't specify instanceRoleARN and instanceProfileARN?
@artemmikhalitsin I apologize for the belated reply; it would be a good idea to open a new issue, as what you are seeing looks different from the more common case in which that error message occurs (i.e. what @GarbageYard was seeing).
Hello guys, I solved this issue by adding these permissions:
"ec2:RunInstances"
"ec2:CreateLaunchTemplate"
"ec2:CreateLaunchTemplateVersion"
I'm using t3.micro instances.
@wolfgangtz Thanks for your example. Below is mine, tweaked a little bit; it worked like a charm for me. I'm putting it here since other people may be searching the web for this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateLaunchTemplateVersion",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DeleteLaunchTemplate",
        "ec2:DeleteLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
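Before attaching a policy like this, it can help to sanity-check the JSON locally (a minimal sketch; the file path is arbitrary):

```shell
# Write the minimal launch-template policy to a file and validate the JSON.
# The policy body mirrors the example above; the file name is arbitrary.
cat > /tmp/launch-template-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateLaunchTemplateVersion",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DeleteLaunchTemplate",
        "ec2:DeleteLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, so this catches copy-paste errors.
python3 -m json.tool /tmp/launch-template-policy.json > /dev/null && echo "policy JSON is valid"
```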
We have AmazonEC2FullAccess assigned to the deployment role we use with eksctl. When specifying instanceRoleARN and instanceProfileARN, we got the following error.
API: autoscaling:CreateAutoScalingGroup You are not authorized to use launch template: eksctl-blah-blah-blah
@davidmgrantham , have you tried my above policy?
@peix2, no, but we are using the managed policy I mentioned above, which contains this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": [
            "autoscaling.amazonaws.com",
            "ec2scheduled.amazonaws.com",
            "elasticloadbalancing.amazonaws.com",
            "spot.amazonaws.com",
            "spotfleet.amazonaws.com",
            "transitgateway.amazonaws.com"
          ]
        }
      }
    }
  ]
}
What I have set up is what's described on the eksctl.io webpage, plus the example I shared above as a complement. It was enough for me to get it working.
Do you mean this? https://eksctl.io/usage/minimum-iam-policies/
Are you attaching custom nodegroup ARNs and instance profiles when deploying the cluster?
Can you share them?
We had the same issue, and we finally ended up with the following policies on the user that executes eksctl: the AWSCloudFormationFullAccess and AmazonEC2FullAccess managed policies, plus this inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
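For anyone reproducing this setup, it can be applied with the AWS CLI roughly as follows (a sketch; the user name and policy file path are placeholders, and the CLI must already be configured with enough IAM permissions to modify users):

```shell
# Attach the two AWS managed policies to the user that runs eksctl
# ("eksctl-user" is a placeholder name).
aws iam attach-user-policy --user-name eksctl-user \
  --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
aws iam attach-user-policy --user-name eksctl-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# Add the iam:PassRole statement as an inline policy; the file should
# contain the JSON shown above.
aws iam put-user-policy --user-name eksctl-user \
  --policy-name eksctl-pass-role \
  --policy-document file://passrole-policy.json
```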