What help do you need?
When I use `managedNodeGroups` in the cluster config, for example:
```yaml
managedNodeGroups:
  - name: managed-ng-gitops
    labels: { role: managed-ng-gitops }
    instanceType: c5.xlarge
    desiredCapacity: 2
    ssh:
      allow: true
    maxSize: 6
    volumeSize: 60
    iam:
      withAddonPolicies:
        externalDNS: true
        certManager: true
        imageBuilder: true
        ebs: true
        fsx: true
        efs: true
        appMesh: true
        xRay: true
        autoScaler: true
        albIngress: true
        cloudWatch: true
```
it causes a failure when running `eksctl create cluster` (eksctl 0.10.1); see the following error messages:
```
[✖]  unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-eksctl-gitops-managed-nodegroup-managed-ng-gitops"
[ℹ]  fetching stack events in attempt to troubleshoot the root cause of the failure
[✖]  AWS::IAM::Policy/PolicyCertManagerHostedZones: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyCertManagerChangeSet: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyEFS: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyCertManagerGetChange: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyServiceLinkRole: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyFSX: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyEFSEC2: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyEBS: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyXRay: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyAppMesh: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyALBIngress: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyAutoScaling: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "The provided role doesn't have the Amazon EKS Managed Policies associated with it. Please ensure the following policies [arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy, arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy, arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly] are attached (Service: AmazonEKS; Status Code: 400; Error Code: InvalidParameterException; Request ID: 9c222df4-33c0-47e5-9453-8d9ba09a5877)"
```
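For what it's worth, the last error is EKS itself validating the node role: for managed node groups the service requires exactly those three AWS-managed policy ARNs to be attached before it accepts the role. That validation can be sketched as a small standalone check (a hypothetical helper for illustration, not eksctl code):

```go
package main

import "fmt"

// The three AWS-managed policies that EKS managed node groups require on
// the node role, per the InvalidParameterException above.
var requiredNodePolicyARNs = []string{
	"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
	"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
	"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
}

// missingNodePolicies reports which required ARNs are absent from the
// role's attached policies.
func missingNodePolicies(attached []string) []string {
	have := make(map[string]bool, len(attached))
	for _, arn := range attached {
		have[arn] = true
	}
	var missing []string
	for _, arn := range requiredNodePolicyARNs {
		if !have[arn] {
			missing = append(missing, arn)
		}
	}
	return missing
}

func main() {
	// With only withAddonPolicies set, eksctl 0.10.1 attached the two
	// default node policies but not the ECR read-only one.
	attached := []string{
		"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
		"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
	}
	fmt.Println(missingNodePolicies(attached))
	// prints: [arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly]
}
```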
It's confusing which property is supposed to make this work, and I can't find it documented anywhere.
I'm also facing the same issue:
```yaml
managedNodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 3
    minSize: 1
    maxSize: 5
    availabilityZones:
      - ap-northeast-1a
      - ap-northeast-1c
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/id_rsa.pub
    iam:
      attachPolicyARNs:
        # EKS worker nodes require the following two policies at least
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        # Attaching custom policies
        - arn:aws:iam::aws:policy/AmazonSQSFullAccess
        - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
        - arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess
        - arn:aws:iam::<Account ID>:policy/KinesisFirehosePutRecords
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        ebs: true
```
```
[✖]  AWS::IAM::Policy/PolicyEBS: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::IAM::Policy/PolicyAutoScaling: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "The provided role doesn't have the Amazon EKS Managed Policies associated with it. Please ensure the following policies [arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy, arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy, arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly] are attached (Service: AmazonEKS; Status Code: 400; Error Code: InvalidParameterException; Request ID: 87023c5e-908b-441d-98fc-70cb60c1b87e)"
```
I found that `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly` is part of the minimum IAM policy set required on managed worker nodes, so the following configuration works fine for me:
```yaml
managedNodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 3
    minSize: 1
    maxSize: 5
    availabilityZones:
      - ap-northeast-1a
      - ap-northeast-1c
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/id_rsa.pub
    iam:
      attachPolicyARNs:
        # EKS worker nodes require the following two policies at least
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        # Attaching custom policies
        - arn:aws:iam::aws:policy/AmazonSQSFullAccess
        - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
        - arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess
        - arn:aws:iam::<Account ID>:policy/KinesisFirehosePutRecords
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        ebs: true
```
It seems we need a patch along the following lines to address this issue when using only the `withAddonPolicies` block, as @cc4i does. I'm not sure the eksctl team would consider this the best solution, though, because the resulting role contains duplicate policies for operating ECR resources:
```diff
diff --git a/pkg/cfn/builder/iam_helper.go b/pkg/cfn/builder/iam_helper.go
index f5be3996..4edf7395 100644
--- a/pkg/cfn/builder/iam_helper.go
+++ b/pkg/cfn/builder/iam_helper.go
@@ -16,12 +16,11 @@ func createRole(cfnTemplate cfnTemplate, iamConfig *api.NodeGroupIAM) {
 	attachPolicyARNs := iamConfig.AttachPolicyARNs
 	if len(attachPolicyARNs) == 0 {
 		attachPolicyARNs = iamDefaultNodePolicyARNs
+		attachPolicyARNs = append(attachPolicyARNs, iamPolicyAmazonEC2ContainerRegistryReadOnlyARN)
 	}
 	if api.IsEnabled(iamConfig.WithAddonPolicies.ImageBuilder) {
 		attachPolicyARNs = append(attachPolicyARNs, iamPolicyAmazonEC2ContainerRegistryPowerUserARN)
-	} else {
-		attachPolicyARNs = append(attachPolicyARNs, iamPolicyAmazonEC2ContainerRegistryReadOnlyARN)
 	}
 	if api.IsEnabled(iamConfig.WithAddonPolicies.CloudWatch) {
```
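Extracted into a standalone function, the patched selection logic behaves as follows. This is only a sketch of the behaviour under my reading of the diff, not the actual `iam_helper.go`; the ARN variables are stand-ins whose names mirror eksctl's constants:

```go
package main

import "fmt"

// Stand-ins mirroring eksctl's policy constants in iam_helper.go.
var iamDefaultNodePolicyARNs = []string{
	"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
	"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
}

const (
	ecrReadOnlyARN  = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
	ecrPowerUserARN = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
)

// nodePolicyARNs reproduces the patched behaviour: when the user gives no
// explicit attachPolicyARNs, the defaults now always include ECR read-only,
// so the managed-nodegroup validation passes even with imageBuilder enabled.
func nodePolicyARNs(attachPolicyARNs []string, imageBuilder bool) []string {
	if len(attachPolicyARNs) == 0 {
		attachPolicyARNs = append([]string(nil), iamDefaultNodePolicyARNs...)
		attachPolicyARNs = append(attachPolicyARNs, ecrReadOnlyARN)
	}
	if imageBuilder {
		// The duplication concern: the role now carries both the
		// read-only and power-user ECR policies.
		attachPolicyARNs = append(attachPolicyARNs, ecrPowerUserARN)
	}
	return attachPolicyARNs
}

func main() {
	// Prints the two default node policies plus both ECR policies.
	fmt.Println(nodePolicyARNs(nil, true))
}
```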
Otherwise, I think you can explicitly define the `attachPolicyARNs` block as follows to mitigate this issue:
```yaml
managedNodeGroups:
  - name: managed-ng-gitops
    instanceType: c5.xlarge
    desiredCapacity: 2
    maxSize: 6
    volumeSize: 60
    ssh:
      allow: true
    iam:
      attachPolicyARNs:
        # EKS worker nodes require the following two policies at least
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      withAddonPolicies:
        externalDNS: true
        certManager: true
        imageBuilder: true
        ebs: true
        fsx: true
        efs: true
        appMesh: true
        xRay: true
        autoScaler: true
        albIngress: true
        cloudWatch: true
        ...
```
One level up, the EKS cluster itself requires that `arn:aws:iam::aws:policy/AmazonEKSClusterPolicy` be attached to the role you set the cluster up with.
The issue I'm trying to solve is that I'm not allowed to let EKS create or manage instances/ELBs in certain security groups (I need to keep everything in private subnets). I created my own policy and tried that, but it also fails because the cluster NEEDS this AWS-managed policy attached.
I _think_ the workaround is going to be attaching an explicit deny policy to counteract the permissions granted by AmazonEKSClusterPolicy. Though it would be nicer to create and maintain my own.
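As an illustration of that explicit-deny idea, a boundary policy might look something like the sketch below. This is only a hypothetical example: `sg-EXAMPLE` is a placeholder for the restricted security group, and I haven't verified that a deny on `ec2:RunInstances` alone covers everything (e.g. ELB creation) that AmazonEKSClusterPolicy allows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLaunchIntoRestrictedSG",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:security-group/sg-EXAMPLE"
    }
  ]
}
```

Since an explicit deny always wins over an allow in IAM policy evaluation, attaching this alongside the required AWS-managed policy should block launches into that group without detaching AmazonEKSClusterPolicy.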