Terraform v0.12.19
Affected resource: aws_iam_role_policy_attachment
resource "aws_iam_role" "ClusterAutoscalerRole" {
  assume_role_policy = data.aws_iam_policy_document.ClusterAutoscalerRole_policy.json
  name               = "${var.eks_cluster_name}_ClusterAutoscalerRole"
}

data "aws_iam_policy_document" "ClusterAutoscalerRole_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks_cluster.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:cluster-autoscaler"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.eks_cluster.arn]
      type        = "Federated"
    }
  }
}
resource "aws_iam_role_policy_attachment" "ClusterAutoScaler_polattach" {
  policy_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/${var.eks_cluster_name}_ClusterAutoscaler_policy"
  role       = aws_iam_role.ClusterAutoscalerRole.name
}
resource "aws_iam_policy" "ClusterAutoScaler_policy" {
  name   = "${var.eks_cluster_name}_ClusterAutoScaler_policy"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
POLICY
}
I am consistently getting the following error when attempting to attach a policy to a role. @camlow325 reported a similar issue in #10549 and mentioned it may be an eventual-consistency issue. Is similar retry logic needed here?
Error: Provider produced inconsistent result after apply
When applying changes to
module.eks_control_plane.aws_iam_role_policy_attachment.ClusterAutoScaler_polattach,
provider "registry.terraform.io/-/aws" produced an unexpected new value for
was present, but now absent.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
I was expecting the policy to be attached without issue.
Running terraform apply produces this issue consistently.
I'm glad to provide additional information to help debug this issue.
I experience this issue with terraform v0.12.21 and aws provider version 2.52.0.
The policy actually is attached to the role.
It turns out I hit this because I used uppercase characters in policy_arn, while the actual policy name was all lowercase.
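One way to avoid this class of mismatch entirely (a sketch, assuming the policy is managed in the same configuration, reusing the resource names from the report above) is to reference the policy resource's arn attribute rather than hand-building the ARN string:

```hcl
resource "aws_iam_role_policy_attachment" "ClusterAutoScaler_polattach" {
  # Referencing the resource attribute means Terraform uses the exact ARN
  # returned by AWS, so typos or case differences in the name cannot occur.
  policy_arn = aws_iam_policy.ClusterAutoScaler_policy.arn
  role       = aws_iam_role.ClusterAutoscalerRole.name
}
```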
I got the same error when using an AWS managed policy, but with an ARN that contained the wrong partition. I applied the arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore managed AWS policy in GovCloud, but I should have applied arn:aws-us-gov:iam::aws:policy/AmazonSSMManagedInstanceCore.
Using the correct partition fixed the issue and did not result in this error. The fix may simply be a more specific error message.
AWS provider v2.58.0
(Copied here from #8751; this is a more relevant issue)
It took me a while to find a single-character difference in a policy name, but that was it.
It seems like the IAM API treats managed policy names as case-insensitive, while Terraform looks for a case-sensitive match. The API docs don't mention case sensitivity at all. Either way, Terraform should at least be consistent with reality (a case-insensitive match).
@bondsbw Thank you! I think a good resolution here would be to print a better error out when the ARN partition is incorrect. This seems like it would be a common issue.
Also, a useful trick:
data "aws_region" "current" {}

locals {
  is_govcloud   = length(regexall("us-gov-.*", data.aws_region.current.name)) > 0
  arn_partition = local.is_govcloud ? "aws-us-gov" : "aws"
}
Then for an ARN: "arn:${local.arn_partition}:iam::aws:policy/..."
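A related option (a sketch; it should behave the same as the region-matching trick above) is the provider's aws_partition data source, which reports the partition directly instead of inferring it from the region name:

```hcl
data "aws_partition" "current" {}

locals {
  # Resolves to "aws", "aws-us-gov", or "aws-cn" depending on where
  # the provider is running, so the ARN is correct in any partition.
  ssm_policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```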