We are trying to implement our infrastructure as a module per service. For each service we want to create security groups, ELBs, ASGs, and an IAM role to run the instances as.
Then we attach policies to the created roles. We have two services, each with its own role, but both roles should be able to access the same S3 bucket and therefore should have the same policy attached, i.e.:
module A:
resource "aws_iam_policy_attachment" "a-policy" {
name = "a-policy"
policy_arn = "arn:aws:iam::aws:policy/S3Policy"
roles = ["${aws_iam_role.a-role.name}"]
}
module B:
resource "aws_iam_policy_attachment" "b-policy" {
name = "b-policy"
policy_arn = "arn:aws:iam::aws:policy/S3Policy"
roles = ["${aws_iam_role.b-role.name}"]
}
This seems to work, but I couldn't help noticing the warning:
"NOTE: The aws_iam_policy_attachment resource is only meant to be used once for each managed policy. All of the users/roles/groups that a single policy is being attached to should be declared by a single aws_iam_policy_attachment resource."
Are we going to run into issues later, or is this warning nothing to worry about?
OK, after further testing it seems that the initial create is fine, but on subsequent applies the policy is only assigned to one of the roles. Is there a workaround? Extracting all of the policy attachments into a single module is not great, since not all services (i.e. roles) will be created in all environments.
The way it's implemented, it really needs to be in only one place.
Within the code, it calls ListEntitiesForPolicy, which returns all the things AWS knows have that policy attached. If that differs from what Terraform thinks should be attached, it generates a changelist and updates the policy attachments.
That's why having two aws_iam_policy_attachment resources for the same policy is a bad thing: the "correct" state is hard to determine. Even the union of the two may not be correct, since another object unreferenced by Terraform might have the policy attached.
In your example of a-policy and b-policy, it effectively creates a race over which state Terraform will consider correct.
I'm fighting this same sort of thing, where I may want roles assigned to instance-profiles in different applications, but the single point of definition makes that difficult (or near-impossible in cross-team cases).
It would be great if the policy attachments were more free-form, but that also makes the correctness-checking code much, much harder. Hopefully there's a rethink on the horizon, but that would be a pretty breaking change.
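To make the single-point-of-definition constraint concrete, here's a minimal sketch (reusing the a-role/b-role names from the example above) of what the exclusive model forces you into:
resource "aws_iam_policy_attachment" "s3-policy" {
  # Exclusive attachment: this list must name EVERY role that should have
  # the policy; anything not listed here gets detached on the next apply.
  name       = "s3-policy"
  policy_arn = "arn:aws:iam::aws:policy/S3Policy"
  roles      = ["${aws_iam_role.a-role.name}", "${aws_iam_role.b-role.name}"]
}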
Yes, I assumed that was what was going on. For our purposes, it would be better if there were an aws_iam_role_policy_attachment which could use ListAttachedRolePolicies in the background. Of course this would conflict with aws_iam_policy_attachment and would require an aws_iam_group_policy_attachment and an aws_iam_user_policy_attachment as well.
I'll have a look at an implementation.
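Roughly, the usage I'm imagining would look something like this (just a sketch; none of these resources exist yet):
resource "aws_iam_role_policy_attachment" "a-s3" {
  # One role, one policy: backed by ListAttachedRolePolicies, so other
  # attachments to the same policy would be left alone.
  role       = "${aws_iam_role.a-role.name}"
  policy_arn = "arn:aws:iam::aws:policy/S3Policy"
}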
Running into this issue as well; I'd consider it a bug. We have multiple Terraform templates for different services, which coexist in one AWS account. Since the two aws_iam_policy_attachment resources were written in different templates, they didn't know about each other, and I never saw that warning.
I was pretty confused when I ran terraform apply for service A, only to find that it caused an outage in service B.
In the PR description I wrote up the new use cases, and I tested the change to make sure it actually alleviates the problem people have been having. If it looks like it's going to be accepted, I'll take the note out of the documentation too.
I refactored my TF modules a bit, and this ended up being a non-issue for me. I had wanted to set the interface between security and dev at the role level, with dev teams able to assemble instance roles out of policies. Then I thought about it and realized that was never going to happen without a lot of back-and-forth, so I just set the interface at "security defines roles and instance profiles, dev uses them from what's available".
So now I set attachments in one place. However, that's purely our own workflow, and if Terraform changes upstream we'll adapt. Just mentioning how we worked around it, since I see the appeal of the existing system from a consistency standpoint: one API call returns all the attachments that _do_ exist, and then Terraform can make that match what _should_ exist. There's a lot of API-call chasing otherwise.
I've come up against this use case too. I want to define a set of default policies within a standard VPC module that I can attach to a number of different roles created both inside and outside the module. There doesn't seem to be a way to do this given the limitation of aws_iam_policy_attachment as it currently stands, which prevents good reuse of managed policies.
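For example (the resource and module names here are invented), what I'd like to be able to write is the module exporting the policy ARN and each consumer attaching it to its own role without clobbering anyone else's attachments, which really needs a non-exclusive, per-role attachment like the one proposed above:
# Inside the VPC module: expose a managed policy for callers to attach.
output "default_policy_arn" {
  value = "${aws_iam_policy.default.arn}"
}

# In a consumer: attach it to a role defined outside the module.
resource "aws_iam_role_policy_attachment" "consumer" {
  role       = "${aws_iam_role.consumer.name}"
  policy_arn = "${module.standard_vpc.default_policy_arn}"
}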
aws_iam_policy_attachment resources. It still needs to be looked at by a merge master.
I used the policy attachment for a policy that was created and managed OUTSIDE of terraform, and it successfully attached a terraform role to the specified policy, but when I destroyed the attachment, it cleared ALL attachments from the original policy (including all the ones that were set outside of terraform!)
Why does it do that on destroy? What the!?
This was a huge surprise for us too. Hand-made roles that used AWS managed policies (AmazonSQSFullAccess, AmazonS3FullAccess) were having those policies detached when the attachment resources were destroyed and causing outages in our platform.
A solution for being able to attach a policy to a role would be great. We're slowly transitioning to using Terraform for more and more things, but in the interim we can't really use Terraform to manage IAM policy attachments because it will remove access to things Terraform does not yet know about.
It's really too bad because Terraform managing IAM is WAY more compelling than the console/api.
I have this issue too.
We have multiple environments (prod, stage, etc.); they all use the same Terraform modules, but with slightly different tfvars files (and different state files).
It was a big surprise when I tried to update our infrastructure and found that each AWS managed policy attachment (e.g. AWSLambdaFullAccess) wanted to drop every role except the one belonging to the environment I was currently applying.
Example:
~ module.rds_iam.aws_iam_policy_attachment.policy_attachment_rds_enhanced_monitoring
roles.3807323091: "prod-role-rds-monitoring" => ""
roles.4128253908: "" => "stage-role-rds-monitoring"
I'm still not sure how to work around this issue in our environment, because these environments don't know about each other and shouldn't...
+1
Running the following command: terraform plan -state=terraform-dev.tfstate -target=module.dev
with a separate state file even, still gives the following output:
~ module.dev.aws_iam_policy_attachment.ec2_policy_attach
roles.#: "4" => "1"
roles.XXX: "aws-staging-ec2-role" => ""
roles.YYY: "aws-dev-ec2-role" => "aws-dev-ec2-role"
roles.ZZZ: "aws-ci-ec2-role" => ""
Which is kind of weird; we have to manually re-attach the policy every time we make an infrastructural change.
Safe to close this now, given that https://github.com/hashicorp/terraform/pull/6858 was merged and released in 0.7.0?
I upgraded to 0.7 just to fix that issue, and it is not resolved. My policy attachment still flip-flops between two roles (dev and qa).
roles.4180610926: "qa-ecs-role" => ""
roles.427033154: "" => "dev-ecs-role"
The aws_iam_policy_attachment resource didn't change in 0.7. Instead, resources were added to provide specific policy attachments, e.g. aws_iam_role_policy_attachment. It's unclear to me whether HashiCorp wants to resolve this issue, in light of the new resources.
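For reference, the per-role resource manages a single role/policy pair and is non-exclusive, so a sketch like the following (the role name is a placeholder and AmazonS3FullAccess is just an example managed policy) leaves other roles' attachments to the same policy untouched:
resource "aws_iam_role_policy_attachment" "dev" {
  # Only this role/policy pair is managed; attachments of the same policy
  # to other roles are not modified on apply or destroy.
  role       = "${aws_iam_role.dev-ecs-role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}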
Hey Friends - as @MikeSchuette pointed out, we have new resources to address this issue, so I am going to close this. Thanks!
@catsby Should the docs for aws_iam_role_policy_attachment use role.id instead of role.name? role.id worked for me in terraform 0.8.4.
Docs say this:
resource "aws_iam_role_policy_attachment" "test-attach" {
role = "${aws_iam_role.role.name}"
policy_arn = "${aws_iam_policy.policy.arn}"
}
I did this:
resource "aws_iam_role_policy_attachment" "test-attach" {
role = "${aws_iam_role.role.id}"
policy_arn = "${aws_iam_policy.policy.arn}"
}
I am seeing a related issue: when specifying aws_iam_policy_attachment in multiple places, tf plan always flags the attachment as needing to update. In the output:
~ aws_iam_policy_attachment.r2-attach
roles.#: "3" => "1"
roles.x: "r1" => ""
roles.y: "r2 => "r2"
roles.z: "r3" => ""
~ aws_iam_policy_attachment.r3-attach
roles.#: "3" => "1"
roles.x: "r1" => ""
roles.y: "r2" => ""
roles.z: "r3" => "r3"
~ aws_iam_policy_attachment.r1-attach
roles.#: "3" => "1"
roles.x: "r1" => "r1"
roles.y: "r2" => ""
roles.z: "r3 => ""
Is this a bug? Please advise.
I do think that specifying this in multiple places is reasonable. I understand that you can give aws_iam_policy_attachment an array, but it makes more sense to group these statements with the roles they represent rather than together.
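Concretely, the kind of grouping I mean would look like this with a per-role attachment (just a sketch, using the r1/r2 role names from the plan above; the shared policy reference is a placeholder):
resource "aws_iam_role_policy_attachment" "r1-attach" {
  role       = "${aws_iam_role.r1.name}"
  policy_arn = "${aws_iam_policy.shared.arn}"
}

resource "aws_iam_role_policy_attachment" "r2-attach" {
  role       = "${aws_iam_role.r2.name}"
  policy_arn = "${aws_iam_policy.shared.arn}"
}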
I am seeing the same thing as @boompig, and I can confirm that in 0.9.7 this still doesn't work well with aws_iam_policy_attachment. Whenever you attach policies to role A, they are removed from role B and vice versa. It's not unreasonable by any means that someone would want two roles with separate names that include separate policy attachments (also with different names). Furthermore, if you use --target to exclude policy B's namespace from policy A's apply/plan, it still seems that somehow Terraform is including it and modifying its policy state when it shouldn't.
@hardboiled I tried your suggestion on 0.9.7 with no luck. Can you confirm that this approach is still working for you in 0.9.x?
@slajax Haven't been able to try it, but when I am debugging these issues, I usually look in the tfstate file to see what fields are being generated for the resource I'm referencing. So for example, in this case you might search through the tfstate file for the specific role you're trying to reference, then check what field within the role is being populated with an ARN.
@hardboiled thanks! I managed to solve it by using 'aws_iam_role_policy_attachment' and it worked properly for me.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.