Terraform v0.6.16
Affected resource(s), for example:
resource "aws_launch_configuration" "sandbox" {
  image_id             = "${lookup(var.amis, var.region)}"
  instance_type        = "m4.4xlarge"
  iam_instance_profile = "${aws_iam_instance_profile.sandbox.id}"
  key_name             = "${lookup(var.aws_key_name, var.region)}"
  security_groups      = ["${aws_security_group.sandbox.id}"]
  user_data            = "${template_file.sandbox.rendered}"

  root_block_device {
    volume_type           = "standard"
    volume_size           = 40
    delete_on_termination = true
  }

  lifecycle {
    create_before_destroy = true
  }
}
The launch configuration should be created without error
About 10% of the time the following error takes place:
aws_launch_configuration.sandbox: Error creating launch configuration: ValidationError: You are not authorized to perform this operation.
terraform apply
Using non-nested modules
Hi @toddrosner! Unfortunately this error is coming from the AWS API (and being passed on) rather than from Terraform. I found some other reports of it on the AWS forums (here for example), but no real solution. It seems that this is an issue for Launch configurations only.
Perhaps this is something that we can work around with retries if it continues to be an issue. Since it is a legitimate error message for those without correct permissions, I'd rather not add retries based on it for now as it will delay surfacing the error to those for whom it is a genuine error. Are you able to escalate this with your AWS support in order to investigate why it is happening in your account?
I'll go ahead and close this for now - if further information comes to light, please feel free to reopen!
Thanks @jen20. I had a feeling this was more AWS-centric than Terraform-related, just based on the frequency of the error.
I'll try to communicate this to someone over at AWS to see if there are any other answers.
I'm seeing the same issue too. If I re-run terraform apply, the issue goes away. It started fairly recently, but nothing about my IAM permissions has changed in a long time.
I've also started getting this intermittently. If AWS is returning this odd error, maybe you could use a heuristic to identify that it's yet another AWS eventual-consistency error, not a permissions issue.
same here
I'm also seeing this, and on the second run of terraform apply the resource is created. Wondering if it's due to a dependent resource not having been created the first time round.
In order to solve this problem, you'd need to include the IAM policy applied to the IAM user attempting to create this instance.
That said, I ran into the same problem, and discovered that I needed to permit a few extra IAM actions.
{
  "Sid": "NonResourceBasedTerraformRequiredPermissions",
  "Action": [
    "ec2:MonitorInstances",
    "ec2:UnmonitorInstances",
    "ec2:ModifyInstanceAttribute"
  ],
  "Effect": "Allow",
  "Resource": "*"
}
Unfortunately, these actions are not resource based and do not support conditionals. So... Yay.
The full list of required actions is as follows:
- DescribeImages
- DescribeInstances
- DescribeVolumes
- ModifyInstanceAttribute
- MonitorInstances
- RunInstances
- TerminateInstances
- UnmonitorInstances
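Put together, a single policy statement granting that full list might look like the following sketch (the Sid is illustrative, not from any official policy):

```json
{
  "Sid": "TerraformEc2InstanceLifecycle",
  "Effect": "Allow",
  "Action": [
    "ec2:DescribeImages",
    "ec2:DescribeInstances",
    "ec2:DescribeVolumes",
    "ec2:ModifyInstanceAttribute",
    "ec2:MonitorInstances",
    "ec2:RunInstances",
    "ec2:TerminateInstances",
    "ec2:UnmonitorInstances"
  ],
  "Resource": "*"
}
```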
Thanks @cbarbour
When you updated your IAM user policy to solve this, were you experiencing the problem 100% of the time, or only about 10% of the time as indicated in my opening comment?
@toddrosner
I'd see the error only on instance creation. Though terraform reports an instance creation failure, the instance is started, and subsequent runs succeed.
In my case, it was because I needed to add the Monitor/UnmonitorInstances and ModifyInstanceAttribute actions for the IAM user. Terraform uses these _after_ the instance is created, which is why things seem to work partially even though errors are thrown.
The IAM user which provides credentials for execution of terraform has AdministratorAccess (essentially, blanket permissions) with a Trust Relationship on ec2.amazonaws.com and I still see this issue.
@mrwilby I recommend you enable cloudtrace. Reproduce the error with cloudtrace enabled, and check your cloud trace logs to see what request caused the failure. Remember to check logs from 'us-east-1' as well as your target region; a lot of API actions hit 'us-east-1'
@cbarbour I think you mean CloudTrail, and I have not yet enabled that while using Terraform, but it might provide some valuable insight to this issue.
@toddrosner Yes, CloudTrail is what I meant. I found it incredibly valuable, if a bit verbose. Unfortunately it captures a lot of console activity as well. It helps to use it on a fairly quiet account.
One trick I found valuable was to look specifically for access denied error messages. In my case, the rest of the messages were not particularly useful. curl | unzip | python -mjson.tool helps as well.
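To automate that filtering step, here is a small standalone sketch (in Go, since that is what Terraform itself is written in; the record fields follow the CloudTrail log format, and the sample input below is made up) that keeps only the access-denied events:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// CloudTrail delivers logs as a JSON object with a "Records" array.
// Only the fields needed to spot access-denied events are modeled here.
type trailFile struct {
	Records []struct {
		EventName    string `json:"eventName"`
		EventSource  string `json:"eventSource"`
		ErrorCode    string `json:"errorCode"`
		ErrorMessage string `json:"errorMessage"`
	} `json:"Records"`
}

// deniedEvents returns a short description of each record whose errorCode
// indicates a permission failure ("AccessDenied", "Client.UnauthorizedOperation").
func deniedEvents(raw []byte) ([]string, error) {
	var tf trailFile
	if err := json.Unmarshal(raw, &tf); err != nil {
		return nil, err
	}
	var out []string
	for _, r := range tf.Records {
		if strings.Contains(r.ErrorCode, "AccessDenied") ||
			strings.Contains(r.ErrorCode, "UnauthorizedOperation") {
			out = append(out, fmt.Sprintf("%s %s: %s", r.EventSource, r.EventName, r.ErrorMessage))
		}
	}
	return out, nil
}

func main() {
	// Made-up sample: one successful call, one denied call.
	sample := []byte(`{"Records":[
		{"eventName":"RunInstances","eventSource":"ec2.amazonaws.com"},
		{"eventName":"CreateLaunchConfiguration","eventSource":"autoscaling.amazonaws.com",
		 "errorCode":"AccessDenied","errorMessage":"You are not authorized to perform this operation."}]}`)
	events, err := deniedEvents(sample)
	if err != nil {
		panic(err)
	}
	for _, e := range events {
		fmt.Println(e)
	}
}
```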
We are also running into this problem. We could work around this by adding a local-exec with a sleep to the aws_iam_instance_profile resource, but this doesn't look very nice to me.
What are the odds for retrying it on a failure, like in #2037? If a Terraform user makes a mistake and really didn't set the correct permissions, the error message would just appear some seconds later, right?
FTR: This template causes the error after 1-5 tries. It works when the local-exec part is uncommented.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_iam_role" "test" {
  name               = "node-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "test" {
  name  = "my-profile"
  roles = ["${aws_iam_role.test.id}"]

  #provisioner "local-exec" {
  #  command = "sleep 10"
  #}
}

resource "aws_launch_configuration" "sandbox" {
  image_id             = "ami-814031f2"
  instance_type        = "t2.nano"
  iam_instance_profile = "${aws_iam_instance_profile.test.id}"

  root_block_device {
    volume_type           = "standard"
    volume_size           = 40
    delete_on_termination = true
  }

  lifecycle {
    create_before_destroy = true
  }
}
@svenwltr: Is it possible something else is biting you? It's possible to see permission denied errors even when running with root permissions. E.g. if the AMI is owned by a different account.
@cbarbour No, I don't think so, since it would then fail every time, but it only appears sometimes with the same code.
I used this script to reproduce the error:
#!/bin/bash
cd "$(dirname "$0")"
set -ex

terraform version

while true; do
  terraform apply
  terraform destroy -force
done
I proposed a change for the AWS provider (#8813), and the script has now been running for about an hour without any error.
Hello All and thanks for this post.
I was also experiencing this same problem and can confirm it was intermittent for me too. I was working with https://github.com/terradatum/tack and was able to deploy it fine without any issues--sometimes. But then when I would tear it down and re-deploy from scratch, I intermittently (and sometimes constantly--for hours at a time) got the same permissions error.
My solution below on MacOS / OS X or whatever they're calling it today:
I removed the brew-provided aws cli and installed the aws cli via Python's pip, per the AWS docs, as noted below. I had initially tried to upgrade the aws cli with brew, but I was already on the latest version at the time (1.10.63), and that version had the issue for me.
Remove Brew provided aws cli
$ brew remove --force awscli
Uninstalling awscli... (14,403 files, 112.3M)
Install aws cli NOT using brew
RE: http://docs.aws.amazon.com/cli/latest/userguide/installing.html
$ sudo pip install awscli --ignore-installed six
Note that the aws cli installed via pip was newer (1.10.65 for me at the time).
aws --version
aws-cli/1.10.65 Python/2.7.11 Darwin/15.6.0 botocore/1.4.55
Summary: I'm now able to deploy again via Terraform without the AWS permissions errors, and have validated full deployments several times. Before doing the above I was not able to do any deployments for the last two hours. Hopefully it lasts. Thanks All for your notes above.
@cmcconnell1
Upgrading aws-cli should not have solved your problem. Terraform uses the aws-go-sdk. As far as I know, it never uses the cli tool, nor does it use botocore or any of the other Python libraries.
Interesting indeed @cbarbour, thanks for the follow-up. I'm not sure about the internal dependencies, etc., so I can't comment on that. However, my procedure noted above did resolve our team's issues (disclaimer: this was about a month ago, with the noted versions, etc.).
I ran into a situation which closely matches this report, but was able to resolve it. Sharing in case it helps.
Two things were wrong in my case: (1) in the aws_iam_instance_profile resource, the role has to be specified with .name, not with .role_id or .id; and (2) the Terraform-invoking user needed the iam:PassRole permission in its policy. Fixing both of those, I have reliably-passing Terraform (I think). I am using a data provider to grab the IAM role, rather than creating it with the TF user.
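As a sketch of both fixes (resource and policy names here are illustrative, not from this thread):

```hcl
# Fix 1: reference the instance profile by name rather than by id.
resource "aws_launch_configuration" "example" {
  # ...
  iam_instance_profile = "${aws_iam_instance_profile.example.name}"
}

# Fix 2: the user running Terraform also needs iam:PassRole, e.g.:
resource "aws_iam_policy" "terraform_passrole" {
  name   = "terraform-passrole"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*"
    }
  ]
}
EOF
}
```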
In case other people run into this problem and need something else to try: somehow I got into a state where Terraform had created the instance profile but had not attached any roles to it.
$ aws iam list-instance-profiles
{
  "InstanceProfiles": [
    {
      "InstanceProfileId": "AIPAJY5KUWMDQFRVO3G3K",
      "Roles": [],
      "Path": "/",
      "InstanceProfileName": "sa-demo-ecs-dev-instance-profile",
      "Arn": "arn:aws:iam::461485115270:instance-profile/sa-demo-ecs-dev-instance-profile",
      "CreateDate": "2017-08-30T21:46:12Z"
    }
  ]
}
As you can see the Roles array is empty. terraform apply wasn't fixing the situation (maybe this is a bug?) but as soon as I deleted it:
$ aws iam delete-instance-profile --instance-profile-name sa-demo-ecs-dev-instance-profile
And then re-did the terraform apply it recreated the instance profile and attached the role.
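Rather than deleting the profile, it may also be possible to attach the missing role directly and let the next apply reconcile state (the role name below is hypothetical):

```shell
# Inspect the profile to confirm the Roles array is empty:
aws iam get-instance-profile --instance-profile-name sa-demo-ecs-dev-instance-profile

# Attach the missing role (role name is hypothetical):
aws iam add-role-to-instance-profile \
  --instance-profile-name sa-demo-ecs-dev-instance-profile \
  --role-name my-ecs-dev-role
```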
@ctindel Thanks a lot for that. That was my exact problem: an instance profile with no roles attached. In case it helps you or someone else, how I got into that state was by accidentally deleting the IAM role. This left the instance profile behind, as I create it in a different Terraform state. I don't know if this is a bug per se; it's definitely a gotcha.
I just ran into this problem today using v0.10.7.
My workaround was to destroy everything and then re-apply to get everything to work again.
Why was this ticket closed?
@mingfang As of Terraform v0.10.0, each of the provider plugins that were previously part of Terraform Core were split into their own repositories under the Terraform-Providers Organization.
This issue now belongs to terraform-providers/terraform-provider-aws. I would check there to see if there is an existing related issue. It looks like this issue was closed before the provider split because the maintainer who investigated it couldn't reproduce the issue. Had it been open, @hashibot would have automatically migrated it there.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.