Terraform: AWS provider fails with InvalidClientTokenId

Created on 31 Mar 2016 · 10 comments · Source: hashicorp/terraform

https://gist.github.com/bitemyapp/f88fcdff2c1d701f6affdcd55c9691d5

Error refreshing state: 1 error(s) occurred:

* 1 error(s) occurred:

* InvalidClientTokenId: The security token included in the request is invalid.
    status code: 403, request id: caa59160-f75f-11e5-9241-eb1509222747

Following along with the simplest AWS example y'all had in the repository.

Makefile

# AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=456
# AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=456 terraform plan
# AWS_ACCESS_KEY_ID=123 AWS_SECRET_KEY=456 terraform plan
# AWS_ACCESS_KEY=123 AWS_SECRET_KEY=456 terraform plan -var-file="terraform.tfvars"

plan:
    terraform plan -var-file="terraform.tfvars"
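The commented-out attempts above mix several environment variable names. Since this configuration wires credentials through the `aws_access_key`/`aws_secret_key` Terraform variables, plain `AWS_*` variables won't populate them; they need the `TF_VAR_` prefix (or a tfvars entry). Alternatively, the provider can read the standard AWS SDK environment variables directly if the `access_key`/`secret_key` arguments are removed from the provider block. A sketch with hypothetical placeholder credentials:

```shell
# Sketch with hypothetical placeholder credentials -- substitute real values.

# Option 1: feed the Terraform variables directly (this config's provider
# block reads var.aws_access_key / var.aws_secret_key):
TF_VAR_aws_access_key="AKIAEXAMPLEKEY" TF_VAR_aws_secret_key="examplesecret" \
  terraform plan -var-file="terraform.tfvars"

# Option 2: remove access_key/secret_key from the provider block and let the
# AWS provider pick up the standard SDK environment variable names:
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="examplesecret"
terraform plan
```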

variables.tf

variable "aws_region" {
  description = "AWS region to launch servers."
  default = "us-west-2"
}

# Ubuntu 14.04 LTS (x64)
variable "aws_amis" {
  default = {
    us-west-2 = "ami-8ba74eeb"
  }
}

variable "aws_pem_key_file_path" {
  description = "Path to the AWS pem key"
}

variable "aws_key_name" {
  description = "AWS pem key name"
}

variable "aws_access_key" {
  description = "Access key to provider (AWS, openstack, etc)"
}

variable "aws_secret_key" {
  description = "Secret key to provider (AWS, openstack, etc)"
}

main.tf

# Specify the provider and access details
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region = "${var.aws_region}"
}

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

# Create an internet gateway to give our subnet access to the outside world
resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}

# Grant the VPC internet access on its main route table
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.default.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.default.id}"
}

# Create a subnet to launch our instances into
resource "aws_subnet" "default" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
  name        = "terraform_example_elb"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "terraform_example"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}


resource "aws_elb" "web" {
  name = "terraform-example-elb"

  subnets         = ["${aws_subnet.default.id}"]
  security_groups = ["${aws_security_group.elb.id}"]
  instances       = ["${aws_instance.web.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

}

resource "aws_key_pair" "auth" {
  key_name   = "${var.aws_key_name}"
  public_key = "${file(var.aws_pem_key_file_path)}"
}

resource "aws_instance" "web" {
  # The connection block tells our provisioner how to
  # communicate with the resource (instance)
  connection {
    # The default username for our AMI
    user = "ubuntu"

    # The connection will use the local SSH agent for authentication.
  }

  instance_type = "m1.small"

  # Lookup the correct AMI based on the region
  # we specified
  ami = "${lookup(var.aws_amis, var.aws_region)}"

  # The name of our SSH keypair we created above.
  key_name = "${aws_key_pair.auth.id}"

  # Our Security group to allow HTTP and SSH access
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  # We're going to launch into the same subnet as our ELB. In a production
  # environment it's more common to have a separate private subnet for
  # backend instances.
  subnet_id = "${aws_subnet.default.id}"

  # We run a remote provisioner on the instance after creating it.
  # In this case, we just install nginx and start it. By default,
  # this should be on port 80
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",
      "sudo service nginx start"
    ]
  }
}
$ terraform -v
Terraform v0.6.14

What am I doing wrong?

bug provider/aws


All 10 comments

TF_VAR_aws_access_key=123 TF_VAR_aws_secret_key=456 terraform plan -var-file="terraform.tfvars" didn't work with real or fake data.

vars assigned in terraform.tfvars:

aws_pem_key_file_path
aws_access_key
aws_secret_key
aws_key_name

It's a long term (AKIA) access key to my understanding. No session token needed _I think_.
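For reference, a minimal `terraform.tfvars` matching those variables might look like the following (hypothetical placeholder values). Note that `aws_key_pair.public_key` ultimately needs public key material, so the path should point at the `.pub` file rather than the private `.pem`:

```hcl
# Hypothetical placeholder values -- substitute your own.
aws_pem_key_file_path = "/home/me/.ssh/example-key.pub"
aws_access_key        = "AKIAEXAMPLEKEY"
aws_secret_key        = "examplesecret"
aws_key_name          = "example-key"
```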

I mixed up my access and secret keys from two different pairs. D'oh!

Thanks for following up :smile: , happy Terraforming!

@catsby reopen for nicer error message? I'll include other info.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

My permissions come from being in an IAM group attached to the above policy for a role.

I'm running Terraform v0.6.15. I have exported the following keys following an STS call for credentials:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN (Can use AWS_SECURITY_TOKEN as well)

The AWS CLI works, but Terraform complains with the following:

Refreshing Terraform state prior to plan...

Error refreshing state: 1 error(s) occurred:

  • 1 error(s) occurred:
  • InvalidClientTokenId: The security token included in the request is invalid
    status code: 403, request id:

No matter what I do, Terraform fails to use temporary credentials; I've tried AWS_TOKEN as the key as well. If I use a permanent access key and secret access key, it works. I thought this issue had been solved.
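When the AWS CLI works but Terraform does not, one way to narrow this down is to confirm, from the same shell, that the exported temporary credentials are the ones STS actually accepts. A sketch, assuming the AWS CLI is installed:

```shell
# Confirm the temporary credentials exported in this shell are accepted by STS.
aws sts get-caller-identity

# Confirm the session token is actually exported (prints its length, not the value).
echo "${#AWS_SESSION_TOKEN}"
```

The AWS provider documentation also lists a `token` argument on the `provider "aws"` block for passing a session token explicitly, which sidesteps the environment-variable lookup entirely.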

I am also getting this on both Terraform v0.7.11 and v0.7.13.

Same for me as well. I run it from Jenkins, so dynamic keys are needed.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
