Terraform-provider-aws: remote-exec does not work with two tier example

Created on 24 Aug 2017 · 4 Comments · Source: hashicorp/terraform-provider-aws

Hi, I have tried the Gitter and IRC channels but to no avail; it looks like there is nobody there.
Based on https://github.com/terraform-providers/terraform-provider-aws/blob/4f9196eaf93734078a986decf859f8f995c25383/examples/two-tier/main.tf#L122

Terraform Version

terraform -v
Terraform v0.10.2

Affected Resource(s)

ec2 instance

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret}"
  region     = "${var.region}"
}

resource "aws_key_pair" "auth" {
  key_name   = "${var.key_name}"
  public_key = "${file(var.public_key_path)}"
}

resource "aws_instance" "pivot_gocd_agent" {
  ami           = "ami-cb4b94dd"
  instance_type = "t2.medium"

  # The name of our SSH keypair we created above.
  key_name = "${aws_key_pair.auth.id}"

  connection {
    # The default username for our AMI
    user = "root"

    # The connection will use the local SSH agent for authentication.
  }

  provisioner "remote-exec" {
    scripts = [
      "./bin/provision.sh",
      "./bin/start.sh"
    ]
  }

}

Debug Output

...
aws_instance.pivot_gocd_agent (remote-exec): Connecting to remote host via SSH...
aws_instance.pivot_gocd_agent (remote-exec):   Host: 54.159.165.XXX
aws_instance.pivot_gocd_agent (remote-exec):   User: root
aws_instance.pivot_gocd_agent (remote-exec):   Password: false
aws_instance.pivot_gocd_agent (remote-exec):   Private key: false
aws_instance.pivot_gocd_agent (remote-exec):   SSH Agent: true
aws_instance.pivot_gocd_agent: Still creating... (5m10s elapsed)
aws_instance.pivot_gocd_agent: Still creating... (5m20s elapsed)
aws_instance.pivot_gocd_agent: Still creating... (5m30s elapsed)
Error applying plan:

1 error(s) occurred:

* aws_instance.pivot_gocd_agent: 1 error(s) occurred:

* timeout

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Panic Output

not generated

Expected Behavior

remote-exec scripts should have run

Actual Behavior

ssh timeout

Steps to Reproduce

terraform plan && terraform apply

Important Factoids

EC2 classic

Labels: bug, service/ec2, upstream-terraform

All 4 comments

Seeing the same issue. The strange thing is that if I take a key I generated in the AWS console, extract the public key to a file, and then point Terraform at that public key, the script applies successfully, but when I'm done I can't use that same key to SSH to the instance. If I generate a key myself, Terraform hangs with an SSH timeout just as you described, yet while it's retrying I can run 'ssh [email protected] -i mykey.pem' and get into the box that Terraform can't connect to.

I also have a newbie question: the configuration prompts me for a path to the public key and uses it to create the key pair in AWS. All good. But when it runs the remote-exec, which private key is it going to use? Does it assume the private key sits in the same path as the public key with the same name? I've played with this a lot and can't make sense of it. The provisioner docs were no help; they only document the parameters and give no real information on how the provisioner authenticates.

Looks like the two-tier example needs to be fixed. The workaround is simple: add the path to your AWS key pair's private key (the .pem file) to the "connection" block in "main.tf", as shown below.
NOTE: In my case I also added a variable called "private_key_path" to the variables.tf file, which I can supply from the command line when invoking Terraform. This worked for me.

In main.tf:

connection {
  # The default username for our AMI
  user = "ubuntu"

  # Authenticate with this private key instead of the local SSH agent.
  private_key = "${file(var.private_key_path)}"
}

In variables.tf:

variable "private_key_path" {
  description = "Path to the private key - for ssh login. Example: ~/.ssh/terraform.pem"
}
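
For reference, here is a rough sketch of how the two halves of the key pair fit together end to end. This is not the exact two-tier example code: the resource name "web", the key_name default, and the inline echo command are illustrative, the AMI and instance type are copied from the configuration at the top of this issue, and the "${...}" style matches the Terraform 0.10 syntax used there. The point is that only the public key goes to AWS (via aws_key_pair), while remote-exec authenticates locally with the matching private key.

variable "key_name" {
  description = "Name for the key pair registered in AWS"
  default     = "terraform-two-tier-demo"   # illustrative default
}

variable "public_key_path" {
  description = "Path to the public key uploaded to AWS. Example: ~/.ssh/terraform.pub"
}

variable "private_key_path" {
  description = "Path to the matching private key used for SSH. Example: ~/.ssh/terraform.pem"
}

# Only the public key is sent to AWS; instances launched with this key pair
# will accept the matching private key for SSH logins.
resource "aws_key_pair" "auth" {
  key_name   = "${var.key_name}"
  public_key = "${file(var.public_key_path)}"
}

resource "aws_instance" "web" {
  ami           = "ami-cb4b94dd"   # AMI and type copied from the config above
  instance_type = "t2.medium"
  key_name      = "${aws_key_pair.auth.id}"

  # remote-exec authenticates with the private half of the same key pair.
  connection {
    user        = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}

Assuming the provider block from the configuration above is also present, both paths can then be supplied at apply time, e.g. terraform apply -var 'public_key_path=~/.ssh/terraform.pub' -var 'private_key_path=~/.ssh/terraform.pem'.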

I believe the two-tier example works fine, but it could possibly do with a more verbose comment on the /examples/index page.
In my case I simply didn't have my private key added to my local ssh-agent (ssh-add ~/.ssh/id_rsa).
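
Put differently, the example's agent-based connection block works as long as the matching private key is loaded into the agent before the apply. A minimal sketch of that variant (this goes inside the aws_instance resource and assumes an Ubuntu AMI, so the user is "ubuntu"; the agent argument normally defaults to true on non-Windows systems and is only spelled out here):

connection {
  # Rely on the local ssh-agent for authentication. The key pair's private
  # key must already be loaded (ssh-add ~/.ssh/id_rsa); otherwise the
  # connection hangs until it times out, as in the debug output above.
  user  = "ubuntu"
  agent = true
}

Either this or the explicit private_key from the previous comment works; what matters is that the key Terraform presents matches the public key registered with aws_key_pair.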

Making this work on Terraform Cloud took some tweaks as well:

https://github.com/joemsak/terraform-aws-2-tier-boilerplate

For absolute beginners to all of this, it also helps to know the process for working with the key pair.

From what I could tell, I had to create my key pair manually in the AWS console first:

AWS console -> EC2 -> Key Pair -> Create

Then I downloaded the .pem and, through much trial and error, found out I had to generate the public key from that file:

ssh-keygen -y -f /path/to/your.pem

The private key itself I keep as a sensitive variable for my workspace, as seen in the variables.tf file (a rough sketch follows below).

Edit: to be completely safe, I suppose the public key could be a sensitive variable as well, but I'll invalidate this one since it's just for testing.
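
For what it's worth, a rough sketch of that Terraform Cloud wiring might look like the following. The variable names public_key and private_key, the key_name, and the instance details are assumptions for illustration, not taken from the linked repository, and the sketch uses Terraform 0.12+ syntax since Terraform Cloud implies a recent version. Because the plan and apply run on Terraform Cloud's workers, the .pem on the operator's machine cannot be read with file(), so the key material is passed in as variable values instead.

variable "public_key" {
  description = "Public key text produced by: ssh-keygen -y -f /path/to/your.pem"
}

variable "private_key" {
  description = "Full contents of the .pem downloaded from the AWS console; mark this variable Sensitive in the workspace settings"
}

resource "aws_key_pair" "auth" {
  key_name   = "two-tier-cloud-demo" # illustrative name
  public_key = var.public_key
}

resource "aws_instance" "web" {
  ami           = "ami-cb4b94dd" # AMI from the issue above; substitute a current Ubuntu AMI for your region
  instance_type = "t2.medium"
  key_name      = aws_key_pair.auth.id

  connection {
    type = "ssh"
    host = self.public_ip
    user = "ubuntu"

    # The key contents come straight from the sensitive workspace variable
    # rather than from a local file path.
    private_key = var.private_key
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}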
