I have a .tf file that provisions an AWS instance, uses a file provisioner to upload a file, and then a remote-exec provisioner to run a simple command. The file provisioner works. The remote-exec provisioner takes a couple of minutes and then fails with an EOF error.
[ec2-user@ip-172-31-22-157 atomic]$ terraform apply
aws_instance.atomic-server: Creating...
ami: "" => "ami-35859254"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "m3.medium"
key_name: "" => "devops1"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.2585437983: "" => "wideopen"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "1"
tags.Name: "" => "atomic-0"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
aws_instance.atomic-server: Provisioning with 'file'...
aws_instance.atomic-server: Provisioning with 'file'...
aws_instance.atomic-server: Provisioning with 'remote-exec'...
aws_instance.atomic-server (remote-exec): Connecting to remote host via SSH...
aws_instance.atomic-server (remote-exec): Host: 52.10.122.243
aws_instance.atomic-server (remote-exec): User: fedora
aws_instance.atomic-server (remote-exec): Password: false
aws_instance.atomic-server (remote-exec): Private key: true
aws_instance.atomic-server (remote-exec): SSH Agent: false
aws_instance.atomic-server (remote-exec): Connected!
Error applying plan:
1 error(s) occurred:
Failed to upload script: Error reading script: EOF
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Below is the relevant portion of my .tf file
provisioner "file" {
source = "fedora_bashrc"
destination = "/home/fedora/fedora_bashrc"
}
provisioner "remote-exec" {
inline = [
"cat /home/fedora/fedora_bashrc >> /home/fedora/.bashrc",
]
}
It is important to note that the target machine is Fedora Atomic.
I see from journalctl that /var/log/lastlog does not exist on this Atomic OS.
Maybe this is a contributor to this bug?
Nov 23 18:09:58 ip-172-31-15-35.us-west-2.compute.internal sshd[1366]: pam_lastlog(sshd:session): unable to open /var/log/lastlog: No such file or directory
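If the missing lastlog file does turn out to matter, one way to test that hypothesis is to create the file at boot via cloud-init user_data instead of relying on the (failing) provisioner. This is only a sketch under assumptions: it assumes the Atomic AMI runs cloud-init and that /var is writable (it normally is on Atomic), and user_data may still be running when the provisioners first connect.

resource "aws_instance" "atomic-server" {
  ami           = "ami-35859254"
  instance_type = "m3.medium"
  key_name      = "devops1"

  # create the file that sshd's pam_lastlog module complains about
  user_data = <<EOF
#!/bin/bash
touch /var/log/lastlog
chmod 644 /var/log/lastlog
EOF

  # ... file and remote-exec provisioners as in the excerpt above ...
}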
I am also getting an error while running commands with "remote-exec" in the provisioner section.
Error:
aws_instance.test: Creating...
ami: "" => "ami-97d490fd"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t1.micro"
key_name: "" => "newkvp"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "<computed>"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "1"
tags.Name: "" => "test_server"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "1"
vpc_security_group_ids.1458656584: "" => "sg-4d5b1f2a"
aws_instance.test: Provisioning with 'file'...
aws_instance.test: Provisioning with 'remote-exec'...
aws_instance.test (remote-exec): Connecting to remote host via SSH...
aws_instance.test (remote-exec): Host: 54.86.117.120
aws_instance.test (remote-exec): User: root
aws_instance.test (remote-exec): Password: false
aws_instance.test (remote-exec): Private key: true
aws_instance.test (remote-exec): SSH Agent: false
aws_instance.test (remote-exec): Connected!
Error applying plan:
1 error(s) occurred:
Failed to upload script: Error reading script: EOF
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Script:
provider "aws" {
access_key = "aws_access_key"
secret_key = "aws_secret_key"
region = "us-east-1"
}
resource "aws_instance" "test" {
ami = "ami-97d490fd"
instance_type = "t1.micro"
vpc_security_group_ids = ["sg-4d5b1f2a"]
key_name = "newkvp"
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
connection {
agent = false
user = "root"
key_file = "newkvp.pem"
}
}
provisioner "remote-exec" {
inline = [
"echo 1"
]
connection {
agent = false
user = "root"
key_file = "newkvp.pem"
}
}
tags {
Name = "test_server"
}
}
The AMI I am using was built with Packer from CentOS 6.5, and I have also tried this code with different versions of Terraform (v0.6.6, v0.6.4, v0.6.0).
Please suggest a solution, as I am unable to execute a single command on the remote resource with Terraform.
Hi,
I have seen this before too. In my case, I had to use "ec2-user" to connect to the AWS instance.
Possibly you also have to use another user.
Looking at your output, I can see:
aws_instance.test (remote-exec): User: root
Adding:
connection {
  user = "ec2-user"
}
solved my issue.
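For the config in the question, the right login user also depends on the AMI: a Packer-built CentOS 6.5 image usually accepts "centos" or "root" depending on how it was created, while Amazon Linux uses "ec2-user". A minimal sketch of the connection block (the username here is an assumption; check what your AMI actually allows):

connection {
  agent    = false
  user     = "centos"      # assumption: replace with your AMI's actual login user
  key_file = "newkvp.pem"  # attribute name as in the original config (Terraform 0.6.x)
}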
This error can be caused by any number of things, including a script_path that points to a directory and not a file. If you run into this error, try running:
env TF_LOG=TRACE terraform apply plan.tfplan
This will print out detailed data on what Terraform is actually trying to do, and which commands are failing. This may reveal the underlying problem with a particular plan.
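For the script_path case specifically, make sure it names a writable file rather than a directory; Terraform uploads its generated script to that exact path before executing it. A minimal sketch of a connection block with an explicit script path (user, key file, and path are placeholder values, not taken from the issue):

connection {
  user        = "ec2-user"
  private_key = "${file("terraformKey.pem")}"
  # must point at a file the login user can write, not at a directory like /home/ec2-user
  script_path = "/home/ec2-user/terraform-provision.sh"
}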
I can reproduce this when provisioning to minimal Linux hosts that have SSH available but do not have an SCP server running. Do the inline items in the remote-exec block get munged into a local file and then uploaded to the remote host over SCP? Or are they executed one at a time as separate SSH calls? Based on the behavior, I'm guessing it's the former.
@travis-bear Good debugging! We do upload a script over SCP. Would it be possible for you to put together some repro steps? That would go a long way towards helping us dig in here.
REPRO STEPS
resource "aws_instance" "example" {
ami = "ami-fd585ecd"
instance_type = "t2.micro"
subnet_id = "subnet-6f082118"
key_name = "terraformKey"
associate_public_ip_address = false
connection {
host = "${self.private_ip}"
user = "root"
private_key = "..."
}
provisioner "remote-exec" {
inline = [
"touch /tmp/abc",
"touch /tmp/xyz"
]
}
}
One possible fix would be to not copy the lines in the remote-exec block up to the server via SCP, but rather append them via SSH. For example:

# copy the lines from the remote-exec block up to the server, one ssh call per line
TEMP_FILE=/tmp/exec.sh
for line in "${remote_exec_lines[@]}"; do
  ssh user@host "echo '$line' >> $TEMP_FILE"
done
# now run them
ssh user@host "sh $TEMP_FILE"
I'm running into this same issue as well using the Amazon Linux AMI.
Just now I quickly hacked together a fix that iterates over the inline array and executes each string as a command over SSH.
Let me know if that's an acceptable way to do this. If so, I might be able to write a proper fix somewhere next week.
Had the same issue. For me it was a broken "script_path" in "connection" (I had assumed script_path was a directory).
aws_instance.test01 (remote-exec): Connected!
2016/10/27 21:56:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2016/10/27 21:56:52 Retryable error: Failed to upload script: scp: /home/centos: Is a directory
2016/10/27 21:56:54 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/10/27 21:56:54 [DEBUG] vertex provisioner.remote-exec (close), waiting for: aws_instance.test01
2016/10/27 21:56:55 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2016/10/27 21:56:55 Retryable error: Failed to upload script: Error reading script: EOF
Hi folks! The provisioners have changed quite a bit since this issue was opened, so I am going to close this. If anyone is experiencing an issue with the remote-exec provisioner, please open a new github issue and fill out the issue template so we can look into it. Thank you!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.