Terraform: remote-exec provisioner fails with 'bash: Permission denied'

Created on 1 Mar 2016 · 15 comments · Source: hashicorp/terraform

I'm trying to provision a RHEL 7 EC2 instance on AWS and kick off a script using the remote-exec provisioner.

After ssh connects successfully, remote-exec fails with a 'bash: Permission denied' error, as shown in the following log extract:

aws_instance.jenkins_slave (remote-exec):   Host: ***.***.***.***
aws_instance.jenkins_slave (remote-exec):   User: ec2-user
aws_instance.jenkins_slave (remote-exec):   Password: false
aws_instance.jenkins_slave (remote-exec):   Private key: true
aws_instance.jenkins_slave (remote-exec):   SSH Agent: false
aws_instance.jenkins_slave (remote-exec): Connected!
aws_instance.jenkins_slave (remote-exec): bash: /tmp/terraform_1298498081.sh: Permission denied
Error applying plan:

1 error(s) occurred:

* Script exited with non-zero exit status: 126

The sample remote-exec I'm using for debugging is very simple:

    provisioner "remote-exec" {
        inline = "whoami > /tmp/whoami.txt"
    }

The issue occurred with both Terraform v0.6.6 and v0.6.12. I'm running Terraform from Windows 7.

The AMI used to create the EC2 instance has been hardened with the recommendations from the Center for Internet Security, more specifically the CIS Red Hat Enterprise Linux 7 Benchmark.

Part of the hardening process sets the noexec option for the /tmp partition, which prevents scripts from being run from /tmp.

Currently, Terraform generates a temporary script from the information in the Terraform file. It then copies the script to /tmp using scp and chmods it to 777 so that everyone can read and execute it.

Finally, it tries to execute the script by calling it directly, e.g. /tmp/terraform_1298498081.sh in the example above.

Trying to run the script fails because of the noexec option of the file system the script resides on.

However, it is possible to read the file, and /bin/sh /tmp/terraform_1298498081.sh works.

So can you please amend the Terraform code to run the script using ssh by calling /bin/sh /tmp/generated_script.sh instead of /tmp/generated_script.sh?

Thanks
Nico

enhancement provisioner/remote-exec

Most helpful comment

I'm quite disappointed to find this issue from 2016 still open and unaddressed, forcing users to use poorly documented workarounds (myself included).

On a side note: I believe this should not be an enhancement request, but an issue/bug. The feature assumes users are complying with a bad security practice, which should be heavily discouraged.

All 15 comments

Currently it is possible to upload a script with a different interpreter, like #!/usr/bin/perl. That wouldn't be possible if Terraform forced the interpreter to be /bin/sh. This would also prevent (for example) using bash on a system where /bin/sh is dash.

Perhaps we could address this instead by a new interpreter argument on the provisioner. If it isn't set then the current behavior would be preserved, requiring that the script be executable. If it _is_ set then Terraform would run the script as if it had a #! line referring to the given interpreter, so the script wouldn't need to be executable unless the interpreter itself required it.

provisioner "remote-exec" {
    inline = "whoami > /tmp/whoami.txt"
    interpreter = "/bin/sh"
}

Do you think that would solve your problem?

It looks like it would solve the issue very nicely indeed.

This issue is causing me big headaches too, any timescales on the suggested solution?

Running into this issue with the CIS benchmarks and RHEL 7.

Packer's shell provisioner can configure the temporary script location using the remote_folder, remote_file and remote_path settings, which lets me steer Packer to a location unaffected by the problematic CIS mount options.

Would it be possible for the Terraform remote-exec provisioner to have similar settings?

I found that this has been answered: script_path is an option in the connection block.
Reference: https://www.terraform.io/docs/provisioners/connection.html

Thanks to Ian Duffy for pointing it out.
https://groups.google.com/forum/#!topic/terraform-tool/3EObUHokszI

Be warned that the script_path is not a directory. It is a file path.

For example, the following will place the script in /root instead of /tmp

provisioner "remote-exec" {
    script = "${path.module}/provision.sh"
    connection {
      script_path = "/root/provision.sh"
    }
  }

When can we have the interpreter feature that @marionettist mentions available in Terraform? This is causing huge nightmares for me: I'm trying to run something meant for bash, but /bin/sh points to dash instead.

interpreter = "/bin/sh"

777 is considered bad practice by some Linux distributions and organisations; on such systems the script cannot be chmodded to 777, so it cannot be run at all. Is it possible to make Terraform use 755 or something more restrictive? That shouldn't affect the ssh user, who still has permission to run the script. I'm happy to make a pull request if this is acceptable.

Hi,

I really feel the script_path option of the connection block is under-documented, and there are not enough links pointing to it.

It really comes in handy for CIS-type images where /tmp is mounted with noexec.

Also, the documentation of the script and scripts arguments for remote-exec could possibly mention that connection supports script_path.

Anyway, for Ubuntu on AWS, the following worked for me (when put inside a null_resource, of course):

provisioner "remote-exec" {
  script = "scripts/redo_bootstrap.bash"

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("mysshkey.pem")}"
    host        = "${aws_instance.myec2.private_ip}"
    script_path = "/home/ubuntu/redo_bootstrap.bash"
  }
}

Cheers,
Shantanu

I'm quite disappointed to find this issue from 2016 still open and unaddressed, forcing users to use poorly documented workarounds (myself included).

On a side note: I believe this should not be an enhancement request, but an issue/bug. The feature assumes users are complying with a bad security practice, which should be heavily discouraged.

This issue still exists. It's especially prominent on machines configured with the 'noexec' option on the /tmp partition in order to comply with CIS baseline standards. It could be solved with a remote_exec_path option that would allow the inline script(s) to be executed from a directory other than /tmp.
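
For illustration only, a sketch of what such an option might look like (remote_exec_path does not exist today; the name and value here are hypothetical):

provisioner "remote-exec" {
  inline           = ["whoami > /tmp/whoami.txt"]
  remote_exec_path = "/home/ec2-user" # hypothetical option: directory to stage the generated script in
}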

https://security.uri.edu/files/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf
REF: 1.1.4
1.1.4 Set noexec option for /tmp Partition (Scored)

The noexec mount option specifies that the filesystem cannot contain executable binaries.
Rationale:
Since the /tmp filesystem is only intended for temporary file storage, set this option to
ensure that users cannot run executable binaries from /tmp.
Audit:
Run the following commands to determine if the system is configured as recommended.
# grep "[[:space:]]/tmp[[:space:]]" /etc/fstab | grep noexec
# mount | grep "[[:space:]]/tmp[[:space:]]" | grep noexec
If either command emits no output then the system is not configured as recommended.
Remediation:
Edit the /etc/fstab file and add noexec to the fourth field (mounting options). See the
fstab(5) manual page for more information.

Uploading the script and specifying the script_path option works; however, it defeats the purpose of inline.

Also running into this on Google Container-Optimized OS. The only mounts that are both executable and writable are /var/lib/docker and /var/lib/cloud. In my scenario, what I really want is for the remote-exec provisioner to run an existing script that I put in /var/lib/cloud/instance via cloud-init, so that I can get it to use the service account credentials.
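
A rough sketch of that pattern, with hypothetical names, assuming the connecting user can write to /var/lib/cloud (it is typically root-owned, so ownership may need adjusting) and that the pre-staged script is at /var/lib/cloud/instance/bootstrap.sh:

resource "null_resource" "run_prestaged_script" {
  connection {
    type        = "ssh"
    user        = "provisioner"               # hypothetical user
    host        = var.cos_instance_ip         # hypothetical variable holding the instance IP
    private_key = file("provisioner_key.pem") # hypothetical key file
    # stage the generated wrapper script on a mount that is both writable and executable
    script_path = "/var/lib/cloud/terraform_provision.sh"
  }

  provisioner "remote-exec" {
    # run the script that cloud-init already placed on the instance
    inline = ["sudo /bin/bash /var/lib/cloud/instance/bootstrap.sh"]
  }
}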

Hi, I am facing a similar issue here. I have a .tpl file where I need to update some variables, then copy that file to a remote machine (an AWS EC2 Linux instance) and execute the script remotely. Below is the code I am using.

data "template_file" "disk_part" {
    count = var.node_count
    template = file(var.disk_part_cmd_file)
    vars = {
        disk = var.disk
        mount_location = var.mount_location
    }
}

resource "null_resource" "provisioner-disk" {
    count = var.node_count
    triggers = {
        private_ip = element(var.private_ip,count.index)
    }
    connection {
        host = element(var.private_ip,count.index)
        type = var.login_type
        user = var.user_name
        password = var.instance_password
        port = var.port
        https = "true"
        insecure = "true"
        timeout = "20m"
    }
    provisioner "file" {
        content = element(data.template_file.disk_part.*.rendered,count.index)
        destination = var.disk_part_file_destination
    }
    provisioner "remote-exec" {
        inline = var.inline_diskpart_command #command here is [chmod +x var.disk_part_file_destination, var.disk_part_file_destination]
    }
}

Error: error executing "/tmp/terraform_1771377308.sh": Process exited with status 126

Is there any workaround available for this?

Terraform version: 0.12.0

Adding the script_path works:

connection {
  host        = element(var.private_ip, count.index)
  type        = var.login_type
  user        = var.user_name
  password    = var.instance_password
  port        = var.port
  https       = "true"
  insecure    = "true"
  timeout     = "20m"
  script_path = "${some_remote_location}" # must be a file, e.g. /home/myuser/terraform.sh
}

This still does not fix "inline" scripts. We would basically need to use a script file even for a one-liner command.
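
For example, a minimal sketch of that workaround (hypothetical file names; it assumes the provisioner sits inside an aws_instance block so self.public_ip resolves, and that the user's home directory is not mounted noexec):

provisioner "remote-exec" {
  # scripts/whoami.sh contains the former one-liner: whoami > /tmp/whoami.txt
  script = "${path.module}/scripts/whoami.sh"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("mysshkey.pem") # hypothetical key file
    host        = self.public_ip
    # a file path (not a directory) on a partition that is not mounted noexec
    script_path = "/home/ec2-user/whoami.sh"
  }
}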

Running into the same issue on my side:

`resource "null_resource" remoteExecProvisioner {

triggers = {
src_hash = "${data.archive_file.init.output_sha}"
}

provisioner "file" {
source = "./test.sh"
destination = "${local.scriptWorkingDir}/test.sh"
}

connection {
host = azurerm_public_ip.vm.ip_address
type = "ssh"
user = local.vm.user_name
password = data.azurerm_key_vault_secret.main.value
agent = "false"
}

provisioner "remote-exec" {
    inline = [
        "chmod +x ${local.scriptWorkingDir}/test.sh",
        "${local.scriptWorkingDir}/test.sh ${data.archive_file.init.output_sha} ${data.archive_file.init.output_sha} >> ${local.scriptWorkingDir}/helloworld.log",
    ]
}

depends_on = [azurerm_virtual_machine.vm, azurerm_network_security_group.nsg]
}`

Error: error executing "/tmp/terraform_1736122966.sh": Process exited with status 2
