Terraform: file provisioner behaves surprisingly when target folder not present

Created on 12 Oct 2017 · 16 comments · Source: hashicorp/terraform

Terraform Version

0.10.7

Terraform Configuration Files

resource "aws_instance" "master" {
  ...
  connection {
    type         = "ssh"
    host         = "${self.public_dns}"
    agent        = false
    user         = "ubuntu"
    private_key  = "${file(var.private_key_file)}"
  }

  provisioner "file" {
    source = "registry/config.yaml"
    destination = "/home/ubuntu/registry/config.yaml"
  }

  tags = {
    Name = "swarm-manager"
  }
}

Expected Behavior

Either

  1. config.yaml file located in directory /home/ubuntu/registry
    or
  2. error message stating that the directory doesn't exist

Actual Behavior

A new file named registry is created in /home/ubuntu instead.

Steps to Reproduce

  1. terraform apply

Important Factoids

Adding the following before the file provisioner works around the issue:

  provisioner "remote-exec" {
    inline = [
       "cd /home/ubuntu",
       "sudo mkdir registry",
       "sudo chown ubuntu registry"
    ]
  }

It seems that remote-exec runs as root, hence the chown command is necessary; otherwise the file provisioner fails with "permission denied".
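A slightly simpler variant of the same workaround, assuming the connection user (ubuntu here) can write to its own home directory, is to create the directory without sudo so the chown step is not needed (a sketch based on the config above, not a tested fix):

```hcl
# Create the missing target directory as the connection user
# before the file provisioner runs, so ownership is already correct.
provisioner "remote-exec" {
  inline = [
    "mkdir -p /home/ubuntu/registry"
  ]
}

provisioner "file" {
  source      = "registry/config.yaml"
  destination = "/home/ubuntu/registry/config.yaml"
}
```

Using `mkdir -p` also makes the command idempotent if provisioning runs more than once.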

Labels: bug, provisioner/file, v0.10

Most helpful comment

Run into this while trying to create a kubeconfig file on a remote VM and worked around like this:

resource "null_resource" "aksvmkubeconfig" {


  #https://github.com/hashicorp/terraform/issues/16330

  provisioner "remote-exec" {
    connection {
      host        = azurerm_public_ip.aksvm.ip_address
      type        = "ssh"
      user        = var.admin_username
      private_key = tls_private_key.sshkey.private_key_pem
    }
    inline = [
      "mkdir /home/ubuntu/.kube/"
    ]
  }
  provisioner "file" {
    connection {
      host        = azurerm_public_ip.aksvm.ip_address
      type        = "ssh"
      user        = var.admin_username
      private_key = tls_private_key.sshkey.private_key_pem
    }
    content     = azurerm_kubernetes_cluster.aks.kube_config_raw
    destination = "/home/ubuntu/.kube/config"
  }
}

All 16 comments

Also ran into this today.

Same here. I would expect either an error or the directory to be automatically created.

Even more fun: if you try to create /home/ubuntu/registry/config.yaml and /home/ubuntu/registry/file2.txt, the content of the latter ends up in /home/ubuntu/registry. I guess that isn't all that surprising.

Just ran into this

Still an issue in 2019...

Wasted some good hours today going through my whole configuration again and again, believing I had done something fundamentally wrong when my config file was being written to the directory name rather than the full path. I eventually gave up, searched the web, and found this issue...

I am quite sad and depressed right now, and my neck hurts from the stress, especially because the docs actually state (though in a different context) that "The foo directory on the remote machine will be created by Terraform". That completely misled me into believing I had made the mistake and that the provisioner should actually be creating the missing config folder.

I tested what is in the documentation. If I understood correctly, I could just use

provisioner "file" {
  source = "foo/"
  destination = "/home/user/bar"
}

with foo containing baz, and that bar would be created on the remote and the contents of baz would be uploaded to /home/user/bar/baz:

If the source is /foo (no trailing slash), and the destination is /tmp, then the contents of /foo on the local machine will be uploaded to /tmp/foo on the remote machine. The foo directory on the remote machine will be created by Terraform.
If the source, however, is /foo/ (a trailing slash is present), and the destination is /tmp, then the contents of /foo will be uploaded directly into /tmp.

What actually happens is that the contents of baz are uploaded to bar, which is even more surprising since baz isn't even referenced in the provisioner.

The workaround of (deleting and) creating bar beforehand works both for uploading the file explicitly and for uploading the folder contents.
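Applied to the trailing-slash form from the quoted docs, that pre-create workaround looks roughly like this (a sketch reusing the foo/bar/baz names from the documentation excerpt):

```hcl
# Ensure the destination directory exists first; the file
# provisioner itself will not create it.
provisioner "remote-exec" {
  inline = [
    "mkdir -p /home/user/bar"
  ]
}

# Trailing slash: upload the *contents* of foo into bar,
# so foo/baz ends up at /home/user/bar/baz.
provisioner "file" {
  source      = "foo/"
  destination = "/home/user/bar"
}
```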

At the very least this bug should be acknowledged and described in the docs to prevent it from causing more headaches for unsuspecting beginners.

Unsuspecting beginner here ran into this today.

Still having to use workarounds to get around this. It would be great to have this behave as expected.

+1
The same problem.

+1

Still an issue on 0.12

wow, hit this too

So instead of safely creating the ~/.ssh dir and putting a single SSH key's pub file into authorized_keys, the easiest workaround is to copy my entire ~/.ssh dir?

resource "null_resource" "controllerpi" {
  connection {
    type = "ssh"    
    user = var.initial_user
    password = var.initial_password
    host = "10.10.10.129"
  }

  provisioner "file" {
    #TODO: this is an awful workaround to https://github.com/hashicorp/terraform/issues/16330
    # source      = "~/.ssh/id_rsa.pub"
    # destination = "/home/pi/.ssh/authorized_keys"
    source      = "~/.ssh"
    destination = "/home/pi/.ssh"
  }
}
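The single-key upload from the commented-out lines can be kept by adding a remote-exec step first, along these lines (a sketch; assumes the pi user can create its own ~/.ssh):

```hcl
# Create ~/.ssh with the usual permissions before uploading,
# instead of copying the whole local ~/.ssh directory.
provisioner "remote-exec" {
  inline = [
    "mkdir -p /home/pi/.ssh",
    "chmod 700 /home/pi/.ssh"
  ]
}

provisioner "file" {
  source      = "~/.ssh/id_rsa.pub"
  destination = "/home/pi/.ssh/authorized_keys"
}
```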

mildly surprised

+1
happened to me as well
