In an ideal world, provisioners are inherent to the immutable resource (AMI), and changing a provisioner would mean simply replacing the AMI entirely.
While that is the ideal scenario, a much more common one is the need to provision (or re-provision) one or more instances in place. This is especially helpful during development and iteration, where waiting for an entire cluster to be tainted and recreated is time-consuming and far from an ideal workflow.
Notes:

```
terraform provision aws_instance.web.*
```
I would be happy to flesh out the API a bit more, but I wanted to open a ticket for early discussion before going too far.
/cc @phinze @catsby
Yeah this makes sense. We should support the common workflow of iterating on provisioners, and this feature seems like a relatively simple way to do it. Tagged.
In certain cases it's possible to "fake" this using `null_resource`:
resource "aws_instance" "foo" {
// ...
}
resource "null_resource" "baz" {
connection {
user = "ubuntu"
private_key = "..."
host = "${aws_instance.foo.private_ip}"
}
provisioner "remote-exec" {
// ... etc
}
}
With this in place, one can taint `null_resource.baz` to get that provisioner to re-run on the next `apply` without rebuilding the instance.
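For example, with the configuration above:

```
terraform taint null_resource.baz
terraform apply
```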
It's also possible to add a `triggers` attribute to the `null_resource` so that it will re-run automatically when certain attributes change. At work we are currently using this to run `consul join` on our Consul cluster each time the set of all Consul server IP addresses changes, so rebuilding a single server will automatically add the replacement server to the cluster.
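A rough sketch of that pattern (resource names and the exact `consul join` invocation here are illustrative, not our real config):

```hcl
# Illustrative only: a set of Consul servers.
resource "aws_instance" "consul_server" {
  count = 3
  # ...
}

resource "null_resource" "consul_join" {
  # Re-run whenever the set of server IPs changes.
  triggers {
    consul_server_ips = "${join(",", aws_instance.consul_server.*.private_ip)}"
  }

  connection {
    user = "ubuntu"
    host = "${element(aws_instance.consul_server.*.private_ip, 0)}"
  }

  # Join all known servers into the cluster.
  provisioner "remote-exec" {
    inline = [
      "consul join ${join(" ", aws_instance.consul_server.*.private_ip)}",
    ]
  }
}
```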
As per @apparentlymart's suggestion, here are my use-case details:
I'm using Salt (in a masterless configuration) to provision a node at runtime with a few `remote-exec` provisioners (used to bootstrap Salt, create new directories, and tell Salt to look for the state tree in the local file system). There are also a number of `file` provisioners whose purpose is to create new directories and copy files onto the node, such as one `minion` config and one `top.sls`, as well as a number of `init.sls` files. Salt will then apply all declared states, which include installing nginx and a number of PHP- and database-related packages, as well as managing a number of files, symlinks, etc.

Currently, when I commit a change to my server's configuration or need to install new software, I have to destroy the whole infrastructure and then apply the new plan. It doesn't matter if what I want to change is just a single line in `nginx.conf`, for instance; I still need to destroy the whole thing. It would be great if there were an equivalent to `vagrant provision`.
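For reference, a stripped-down sketch of that kind of setup (paths, file names, and the bootstrap command here are illustrative):

```hcl
resource "aws_instance" "web" {
  # ...

  connection {
    user        = "ubuntu"
    private_key = "..."
  }

  # Copy the minion config and the state tree onto the node
  # (local paths are illustrative).
  provisioner "file" {
    source      = "salt/"
    destination = "/tmp/salt"
  }

  # Bootstrap masterless Salt and apply the state tree.
  provisioner "remote-exec" {
    inline = [
      "curl -L https://bootstrap.saltstack.com | sudo sh",
      "sudo mkdir -p /srv/salt",
      "sudo cp -r /tmp/salt/* /srv/salt/",
      "sudo salt-call --local state.highstate",
    ]
  }
}
```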
@pierrebonbon Wouldn't it make more sense, in this case, to use a master_ful_ Salt setup and the `null_resource` pattern above to run a Salt `highstate` targeted at the changed machine from `triggers`?
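Hypothetically, something like this (assumes a reachable Salt master and a known minion ID convention; all names here are made up):

```hcl
resource "null_resource" "highstate" {
  # Re-run whenever the instance is replaced.
  triggers {
    instance_id = "${aws_instance.web.id}"
  }

  # Run from the Salt master (address variable is hypothetical).
  connection {
    user = "ubuntu"
    host = "${var.salt_master_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      # Target only the changed machine (minion ID convention assumed).
      "sudo salt 'web-${aws_instance.web.id}' state.highstate",
    ]
  }
}
```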
+1 on this. I use Chef to provision all of my VMs, and occasionally the provision step will fail, which ultimately means that Terraform will list the resource as tainted and will need to destroy and re-create it. A huge time-waster.
Absolutely. I understand that resources should be immutable, but providing a solution for debugging and development purposes would be extremely useful.
Hi,

I implemented an `http` data source in this PR. We use this for getting the current version of the Ansible playbooks for a particular microservice, and `null_resource` + `triggers` to provision them. Here is an example:
data "http" "example" {
url = "https://checkpoint-api.hashicorp.com/v1/check/terraform"
# Optional request headers
request_headers {
"Accept" = "application/json"
}
}
resource "aws_instance" "ec2" {
# ...
}
resource "null_resource" "ec2-provisioner" {
triggers {
version = "${data.http.example.body}"
}
provisioner "remote-exec" {
connection {
# ...
}
inline = [
"ansible-playbook -i inventory playbook.yml --extra-vars 'foo=bar'",
]
}
}
So, in the end, Terraform will trigger Ansible only when the metadata at https://checkpoint-api.hashicorp.com/v1/check/terraform has changed.

Or, for a "development mode", you could use the `version = "${timestamp()}"` approach.
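That is, something like:

```hcl
resource "null_resource" "ec2-provisioner" {
  # timestamp() changes on every run, so this provisioner
  # re-runs on every apply -- useful for development only.
  triggers {
    version = "${timestamp()}"
  }

  # ... same provisioner as above
}
```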
@sethvargo Why was this closed?
I still believe a `terraform provision -target <some.target>` to relaunch provisioning of a resource would be a great addition to Terraform to get rid of `null_resource` hacks and their side effects…
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.