Terraform v0.12.8
main.tf
resource "vsphere_virtual_machine" "vm" {
count = var.number_of_vms
name = "${var.vsphere_vm_name}-${var.env}00${count.index + 1}"
resource_pool_id = data.vsphere_resource_pool.pool.id
datastore_id = data.vsphere_datastore.datastore.id
folder = var.vsphere_folder
num_cpus = var.num_cpus
memory = var.memory
guest_id = var.guest_id
cpu_hot_add_enabled = "true"
memory_hot_add_enabled = "true"
scsi_type = "lsilogic"
network_interface {
network_id = data.vsphere_network.network.id
}
disk {
label = "disk0"
size = "10"
thin_provisioned = "false"
}
clone {
template_uuid = data.vsphere_virtual_machine.template.id
customize {
dns_server_list = var.dns_servers
dns_suffix_list = [var.vsphere_domain]
ipv4_gateway = var.default_gateway
linux_options {
host_name = "${var.vsphere_hostname}-${var.env}00${count.index + 1}"
domain = var.vsphere_domain
}
network_interface {
ipv4_address = var.host_ips[count.index]
ipv4_netmask = 24
}
}
}
provisioner "salt-masterless" {
skip_bootstrap = true
local_state_tree = var.salt_dir
remote_state_tree = "/srv/salt"
no_exit_on_failure = true
connection {
host = self.default_ip_address
type = "ssh"
user = data.vault_generic_secret.vsphere_build_user.data["username"]
password = data.vault_generic_secret.vsphere_build_user.data["password"]
}
}
}
variables.tf
variable "vsphere_resource_pool" {
default = "Resources"
}
variable "salt_dir" {
type = string
default = "./salt"
}
variable "vsphere_network" {
default = "network"
}
variable "num_cpus" {
default = "2"
}
variable "memory" {
default = "4096"
}
variable "vsphere_datacenter" {
default = "Company"
}
variable "vsphere_domain" {
default = "xxx.yyy.co.uk"
}
variable "vsphere_vm_name" {
default = "test"
}
variable "vsphere_datastore" {
default = "Storage"
}
variable "guest_id" {
default = "centos7Guest"
}
variable "vsphere_template" {
default = "centos7base"
}
variable "vsphere_hostname" {
default = "test"
}
variable "host_ips" {
type = list(string)
default = ["10.1.2.65"]
}
variable "default_gateway" {
default = "10.1.2.254"
}
variable "dns_servers" {
type = list(string)
default = ["10.1.2.10","10.1.2.11"]
}
variable "number_of_vms" {
default = 1
}
variable "env" {
default = "t"
}
variable "vsphere_folder" {
default = "Devops"
}
data.tf
data "vsphere_datacenter" "dc" {
name = var.vsphere_datacenter
}
data "vsphere_virtual_machine" "template" {
name = var.vsphere_template
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_datastore" "datastore" {
name = var.vsphere_datastore
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_resource_pool" "pool" {
name = var.vsphere_resource_pool
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_network" "network" {
name = var.vsphere_network
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vault_generic_secret" "vsphere_user" {
path = "secret/vsphere/user_credentials/vsphereadmin"
}
data "vault_generic_secret" "vsphere_build_user" {
path = "secret/vsphere/user_credentials/vspherebuild"
}
provider.tf
provider "vsphere" {
user = data.vault_generic_secret.vsphere_user.data["username"]
password = data.vault_generic_secret.vsphere_user.data["password"]
vsphere_server = "xxxredactedxxx"
# If you have a self-signed cert
allow_unverified_ssl = true
}
provider "vault" {
address = "xxxredactedxxx"
token = "xxxredactedxxx"
}
https://gist.github.com/kmoorfield/cf5380fd96f50364e6d7d4540c9b218f
No crash log
Argument "local_state_tree" within "salt-masterless" provisioner should have been expanded to the string default
Argument "local_state_tree" within "salt-masterless" provisioner" errored on trying to expand the variable when doing a terraform plan.
Seems like it's looking for a hash of the directory name not the actual directory name specified within the default value.
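A small sketch of the mismatch, for illustration (the explanation in the comments is my interpretation; the placeholder string is taken from the errors reported further down this thread):

provisioner "salt-masterless" {
  # Inside the vsphere_virtual_machine resource above.
  # Expected: var.salt_dir evaluates to its default, "./salt"
  # Observed: the provisioner instead receives Terraform's unknown-value
  #           placeholder (74D93920-ED26-11E3-AC10-0800200C9A66), which
  #           suggests the expression is never evaluated at all
  local_state_tree = var.salt_dir
}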
Steps to reproduce:
terraform init
terraform validate
No - using the standard terraform binary
Not that I have found
I have a similar issue when creating a Terraform module with an ec2 instance and the salt-masterless and remote-exec provisioners. I'm trying to pass the file paths of some configuration files to the module via variables, but Terraform fails to find the files and produces the errors below. I tried passing a hard-coded absolute path and still got the same error. It only worked when I hardcoded the path directly inside the provisioner.
Error: local_state_tree: path '74D93920-ED26-11E3-AC10-0800200C9A66' is invalid: stat 74D93920-ED26-11E3-AC10-0800200C9A66: no such file or directory
Error: local_pillar_roots: path '74D93920-ED26-11E3-AC10-0800200C9A66' is invalid: stat 74D93920-ED26-11E3-AC10-0800200C9A66: no such file or directory
Error: minion_config_file: path '74D93920-ED26-11E3-AC10-0800200C9A66' is invalid: stat 74D93920-ED26-11E3-AC10-0800200C9A66: no such file or directory
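For reference, a minimal sketch of the workaround that worked, hardcoding the paths inside the provisioner block instead of referencing variables (the paths here are illustrative):

provisioner "salt-masterless" {
  # Hardcoded literals work; var.* references fail to evaluate here
  local_state_tree   = "./salt"   # instead of var.salt_dir or a module input
  local_pillar_roots = "./pillar" # illustrative path
  minion_config_file = "./minion" # illustrative path
  remote_state_tree  = "/srv/salt"
}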
I've encountered this same issue with Terraform 0.12.21 and have a workaround where I hardcode the input to local_state_tree and local_pillar_roots. This implies to me that the salt-masterless provisioner in TF 0.12.21 is not evaluating the input as an expression the way it did in TF 0.11.14.
I have the same problem as @Miszel66 in Terraform 0.12.24. Would be nice to see this fixed.
Yeah same kind of error:
Error: local_state_tree: path '74D93920-ED26-11E3-AC10-0800200C9A66' is invalid: stat 74D93920-ED26-11E3-AC10-0800200C9A66: no such file or directory
vagrant@libvirt:~/setup/libvirt$ terraform version
Terraform v0.12.28
+ provider.external v1.2.0
+ provider.libvirt (unversioned)
+ provider.template v2.1.2
Still the same issue on Terraform 0.13.0.
I'm closing this issue because we announced tool-specific (vendor or 3rd-party) provisioner deprecation in mid-September 2020. Additionally, we added a deprecation notice for tool-specific provisioners in 0.13.4. On a practical level this means we will no longer be reviewing or merging PRs for these built-in plugins.
The discuss post linked above explains this in more depth, but the basic reason we're making this change is that these vendor provisioners have been extremely challenging for us to maintain, and are a weak spot in the terraform user experience. People reach for them not realizing the bugs and UX limitations, and they're areas that are difficult for us to maintain because of the huge surface area of integrating with a bunch of different tools (Puppet, Chef, Salt, etc) that each require deep domain knowledge to do right. For example, testing each of these against all the versions of those tools, on multiple platforms, is prohibitive, and so we don't - but users have a reasonable expectation that everything in the Terraform Core codebase is well tested.
For the time being, the best option if you want to see this fixed is to build a standalone provisioner, fix this in it, and distribute it as a plugin binary, similar to how the ansible provisioner is distributed.
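For instance, Terraform discovers a third-party provisioner binary named terraform-provisioner-<name> from the local plugins directory (e.g. ~/.terraform.d/plugins), so a standalone build could be used from configuration like this (the plugin name below is purely illustrative):

provisioner "saltmasterless" {
  # Hypothetical standalone plugin, shipped and versioned
  # independently of Terraform Core
  local_state_tree  = "./salt"
  remote_state_tree = "/srv/salt"
}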
I'm aware of the limitations of this approach, but it's the best option compared to coupling provisioner development to the Terraform Core release lifecycle. We believe the benefit to users of having provisioner development decoupled from core exceeds the convenience of having these provisioners built into core. We want to provide a better user experience in the future, and our hope here is that the ability to improve, fix, and repair provisioners without us blocking their development, much like providers, will help make a strong case for what's next.
I think it’s also important to highlight that we have no plans to remove the generic provisioners or the pluggable functionality during Terraform's 1.0 lifecycle.
I appreciate your input here to improve Terraform, and am always happy to talk. Please feel free to reach out to me or Petros Kolyvas if you would like to talk more about this change.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.