Hello everyone,
I have been dealing with a very frustrating issue; I see it has been around for a while, but I cannot find a correct solution.
Here is an example of what I'm doing:
Part of my network module looks like this:
~~~
#create public ip (ip_type: static or dynamic)
resource "azurerm_public_ip" "public_ip" {
  name                         = "${format("%s-public_ip-%02d", var.name, count.index+1)}"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  count                        = "${var.vm_count}"
  public_ip_address_allocation = "${var.public_ip_type}"
}
~~~
Then I have an output that looks like this:
~~~
output "pip" {
  value = "${element(azurerm_public_ip.public_ip.fqdn)}"
}
~~~
And then in my provisioner code I call this (I have a variable for it):
~~~
public_ip = "${module.network.pip}"
~~~
Terraform plan runs fine, but when the provisioner kicks in, this is the output that I am getting:
~~~
module.vm.azurerm_virtual_machine.vm (chef): Connecting to remote host via SSH...
module.vm.azurerm_virtual_machine.vm (chef):   Host: 74D93920-ED26-11E3-AC10-0800200C9A66
module.vm.azurerm_virtual_machine.vm (chef):   User: cq
module.vm.azurerm_virtual_machine.vm (chef):   Password: true
module.vm.azurerm_virtual_machine.vm (chef):   Private key: false
module.vm.azurerm_virtual_machine.vm (chef):   SSH Agent: false
~~~
I would appreciate any help with this.
Thanks a lot!
Hi @aleksap,
Can you show what the provisioner config block looks like, and a little more context in how you're passing the value to the provisioner?
The UUID 74D93920-ED26-11E3-AC10-0800200C9A66 is actually an internal string Terraform uses for an unknown value, and I'm not sure how it's getting to your provisioner as the host.
Hello @jbardin ,
Here is more info on what I'm doing.
@abhijeetgaiha ,
Here is my network module:
~~~
# create virtual network
resource "azurerm_virtual_network" "network" {
  name                = "${format("%s-vnet", var.name)}"
  address_space       = ["${var.address_space}"]
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
}

# create subnets
resource "azurerm_subnet" "subnets" {
  count                = "${length(var.subnet_cidrs)}"
  name                 = "${format("%s-subnet-%02d", var.name, count.index+1)}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_network_name = "${azurerm_virtual_network.network.name}"
  address_prefix       = "${var.subnet_cidrs[count.index]}"
}

#create public ip (ip_type: static or dynamic)
resource "azurerm_public_ip" "public_ip" {
  name                         = "${format("%s-public_ip-%02d", var.name, count.index+1)}"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  #count                       = "${var.vm_count}"
  domain_name_label            = "${var.domain_name}"
  public_ip_address_allocation = "${var.public_ip_type}"
}

# create nic
resource "azurerm_network_interface" "network" {
  #count               = "${length(var.subnet_cidrs)}"
  name                = "${format("%s-network-interface%02d", var.name, count.index+1)}"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  ip_configuration {
    name                          = "${format("%s-public_ip", var.name)}"
    subnet_id                     = "${element(azurerm_subnet.subnets.*.id, count.index)}"
    private_ip_address_allocation = "${var.private_ip_type}"
    public_ip_address_id          = "${azurerm_public_ip.public_ip.id}"
  }

  data "azurerm_public_ip" "datasourceip" {
    name                = "testing"
    resource_group_name = "${var.resource_group_name}"
  }
}
~~~
Here are all the outputs that I have tried in order to get a correct IP/hostname that my Chef provisioner could connect to:
~~~
output "id" {
  value = ["${azurerm_network_interface.network.*.id}"]
}

output "pip" {
  value = "${element(azurerm_public_ip.public_ip.fqdn)}"
}

output "pubip" {
  value = "${element(azurerm_public_ip.public_ip.ip_address)}"
}

output "datasource" {
  value = "${data.azurerm_public_ip.datasourceip.ip_address}"
}
~~~
Output "id" works fine when it's called from a template file.
Outputs "pip" and "pubip" don't give me the ip_address or fqdn; they give me a random value, for example:
~~~
module.vm.azurerm_virtual_machine.vm (chef): Connecting to remote host via SSH...
module.vm.azurerm_virtual_machine.vm (chef):   Host: 74D93920-ED26-11E3-AC10-0800200C9A66
module.vm.azurerm_virtual_machine.vm (chef):   User: cq
module.vm.azurerm_virtual_machine.vm (chef):   Password: true
module.vm.azurerm_virtual_machine.vm (chef):   Private key: false
module.vm.azurerm_virtual_machine.vm (chef):   SSH Agent: false
~~~
When output "datasource" is called I get this error:
~~~
output 'datasource': unknown resource 'data.azurerm_public_ip.datasourceip' referenced in variable data.azurerm_public_ip.datasourceip.ip_address
~~~
In my provisioner I call an output from the network module; I have tried all of the outputs above.
Sorry, I'm not following here.
Your "pip" and "pubip" outputs are invalid, because they are using the element function without a list or an index.
I'm guessing the "datasource" output doesn't work because "datasourceip" is inside the "azurerm_network_interface" "network" resource, but that's also invalid and should fail early on before you even get to the output error.
You also haven't included the actual provisioner config here. It would help if you could provide a full reproduction of the issue to better help you.
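For comparison, corrected versions of those pieces might look like the following. This is a sketch only: it assumes `count` is restored on `azurerm_public_ip.public_ip` (so the splat produces a list that `element()` can index) and that the data source is moved to the top level of the module:

~~~
# Sketch: with count set on the resource, the splat (*) yields a list,
# and element() takes that list plus an index.
output "pip" {
  value = "${element(azurerm_public_ip.public_ip.*.fqdn, 0)}"
}

# Sketch: a data block must live at the top level of the module,
# not nested inside another resource.
data "azurerm_public_ip" "datasourceip" {
  name                = "testing"
  resource_group_name = "${var.resource_group_name}"
}
~~~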
I'm guessing that there are errors in the configuration which we aren't surfacing correctly, since you should not be seeing the special UnknownVariableValue.
@jbardin ,
Thank you for your reply.
I have tried the pip and pubip outputs without element as well; I get the same error.
Here is my connection provisioner:
~~~
connection {
  host        = "${var.public_ip}"
  agent       = "${var.connect_agent}"
  user        = "${var.user_name}"
  #password   = "${var.user_password}"
  private_key = "${file(var.ssh_key)}"
  timeout     = "${var.timeout}"
}
~~~
Here is the config file (values not included are defaults):
~~~
public_ip = "${module.network.pip}"
user_name = "someuser"
#user_password = "SomePassword"
ssh_key   = "/home/user/mykey/.ssh/id_rsa"
~~~
The public_ip variable is the one where I was passing the different outputs from the network module mentioned above.
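If the network module exports the whole list of addresses, the per-instance value is usually selected with `element()` at the point of use. A sketch, where the list output `pips` and the list variable `public_ips` are hypothetical names, not part of the original config:

~~~
# Hypothetical list output in the network module:
output "pips" {
  value = ["${azurerm_public_ip.public_ip.*.ip_address}"]
}

# Pass the whole list through to the vm module...
public_ips = ["${module.network.pips}"]

# ...and pick one address per VM instance inside the vm module:
connection {
  host = "${element(var.public_ips, count.index)}"
}
~~~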
Thanks,
Alex
Hi @jbardin,
I'm facing a similar issue as well.
Use case: create two or more VMs in Azure and run provisioning scripts once each VM is up.
Terraform version: 0.9.11
Network resources:
~~~
resource "azurerm_public_ip" "apubip" {
  count                        = "${var.node_count}"
  name                         = "azure-${var.user_prefix}-pub-ip-${count.index}"
  location                     = "${var.region}"
  resource_group_name          = "${azurerm_resource_group.rg.name}"
  public_ip_address_allocation = "static"
}

resource "azurerm_network_interface" "anetint" {
  name                = "net-intf-${var.user_prefix}-${count.index}"
  count               = "${var.node_count}"
  location            = "${var.region}"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  ip_configuration {
    name                          = "ip-conf-${var.user_prefix}"
    subnet_id                     = "${azurerm_subnet.asubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${element(azurerm_public_ip.apubip.*.id, count.index)}"
  }
}
~~~
VM definition:
~~~
resource "azurerm_virtual_machine" "avm" {
  name                             = "azure-vm-${var.user_prefix}-${count.index}"
  count                            = "${var.node_count}"
  location                         = "${var.region}"
  resource_group_name              = "${azurerm_resource_group.rg.name}"
  network_interface_ids            = ["${element(azurerm_network_interface.anetint.*.id, count.index)}"]
  vm_size                          = "${var.vm_size}"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "${var.vm_image_publisher}"
    offer     = "${var.vm_image_offer}"
    sku       = "${var.vm_image_sku}"
    version   = "${var.vm_image_version}"
  }

  storage_os_disk {
    name          = "osdisk${count.index + 1}"
    vhd_uri       = "${azurerm_storage_account.astgacc.primary_blob_endpoint}${azurerm_storage_container.astgctnr.name}/osdisk${count.index + 1}.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "px-azure-node-${count.index + 1}"
    admin_username = "${var.vm_admin_user}"
    admin_password = "${var.vm_admin_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "${var.user_prefix}"
  }

  provisioner "file" {
    source      = "scripts/post_install.sh"
    destination = "/tmp/post_install.sh"

    connection {
      type     = "ssh"
      user     = "${var.vm_admin_user}"
      password = "${var.vm_admin_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/post_install.sh",
      "/tmp/post_install.sh ${var.ent_uuid}"
    ]
  }
}
~~~
Upon applying the Terraform configs, I'm getting the error below:
~~~
Error applying plan:
2 error(s) occurred:
azurerm_virtual_machine.avm[1]: 1 error(s) occurred:
dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
azurerm_virtual_machine.avm[0]: 1 error(s) occurred:
dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
~~~
This is not a port 22 access problem, as the VM is accessible from a terminal, but somehow the provisioner cannot access the VM.
I have tried both the static and dynamic IP address allocation strategies for the public IP, but both fail with the above error.
I managed to make it work by changing my provisioning section as follows:
~~~
connection {
  type     = "ssh"
  host     = "${element(azurerm_public_ip.apubip.*.ip_address, count.index)}"
  user     = "${var.vm_admin_user}"
  password = "${var.vm_admin_password}"
}

provisioner "file" {
  source      = "scripts/post_install.sh"
  destination = "/tmp/post_install.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/post_install.sh",
    "/tmp/post_install.sh ${var.ent_uuid}"
  ]
}
~~~
You must explicitly specify the host as described above; that worked for me.
Hi Harshal and Jakexx360, could you please share the complete sample code, as I am getting the same connection issue?
Guys,
Here is my code:
~~~
resource "azurerm_public_ip" "makterraformpublicip" {
  name                         = "makPublicIP"
  location                     = "eastus"
  resource_group_name          = "${azurerm_resource_group.makterraformgroup.name}"
  public_ip_address_allocation = "dynamic"
}

resource "azurerm_network_security_group" "makterraformnsg" {
  name                = "makNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.makterraformgroup.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_virtual_machine" "maktest" {
  name                             = "maktest"
  location                         = "eastus"
  resource_group_name              = "${azurerm_resource_group.makterraformgroup.name}"
  network_interface_ids            = ["${azurerm_network_interface.makterraformnic.id}"]
  vm_size                          = "Standard_DS1_v2"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_os_disk {
    name              = "makOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  storage_data_disk {
    name              = "datadisk_new"
    managed_disk_type = "Standard_LRS"
    create_option     = "Empty"
    lun               = 0
    disk_size_gb      = "50"
  }

  os_profile {
    computer_name  = "xxxxxxxxxxxxx"
    admin_username = "xxxxxxxx"
    admin_password = "xxxxxxxxx"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  connection {
    host     = "${azurerm_public_ip.makterraformpublicip.ip_address}"
    agent    = false
    user     = "xxxxxxxxxxxx"
    password = "xxxxxxxxxxxxx"
    timeout  = "180s"
  }

  provisioner "file" {
    source      = "/home/vinay/myexperiments/azure/azure-mount/diskmount.sh"
    destination = "/home/azureuser/temp/diskmount.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/azureuser/temp/diskmount.sh"
    ]
  }
}
~~~
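One thing worth checking in the config above (an observation, not a confirmed diagnosis): the file provisioner does not create missing directories on the remote host, so `/home/azureuser/temp` has to exist before the upload. A remote-exec step placed before the file provisioner can create it:

~~~
# Sketch: provisioners run in order, so create the destination
# directory first, then upload into it.
provisioner "remote-exec" {
  inline = [
    "mkdir -p /home/azureuser/temp"
  ]
}

provisioner "file" {
  source      = "/home/vinay/myexperiments/azure/azure-mount/diskmount.sh"
  destination = "/home/azureuser/temp/diskmount.sh"
}
~~~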
Having the same issue as above with Terraform v0.12.2; I can connect to the VM via SSH manually.
Here is my code:
~~~
variable "location" {
  default = "westus2"
}

provider "azurerm" {
  version = "=1.31.0"
}

resource "azurerm_resource_group" "rg" {
  name     = "${var.resource_prefix}TFResourceGroup"
  location = "${var.location}"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "${var.resource_prefix}TFVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

resource "azurerm_subnet" "subnet" {
  name                 = "${var.resource_prefix}TFSubnet"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.1.0/24"
}

resource "azurerm_public_ip" "jerryip" {
  name                = "${var.resource_prefix}TFPublicIP"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  allocation_method   = "Dynamic"
}

resource "azurerm_network_security_group" "nsg" {
  name                = "${var.resource_prefix}TFNSG"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "nic" {
  name                      = "${var.resource_prefix}NIC"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg.id}"

  ip_configuration {
    name                          = "${var.resource_prefix}NICConfg"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.jerryip.id}"
  }
}

resource "azurerm_virtual_machine" "vm" {
  name                  = "${var.resource_prefix}TFVM"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.nic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "${var.resource_prefix}OsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "${var.resource_prefix}TFVM"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  connection {
    host     = "${azure_public_ip.jerryip.ip_address}"
    agent    = false
    user     = "${var.admin_username}"
    password = "${var.admin_password}"
    timeout  = "180s"
  }

  provisioner "file" {
    source      = "newfile.txt"
    destination = "newfile.txt"
  }

  provisioner "remote-exec" {
    inline = [
      "ls -a",
      "cat newfile.txt"
    ]
  }
}
~~~
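Two things worth checking in the config above (observations, not confirmed diagnoses): the connection block references `azure_public_ip.jerryip`, which looks like a typo for `azurerm_public_ip.jerryip`, and with `allocation_method = "Dynamic"` Azure only assigns the address once the NIC is attached to a running VM, so `ip_address` can still be empty when the provisioner needs it. One common workaround is to read the address back with a data source after the VM exists and run the provisioners from a separate `null_resource`:

~~~
# Sketch only: read the dynamically allocated address back after the
# VM is up, then provision from a null_resource to avoid a cycle.
data "azurerm_public_ip" "jerryip" {
  name                = "${azurerm_public_ip.jerryip.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  depends_on          = ["azurerm_virtual_machine.vm"]
}

resource "null_resource" "provision" {
  connection {
    host     = "${data.azurerm_public_ip.jerryip.ip_address}"
    user     = "${var.admin_username}"
    password = "${var.admin_password}"
  }

  provisioner "remote-exec" {
    inline = ["ls -a"]
  }
}
~~~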
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.