Terraform: Azure Official Example fails when augmenting it with a count logic to support VM scaling.

Created on 9 Feb 2017  ·  16 Comments  ·  Source: hashicorp/terraform

Hello,

I took the Azure example and added a `count` parameter to the resources that needed one. When I increase the count from 1 to 2, I get an error creating the second VM: Terraform also tries to recreate VM.0, which fails with a disk error.

Terraform Version

Terraform v0.8.6

Affected Resource(s)

  • azurerm_virtual_machine
  • azurerm_storage_container
  • azurerm_storage_account

Possibly a core issue, since it looks similar to: https://github.com/hashicorp/terraform/issues/3449

Terraform Configuration Files

variable "counts" {}

provider "azurerm" {
  subscription_id = "<removed>"
  client_id       = "<removed>"
  client_secret   = "<removed>"
  tenant_id       = "<removed>"
}

resource "azurerm_resource_group" "test" {
    name = "acctestrg"
    location = "West US"
}

resource "azurerm_virtual_network" "test" {
    name = "acctvn"
    address_space = ["10.0.0.0/16"]
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
    name = "acctsub"
    resource_group_name = "${azurerm_resource_group.test.name}"
    virtual_network_name = "${azurerm_virtual_network.test.name}"
    address_prefix = "10.0.2.0/24"
}

resource "azurerm_network_interface" "test" {
    count = "${var.counts}"
    name = "acctni${count.index}"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"

    ip_configuration {
        name = "testconfiguration1"
        subnet_id = "${azurerm_subnet.test.id}"
        private_ip_address_allocation = "dynamic"
    }
}

resource "azurerm_storage_account" "test" {
    count = "${var.counts}"
    name = "accsai${count.index}"
    resource_group_name = "${azurerm_resource_group.test.name}"
    location = "westus"
    account_type = "Standard_LRS"

    tags {
        environment = "staging"
    }
}

resource "azurerm_storage_container" "test" {
    count = "${var.counts}"
    name = "vhds"
    resource_group_name = "${azurerm_resource_group.test.name}"
    storage_account_name = "${azurerm_storage_account.test.*.name[count.index]}"
    container_access_type = "private"
}

resource "azurerm_virtual_machine" "test" {
    count = "${var.counts}"
    name = "acctvm${count.index}"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"
    network_interface_ids = ["${azurerm_network_interface.test.*.id[count.index]}"]
    vm_size = "Standard_A0"

    storage_image_reference {
        publisher = "Canonical"
        offer = "UbuntuServer"
        sku = "14.04.2-LTS"
        version = "latest"
    }

    storage_os_disk {
        name = "myosdisk1"
        vhd_uri = "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk1.vhd"
        caching = "ReadWrite"
        create_option = "FromImage"
    }

    os_profile {
        computer_name = "hostname${count.index}"
        admin_username = "testadmin"
        admin_password = "Password1234!"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }

    tags {
        environment = "staging"
    }
}

Debug Output

The terraform plan output
https://gist.github.com/djsly/639fc1b039db89fce5bcafe3fc53a165
The terraform apply output
https://gist.github.com/djsly/eaf7e1e3786915dd2a7e285db0f0b7c0

Expected Behavior

We should be getting only new resources while having the existing resources untouched.

Actual Behavior

The VM.0 object gets recreated, and the recreation fails because osdisk0 already exists.

Steps to Reproduce


  1. TF_VAR_counts=1 terraform apply
  2. TF_VAR_counts=2 terraform apply

References

Could be related to (unsure)

  • GH-3449
Labels: bug, provider/azurerm, waiting-response

All 16 comments

Hi @grubernaut and @stack72, how can we tell whether this is an Azure-specific bug or a core issue related to GH-3449?

I would like to start looking at why it is failing, so knowing in advance where to look would be appreciated! :)

Thanks!

Hi @djsly

I believe that if you change your azurerm_virtual_machine resource to look as follows:


    storage_os_disk {
        name = "myosdisk1"
        vhd_uri = "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk[count.index].vhd"
        caching = "ReadWrite"
        create_option = "FromImage"
    }

Then the example will work. Effectively you were trying to create all X machines in the same storage account, so when Terraform tried to create the second machine it reused the same VHD name.

Paul
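(Editor's note: in Terraform 0.8 HCL, a per-VM blob name would presumably need the interpolation `${count.index}` rather than the literal `[count.index]` shown in the snippet above — a sketch of what is likely intended, keeping the per-index storage account and container from the original config:)

```hcl
storage_os_disk {
    name          = "myosdisk${count.index}"
    vhd_uri       = "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk${count.index}.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
}
```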

Thanks @stack72 for the input!

The storage account primary_blob_endpoint is already using [count.index], though; only the VHD name is shared. I wouldn't have expected the same blob name in different storage accounts to be a problem ...

I tried with your setting for the os storage disk section and I still got this.

    storage_os_disk.#:                                                  "1" => "1"
    storage_os_disk.633700910.create_option:                            "FromImage" => ""
    storage_os_disk.633700910.disk_size_gb:                             "0" => "0"
    storage_os_disk.633700910.image_uri:                                "" => ""
    storage_os_disk.633700910.name:                                     "myosdisk1" => ""
    storage_os_disk.633700910.os_type:                                  "" => ""
    storage_os_disk.633700910.vhd_uri:                                  "https://accsai0.blob.core.windows.net/vhds/myosdisk[count.index].vhd" => "" (forces new resource)
    storage_os_disk.~745310598.caching:                                 "" => "ReadWrite"
    storage_os_disk.~745310598.create_option:                           "" => "FromImage"
    storage_os_disk.~745310598.disk_size_gb:                            "" => ""
    storage_os_disk.~745310598.image_uri:                               "" => ""
    storage_os_disk.~745310598.name:                                    "" => "myosdisk1"
    storage_os_disk.~745310598.os_type:                                 "" => ""
    storage_os_disk.~745310598.vhd_uri:                                 "" => "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk[count.index].vhd" (forces new resource)

It really seems like vhd_uri isn't interpolating the variables before comparing?

I updated again the storage_os_disk with a small difference from what you provided

myosdisk[count.index].vhd --> myosdisk${count.index}.vhd

Since your version kept the VHD name identical between VMs (a similar effect to my original setting).

Unfortunately, I am seeing the same behaviour.

    storage_image_reference.#:                                          "1" => "1"
    storage_image_reference.1807630748.offer:                           "UbuntuServer" => "UbuntuServer"
    storage_image_reference.1807630748.publisher:                       "Canonical" => "Canonical"
    storage_image_reference.1807630748.sku:                             "14.04.2-LTS" => "14.04.2-LTS"
    storage_image_reference.1807630748.version:                         "latest" => "latest"
    storage_os_disk.#:                                                  "1" => "1"
    storage_os_disk.2308889283.create_option:                           "FromImage" => ""
    storage_os_disk.2308889283.disk_size_gb:                            "0" => "0"
    storage_os_disk.2308889283.image_uri:                               "" => ""
    storage_os_disk.2308889283.name:                                    "myosdisk0" => ""
    storage_os_disk.2308889283.os_type:                                 "" => ""
    storage_os_disk.2308889283.vhd_uri:                                 "https://accsai0.blob.core.windows.net/vhds/myosdisk0.vhd" => "" (forces new resource)
    storage_os_disk.~2679590933.caching:                                "" => "ReadWrite"
    storage_os_disk.~2679590933.create_option:                          "" => "FromImage"
    storage_os_disk.~2679590933.disk_size_gb:                           "" => ""
    storage_os_disk.~2679590933.image_uri:                              "" => ""
    storage_os_disk.~2679590933.name:                                   "" => "myosdisk0"
    storage_os_disk.~2679590933.os_type:                                "" => ""
    storage_os_disk.~2679590933.vhd_uri:                                "" => "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk${count.index}.vhd" (forces new resource)

I compiled Terraform 0.8-stable after adding an extra log line here:

diff --git a/builtin/providers/azurerm/resource_arm_virtual_machine.go b/builtin/providers/azurerm/resource_arm_virtual_machine.go
index 197a10e..d9f243c 100644
--- a/builtin/providers/azurerm/resource_arm_virtual_machine.go
+++ b/builtin/providers/azurerm/resource_arm_virtual_machine.go
@@ -770,7 +770,7 @@ func resourceArmVirtualMachineStorageOsDiskHash(v interface{}) int {
        m := v.(map[string]interface{})
        buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
        buf.WriteString(fmt.Sprintf("%s-", m["vhd_uri"].(string)))
-
+       log.Printf("[INFO] Os String used for Hash %s", buf.String())
        return hashcode.String(buf.String())
 }

I get this now

2017/02/13 17:01:58 [DEBUG] plugin: terraform: azurerm-provider (internal) 2017/02/13 17:01:58 [INFO] Os String used for Hash myosdisk0-https://accsai0.blob.core.windows.net/vhds/myosdisk0.vhd-
2017/02/13 17:01:58 [DEBUG] plugin: terraform: azurerm-provider (internal) 2017/02/13 17:01:58 [INFO] Os String used for Hash myosdisk0-${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk${count.index}.vhd-

It seems that the hash that gets calculated is not expanding the variables. The "old" value reflects the stored string with variables expanded, while the "new" value the user wants is still un-interpolated :(

Would it be a fair assumption to expand the string before comparing ?
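To make the mismatch concrete, here is a self-contained Go sketch (a hypothetical stand-alone re-implementation, not the provider code itself): `hashString` mirrors Terraform's `helper/hashcode.String`, which is CRC32-based, and `osDiskHash` mimics the two `buf.WriteString` calls from `resourceArmVirtualMachineStorageOsDiskHash` shown in the diff above. The state-side (interpolated) and config-side (raw) strings produce different set keys, so the element appears removed-and-re-added:

```go
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
)

// hashString mirrors Terraform's helper/hashcode.String: a CRC32-based,
// always non-negative hash used to identify elements of a schema.Set.
func hashString(s string) int {
	v := int(crc32.ChecksumIEEE([]byte(s)))
	if v >= 0 {
		return v
	}
	if -v >= 0 {
		return -v
	}
	return 0
}

// osDiskHash mimics resourceArmVirtualMachineStorageOsDiskHash: it hashes
// the "name" and "vhd_uri" fields of the storage_os_disk block.
func osDiskHash(m map[string]interface{}) int {
	var buf bytes.Buffer
	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["vhd_uri"].(string)))
	return hashString(buf.String())
}

func main() {
	// State side: the interpolated URI recorded after the first apply.
	oldH := osDiskHash(map[string]interface{}{
		"name":    "myosdisk0",
		"vhd_uri": "https://accsai0.blob.core.windows.net/vhds/myosdisk0.vhd",
	})
	// Config side during plan: still the raw, un-interpolated string.
	newH := osDiskHash(map[string]interface{}{
		"name":    "myosdisk0",
		"vhd_uri": "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk${count.index}.vhd",
	})
	// Different set keys: Terraform sees the old element removed and a
	// new one added, which "forces new resource".
	fmt.Println(oldH, newH)
}
```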

I dug a little deeper and realized that the following code reads the merged value for the server side (after querying the values from the Azure API), while it reads the config side using the exact (un-interpolated) value.

I also realized that the interpolated variables might not have their final values during the plan phase. I'm not really sure how this could work in the end, unless we mark vhd_uri as computed?

````
func (d *ResourceData) getChange(
    k string,
    oldLevel getSource,
    newLevel getSource) (getResult, getResult) {
    var parts, parts2 []string
    if k != "" {
        parts = strings.Split(k, ".")
        parts2 = strings.Split(k, ".")
    }

    o := d.get(parts, oldLevel)
    n := d.get(parts2, newLevel)
    return o, n
}

func (d *ResourceData) get(addr []string, source getSource) getResult {
    d.once.Do(d.init)

    level := "set"
    flags := source & ^getSourceLevelMask
    exact := flags&getSourceExact != 0
    source = source & getSourceLevelMask
    if source >= getSourceSet {
        level = "set"
    } else if source >= getSourceDiff {
        level = "diff"
    } else if source >= getSourceConfig {
        level = "config"
    } else {
        level = "state"
    }

    var result FieldReadResult
    var err error
    if exact {
        result, err = d.multiReader.ReadFieldExact(addr, level)
    } else {
        result, err = d.multiReader.ReadFieldMerge(addr, level)
    }
    if err != nil {
        panic(err)
    }
    // ... (rest of function elided)
}
````

@stack72, I agree with @djsly's point - the value that resourceArmVirtualMachineStorageOsDiskHash (the Set function for "storage_os_disk") is receiving from Terraform during the planning phase has not been interpolated, e.g. it looks like this:

````
map[string]interface {}{
    "vhd_uri":   "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk.vhd",
    "image_uri": "",
    ...
}
````

I imagine this isn't evaluated because azurerm_storage_account.test doesn't exist yet? Is there something the configuration should be doing differently? Thx.

@tombuildsstuff Do you have any comments on this? Thanks.

@StephenWeatherford apologies for the delay here - I'm going to spend some time on this early next week

@tombuildsstuff Thanks, I appreciate it. I'm sure you're busy.

Hi @djsly

Thanks for raising this issue - apologies for the delayed response here.

I've used the configuration posted above with a build of Terraform from Master (and also Terraform 0.9.6) - and I believe this issue has been fixed. Here's the output of scaling up from 1 -> 2 machines and here's the output scaling down from 2 -> 1 - which shows the second VM being the only thing being added/removed in both cases.

Would it be possible for you to confirm if upgrading to Terraform v0.9.6 solves your issue?

Thanks!

Thanks @tombuildsstuff I will try to find sometime this week to look at this.
Currently we are using the following to bypass the limitation. I will try to upgrade Terraform and see if we still need the lifecycle ignore_changes workaround:

lifecycle { ignore_changes = ["storage_os_disk", "os_profile", "storage_data_disk"] }
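(Editor's note: in context, the workaround sits inside the azurerm_virtual_machine resource - a sketch, with the other arguments as in the config above:)

```hcl
resource "azurerm_virtual_machine" "test" {
    count = "${var.counts}"
    # ... other arguments as in the original configuration ...

    # Suppress the spurious diff on these blocks until the
    # interpolation bug is fixed upstream.
    lifecycle {
        ignore_changes = ["storage_os_disk", "os_profile", "storage_data_disk"]
    }
}
```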

OK, I tested and it is working fine.

One very minor issue though.

The plan command shows 4 resources to be added, BUT the summary says 8:

````
  + azurerm_network_interface.test.1
  + azurerm_storage_account.test.1
  + azurerm_storage_container.test.1
  + azurerm_virtual_machine.test.1

Plan: 8 to add, 0 to change, 0 to destroy.
````

````
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

azurerm_resource_group.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest)
azurerm_virtual_network.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...rosoft.Network/virtualNetworks/slyvnet)
azurerm_storage_account.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Storage/storageAccounts/slysa0)
azurerm_storage_container.test.0: Refreshing state... (ID: vhds)
azurerm_subnet.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...ualNetworks/slyvnet/subnets/slyvnetsub)
azurerm_network_interface.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...osoft.Network/networkInterfaces/slyni0)
azurerm_virtual_machine.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Compute/virtualMachines/slyvm0)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

  + azurerm_network_interface.test.1
    applied_dns_servers.#: ""
    dns_servers.#: ""
    enable_ip_forwarding: "false"
    internal_dns_name_label: ""
    internal_fqdn: ""
    ip_configuration.#: "1"
    ip_configuration.3587266797.load_balancer_backend_address_pools_ids.#: ""
    ip_configuration.3587266797.load_balancer_inbound_nat_rules_ids.#: ""
    ip_configuration.3587266797.name: "testconfiguration1"
    ip_configuration.3587266797.private_ip_address: ""
    ip_configuration.3587266797.private_ip_address_allocation: "dynamic"
    ip_configuration.3587266797.public_ip_address_id: ""
    ip_configuration.3587266797.subnet_id: "/subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest/providers/Microsoft.Network/virtualNetworks/slyvnet/subnets/slyvnetsub"
    location: "eastus2"
    mac_address: ""
    name: "slyni1"
    network_security_group_id: ""
    private_ip_address: ""
    resource_group_name: "slytest"
    tags.%: ""
    virtual_machine_id: ""

  + azurerm_storage_account.test.1
    access_tier: ""
    account_kind: "Storage"
    account_type: "Standard_LRS"
    location: "eastus2"
    name: "slysa1"
    primary_access_key: ""
    primary_blob_endpoint: ""
    primary_file_endpoint: ""
    primary_location: ""
    primary_queue_endpoint: ""
    primary_table_endpoint: ""
    resource_group_name: "slytest"
    secondary_access_key: ""
    secondary_blob_endpoint: ""
    secondary_location: ""
    secondary_queue_endpoint: ""
    secondary_table_endpoint: ""
    tags.%: ""

  + azurerm_storage_container.test.1
    container_access_type: "private"
    name: "vhds"
    properties.%: ""
    resource_group_name: "slytest"
    storage_account_name: "slysa1"

  + azurerm_virtual_machine.test.1
    availability_set_id: ""
    delete_data_disks_on_termination: "false"
    delete_os_disk_on_termination: "false"
    license_type: ""
    location: "eastus2"
    name: "slyvm1"
    network_interface_ids.#: ""
    os_profile.#: "1"
    os_profile.1736693719.admin_password: "Password1234!"
    os_profile.1736693719.admin_username: "testadmin"
    os_profile.1736693719.computer_name: "hostname1"
    os_profile.1736693719.custom_data: ""
    os_profile_linux_config.#: "1"
    os_profile_linux_config.2972667452.disable_password_authentication: "false"
    os_profile_linux_config.2972667452.ssh_keys.#: "0"
    resource_group_name: "slytest"
    storage_image_reference.#: "1"
    storage_image_reference.1807630748.offer: "UbuntuServer"
    storage_image_reference.1807630748.publisher: "Canonical"
    storage_image_reference.1807630748.sku: "14.04.2-LTS"
    storage_image_reference.1807630748.version: "latest"
    storage_os_disk.#: "1"
    storage_os_disk.~4275591411.caching: "ReadWrite"
    storage_os_disk.~4275591411.create_option: "FromImage"
    storage_os_disk.~4275591411.disk_size_gb: ""
    storage_os_disk.~4275591411.image_uri: ""
    storage_os_disk.~4275591411.managed_disk_id: ""
    storage_os_disk.~4275591411.managed_disk_type: ""
    storage_os_disk.~4275591411.name: "myosdisk1"
    storage_os_disk.~4275591411.os_type: ""
    storage_os_disk.~4275591411.vhd_uri: "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk1.vhd"
    tags.%: ""
    vm_size: "Standard_A0"

Plan: 8 to add, 0 to change, 0 to destroy.
````

When running apply, the summary says only 4 resources were added:

````
azurerm_resource_group.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest)
azurerm_virtual_network.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...rosoft.Network/virtualNetworks/slyvnet)
azurerm_storage_account.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Storage/storageAccounts/slysa0)
azurerm_subnet.test: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...ualNetworks/slyvnet/subnets/slyvnetsub)
azurerm_network_interface.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...osoft.Network/networkInterfaces/slyni0)
azurerm_storage_container.test.0: Refreshing state... (ID: vhds)
azurerm_virtual_machine.test.0: Refreshing state... (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Compute/virtualMachines/slyvm0)
azurerm_storage_account.test.1: Creating...
access_tier: "" => ""
account_kind: "" => "Storage"
account_type: "" => "Standard_LRS"
location: "" => "eastus2"
name: "" => "slysa1"
primary_access_key: "" => ""
primary_blob_endpoint: "" => ""
primary_file_endpoint: "" => ""
primary_location: "" => ""
primary_queue_endpoint: "" => ""
primary_table_endpoint: "" => ""
resource_group_name: "" => "slytest"
secondary_access_key: "" => ""
secondary_blob_endpoint: "" => ""
secondary_location: "" => ""
secondary_queue_endpoint: "" => ""
secondary_table_endpoint: "" => ""
tags.%: "" => ""
azurerm_network_interface.test.1: Creating...
applied_dns_servers.#: "" => ""
dns_servers.#: "" => ""
enable_ip_forwarding: "" => "false"
internal_dns_name_label: "" => ""
internal_fqdn: "" => ""
ip_configuration.#: "" => "1"
ip_configuration.3587266797.load_balancer_backend_address_pools_ids.#: "" => ""
ip_configuration.3587266797.load_balancer_inbound_nat_rules_ids.#: "" => ""
ip_configuration.3587266797.name: "" => "testconfiguration1"
ip_configuration.3587266797.private_ip_address: "" => ""
ip_configuration.3587266797.private_ip_address_allocation: "" => "dynamic"
ip_configuration.3587266797.public_ip_address_id: "" => ""
ip_configuration.3587266797.subnet_id: "" => "/subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest/providers/Microsoft.Network/virtualNetworks/slyvnet/subnets/slyvnetsub"
location: "" => "eastus2"
mac_address: "" => ""
name: "" => "slyni1"
network_security_group_id: "" => ""
private_ip_address: "" => ""
resource_group_name: "" => "slytest"
tags.%: "" => ""
virtual_machine_id: "" => ""
azurerm_network_interface.test.1: Creation complete (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...osoft.Network/networkInterfaces/slyni1)
azurerm_storage_account.test.1: Still creating... (10s elapsed)
azurerm_storage_account.test.1: Creation complete (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Storage/storageAccounts/slysa1)
azurerm_storage_container.test.1: Creating...
container_access_type: "" => "private"
name: "" => "vhds"
properties.%: "" => ""
resource_group_name: "" => "slytest"
storage_account_name: "" => "slysa1"
azurerm_storage_container.test.1: Creation complete (ID: vhds)
azurerm_virtual_machine.test.1: Creating...
availability_set_id: "" => ""
delete_data_disks_on_termination: "" => "false"
delete_os_disk_on_termination: "" => "false"
license_type: "" => ""
location: "" => "eastus2"
name: "" => "slyvm1"
network_interface_ids.#: "" => "1"
network_interface_ids.2697671239: "" => "/subscriptions/f0dc697c-673c-4fb0-8852-7f47533b4dd6/resourceGroups/slytest/providers/Microsoft.Network/networkInterfaces/slyni1"
os_profile.#: "" => "1"
os_profile.1736693719.admin_password: "" => "Password1234!"
os_profile.1736693719.admin_username: "" => "testadmin"
os_profile.1736693719.computer_name: "" => "hostname1"
os_profile.1736693719.custom_data: "" => ""
os_profile_linux_config.#: "" => "1"
os_profile_linux_config.2972667452.disable_password_authentication: "" => "false"
os_profile_linux_config.2972667452.ssh_keys.#: "" => "0"
resource_group_name: "" => "slytest"
storage_image_reference.#: "" => "1"
storage_image_reference.1807630748.offer: "" => "UbuntuServer"
storage_image_reference.1807630748.publisher: "" => "Canonical"
storage_image_reference.1807630748.sku: "" => "14.04.2-LTS"
storage_image_reference.1807630748.version: "" => "latest"
storage_os_disk.#: "" => "1"
storage_os_disk.1098508778.caching: "" => "ReadWrite"
storage_os_disk.1098508778.create_option: "" => "FromImage"
storage_os_disk.1098508778.disk_size_gb: "" => ""
storage_os_disk.1098508778.image_uri: "" => ""
storage_os_disk.1098508778.managed_disk_id: "" => ""
storage_os_disk.1098508778.managed_disk_type: "" => ""
storage_os_disk.1098508778.name: "" => "myosdisk1"
storage_os_disk.1098508778.os_type: "" => ""
storage_os_disk.1098508778.vhd_uri: "" => "https://slysa1.blob.core.windows.net/vhds/myosdisk1.vhd"
tags.%: "" => ""
vm_size: "" => "Standard_A0"
azurerm_virtual_machine.test.1: Still creating... (10s elapsed)
azurerm_virtual_machine.test.1: Still creating... (20s elapsed)
azurerm_virtual_machine.test.1: Still creating... (30s elapsed)
azurerm_virtual_machine.test.1: Still creating... (40s elapsed)
azurerm_virtual_machine.test.1: Still creating... (50s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m0s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m10s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m20s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m30s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m40s elapsed)
azurerm_virtual_machine.test.1: Still creating... (1m50s elapsed)
azurerm_virtual_machine.test.1: Still creating... (2m0s elapsed)
azurerm_virtual_machine.test.1: Creation complete (ID: /subscriptions/f0dc697c-673c-4fb0-8852-...crosoft.Compute/virtualMachines/slyvm1)

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

````

Hey @djsly

Apologies for the delayed response on this - I'd missed we hadn't replied when it got migrated across to the new repository!

the plan command shows 4 resources to be added BUT the summary tells us 8

This was a separate bug which has been fixed in Terraform 0.10 - would you be able to upgrade and take a look? :)

Thanks!

Hi Tom. I'm currently on PTO, but if you say that the summary output was fixed, feel free to close this ticket :) as I did confirm that the original issue was fixed.


I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

