Hi,
I want to create multiple VMs in the same subnet (say 10 VMs), but I am facing an issue because every VM uses the same `network_interface_ids`:
```
* azurerm_virtual_machine.test.1: compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="NicInUse" Message="Network Interface acctni09 is used by existing VM /subscriptions/xxxx/resourceGroups/acctestrg/providers/Microsoft.Compute/virtualMachines/confignode-01." Details=[]
```
I have used the following config file, `example.tf`:
```hcl
variable "confignode_count" { default = 2 } # Define the number of instances

resource "azurerm_resource_group" "test" {
  name     = "acctestrg"
  location = "West US"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn09"
  address_space       = ["192.168.0.0/16"]
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub09"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "192.168.1.0/24"
}

resource "azurerm_network_interface" "test" {
  name                = "acctni09"
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
  }
}

resource "azurerm_storage_account" "test" {
  name                = "accsa09"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "westus"
  account_type        = "Standard_LRS"

  tags {
    environment = "staging"
  }
}

resource "azurerm_storage_container" "test" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  storage_account_name  = "${azurerm_storage_account.test.name}"
  container_access_type = "private"
}

resource "azurerm_virtual_machine" "test" {
  #name = "acctvm09"
  name                  = "confignode-${format("%02d", count.index + 1)}"
  location              = "West US"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${azurerm_network_interface.test.id}"]
  vm_size               = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "14.04.2-LTS"
    version   = "latest"
  }

  storage_os_disk {
    #name = "myosdisk1"
    name          = "configosdisk-${format("%02d", count.index + 1)}"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/configosdisk-${format("%02d", count.index + 1)}.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "hostnameconfig"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "staging"
  }

  count = "${var.confignode_count}"
}
```
With `default = 1` it works fine. I have also tried creating multiple network interfaces, but I was not able to resolve the issue.
Can anyone suggest something?
@vikash009 I will try this today/tomorrow. But I think you should create the NICs with `count` as well, and reference them in the VM configuration based on `count`, not the way you have it now. Change

```hcl
network_interface_ids = ["${azurerm_network_interface.test.id}"]
```

to

```hcl
network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
```
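Put together, that suggestion could look something like the fragment below. This is a minimal sketch, not a tested config: it reuses the resource names from the original `example.tf` and assumes the NIC name is parameterised with `count.index` so that each instance gets a uniquely named interface.

```hcl
# One NIC per VM instance; count.index makes each NIC name unique.
resource "azurerm_network_interface" "test" {
  count               = "${var.confignode_count}"
  name                = "acctni-${format("%02d", count.index + 1)}"
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                          = "testconfiguration-${format("%02d", count.index + 1)}"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
  }
}

resource "azurerm_virtual_machine" "test" {
  count = "${var.confignode_count}"

  # ... other arguments as in the original config ...

  # element() picks the NIC whose index matches this VM's index,
  # so VM 0 gets NIC 0, VM 1 gets NIC 1, and so on.
  network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
}
```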
I created the NICs with `count` and referenced them in the VM configuration. That works fine, but there is one more issue I am still facing: the VMs are created in random order, which means the IP address assignment does not happen as I expect.

I have also tried creating the VMs with the `-parallelism=1` option:

```
terraform apply -parallelism=1
```

I used the following tf:
```hcl
variable "node_count" { default = 3 } # Define the number of instances

resource "azurerm_network_interface" "terraform-CnetFace" {
  count               = "${var.node_count}"
  name                = "cacctni-${format("%02d", count.index + 1)}"
  location            = "East US 2"
  resource_group_name = "${azurerm_resource_group.terraform-test.name}"

  ip_configuration {
    name                          = "cIpconfig-${format("%02d", count.index + 1)}"
    subnet_id                     = "${azurerm_subnet.terraform-test.id}"
    private_ip_address_allocation = "dynamic"
  }
}

resource "azurerm_virtual_machine" "terraform-test" {
  count               = "${var.node_count}"
  name                = "node-${format("%02d", count.index + 1)}"
  location            = "East US 2"
  resource_group_name = "${azurerm_resource_group.terraform-test.name}"

  #network_interface_ids = ["${azurerm_network_interface.terraform-CnetFace.id}"]
  network_interface_ids = ["${element(azurerm_network_interface.terraform-CnetFace.*.id, count.index)}"]
  vm_size               = "Standard_A0"
  availability_set_id   = "${azurerm_availability_set.terraform-test.id}"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "14.04.2-LTS"
    version   = "latest"
  }
}
```
For example: if we have given the subnet `200.200.1.0/24`, then the IP assignments should be:

```
node-01 : 200.200.1.4
node-02 : 200.200.1.5
node-03 : 200.200.1.6
```

But the IP assignments happen in random order, for example:

```
node-01 : 200.200.1.6
node-02 : 200.200.1.4
node-03 : 200.200.1.5
```
Can anyone suggest how I can resolve this issue, i.e. make the IP assignment sequential according to the node creation order?
Then you need to use static local IP addresses, with the `cidrhost(iprange, hostnum)` interpolation function: the IP range will be your subnet value, `200.200.1.0/24`, and `hostnum` will be derived from `count.index`.
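As a rough sketch of that suggestion, the NIC resource from the config above could switch to static allocation like this. Note the `count.index + 4` offset is an assumption here, chosen to skip the first addresses of the subnet (Azure reserves the first few host addresses), so that node-01 gets `.4` as in the expected output:

```hcl
resource "azurerm_network_interface" "terraform-CnetFace" {
  count               = "${var.node_count}"
  name                = "cacctni-${format("%02d", count.index + 1)}"
  location            = "East US 2"
  resource_group_name = "${azurerm_resource_group.terraform-test.name}"

  ip_configuration {
    name      = "cIpconfig-${format("%02d", count.index + 1)}"
    subnet_id = "${azurerm_subnet.terraform-test.id}"

    # Static allocation with a computed address:
    # cidrhost("200.200.1.0/24", 4) gives 200.200.1.4,
    # cidrhost("200.200.1.0/24", 5) gives 200.200.1.5, etc.
    private_ip_address_allocation = "static"
    private_ip_address            = "${cidrhost("200.200.1.0/24", count.index + 4)}"
  }
}
```

With static addresses, the order in which Terraform happens to create the VMs no longer matters, because each NIC's address is fixed by its index rather than handed out dynamically by Azure.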
Hi @vikash009,
I am going to close this out; we usually try to keep GitHub issues for bug reports. Please feel free to use one of the other methods on our community page for ideas on getting your specific configuration working.
Thanks
Paul
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.