v1.2.2, running in a Docker container built from golang:1.8.3-alpine3.6.
The VM provisions and kicks off our provisioners. The OS disk being created uses osdisk as its name. This prevents multiple builds from running at the same time within the same build_resource_group_name.
Azure builder documentation states:
OS Disk Name: a random 15-character name prefixed with pkros.
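So for a managed-image build I would expect the generated deployment template to end up with a disk entry shaped like this (the name below is a made-up example of the documented pkros form, not output from a real run):
"osDisk": {
  "name": "pkrospv84vwsyyf",
  ...
}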
From a quick glance at the code, I don't believe config.tmpOSDiskName is being used anywhere, which is what would randomize the disk name.
{
"variables": {
"azr_client_id": "cf...",
"azr_client_secret": "{{env `AZR_CLIENT_SECRET`}}",
"azr_tenant_id": "42...",
"azr_subscription_id": "35...",
"azr_spn_object_id": "764...",
"azr_img_publisher": "MicrosoftWindowsServer",
"azr_img_offer": "WindowsServer",
"azr_img_sku": "2016-Datacenter-smalldisk",
"azr_tmp_rg": "RGP-CDMGMT-PACKER-TEMP-WESTUS2",
"azr_img_name": "img-cdmgmt-windows-2016-datacenter-westus2",
"azr_img_rg": "RGP-CDMGMT-IMAGES-WESTUS2",
"azr_location": "westus2",
"azr_cen": "Public",
"azr_disk_size": "64",
"azr_tmp_build_name": "w16",
"azr_net": "vnt-westus2-cdmgmt-pub",
"azr_net_rg": "RGP-CDMGMT-NET-WESTUS2",
"azr_vm_subnet": "sbt-westus2-cdmgmt-packer",
"azr_vm_size": "Standard_A2_v2",
"azr_username": "packer"
},
"builders": [
{
"type": "azure-arm",
"client_id": "{{user `azr_client_id`}}",
"client_secret": "{{user `azr_client_secret`}}",
"tenant_id": "{{user `azr_tenant_id`}}",
"subscription_id": "{{user `azr_subscription_id`}}",
"image_publisher": "{{user `azr_img_publisher`}}",
"image_offer": "{{user `azr_img_offer`}}",
"image_sku": "{{user `azr_img_sku`}}",
"build_resource_group_name": "{{user `azr_tmp_rg`}}",
"managed_image_name": "{{user `azr_img_name`}}",
"managed_image_resource_group_name": "{{user `azr_img_rg`}}",
"cloud_environment_name": "{{user `azr_cen`}}",
"object_id": "{{user `azr_spn_object_id`}}",
"os_disk_size_gb": "{{user `azr_disk_size`}}",
"os_type": "Windows",
"temp_compute_name": "{{user `azr_tmp_build_name`}}-{{timestamp}}",
"private_virtual_network_with_public_ip": "true",
"virtual_network_name": "{{user `azr_net`}}",
"virtual_network_resource_group_name": "{{user `azr_net_rg`}}",
"virtual_network_subnet_name": "{{user `azr_vm_subnet`}}",
"vm_size": "{{user `azr_vm_size`}}",
"communicator": "winrm",
"winrm_username": "{{user `azr_username`}}",
"winrm_insecure": true,
"winrm_use_ssl": true,
"winrm_timeout": "30m",
"azure_tags": {
"originator": "cdmgmt-packer"
}
}
],
"provisioners": [
...
]
}
Resulting ARM template:
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
"contentVersion": "1.0.0.0",
"parameters": {
"adminPassword": {
"type": "String"
},
"adminUsername": {
"type": "String"
},
"dnsNameForPublicIP": {
"type": "String"
},
"nicName": {
"type": "String"
},
"osDiskName": {
"type": "String"
},
"publicIPAddressName": {
"type": "String"
},
"storageAccountBlobEndpoint": {
"type": "String"
},
"subnetName": {
"type": "String"
},
"virtualNetworkName": {
"type": "String"
},
"vmName": {
"type": "String"
},
"vmSize": {
"type": "String"
}
},
"variables": {
"addressPrefix": "10.0.0.0/16",
"apiVersion": "2017-03-30",
"location": "[resourceGroup().location]",
"managedDiskApiVersion": "2017-03-30",
"networkInterfacesApiVersion": "2017-04-01",
"publicIPAddressApiVersion": "2017-04-01",
"publicIPAddressType": "Dynamic",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"subnetAddressPrefix": "10.0.0.0/24",
"subnetName": "sbt-westus2-cdmgmt-packer",
"subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
"virtualNetworkName": "vnt-westus2-cdmgmt-pub",
"virtualNetworkResourceGroup": "RGP-CDMGMT-NET-WESTUS2",
"virtualNetworksApiVersion": "2017-04-01",
"vmStorageAccountContainerName": "images",
"vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
},
"resources": [
{
"type": "Microsoft.Network/publicIPAddresses",
"name": "[parameters('publicIPAddressName')]",
"apiVersion": "[variables('publicIPAddressApiVersion')]",
"location": "[variables('location')]",
"tags": {
"originator": "cdmgmt-packer"
},
"properties": {
"dnsSettings": {
"domainNameLabel": "[parameters('dnsNameForPublicIP')]"
},
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('nicName')]",
"apiVersion": "[variables('networkInterfacesApiVersion')]",
"location": "[variables('location')]",
"tags": {
"originator": "cdmgmt-packer"
},
"properties": {
"ipConfigurations": [
{
"name": "ipconfig",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]"
},
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]"
]
},
{
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"apiVersion": "[variables('apiVersion')]",
"location": "[variables('location')]",
"tags": {
"originator": "cdmgmt-packer"
},
"properties": {
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": false
}
},
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]"
}
]
},
"osProfile": {
"adminPassword": "[parameters('adminPassword')]",
"adminUsername": "[parameters('adminUsername')]",
"computerName": "[parameters('vmName')]",
"secrets": [
{
"sourceVault": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults', 'pkrkv1fk27dv429')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "https://pkrkv1fk27dv429.vault.azure.net/secrets/packerKeyVaultSecret/ed..."
}
]
}
],
"windowsConfiguration": {
"provisionVMAgent": true,
"winRM": {
"listeners": [
{
"certificateUrl": "https://pkrkv1fk27dv429.vault.azure.net/secrets/packerKeyVaultSecret/ed...",
"protocol": "https"
}
]
}
}
},
"storageProfile": {
"imageReference": {
"offer": "WindowsServer",
"publisher": "MicrosoftWindowsServer",
"sku": "2016-Datacenter-smalldisk",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "fromImage",
"diskSizeGB": 64,
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"name": "osdisk",
"osType": "Windows"
}
}
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]"
]
}
]
}
There was a pre-existing issue about this; see #6005. I committed a fix and tested concurrent builds to verify it works. Maybe I missed something...
The value config.tmpOSDiskName is set when the template is created programmatically, and it is referenced in the JSON template.
I just kicked off a build with three concurrent VMs to the same RG, and it worked. I double checked the OS disk names too.
I am running Packer v1.2.2.
Since I do not fully understand how things work: is there a difference between using the parameter in the JSON template vs. the variable? Based on how other random attribute values are handled, it seems it would have to be referenced as a variable, not a parameter?
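From the ARM template docs, my understanding is that both resolve the same way at deployment time; the difference is only where the value comes from. A condensed sketch of the two forms (not from a real template):
"parameters": { "osDiskName": { "type": "String" } },   <-- value supplied by the caller (Packer) at deploy time
"variables": { "diskNameVar": "osdisk" },               <-- value fixed inside the template itself
...
"name": "[parameters('osDiskName')]"                    <-- either reference style resolves the same way in a resource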
Could you post the packer.json file from your example above? I'm wondering if it's something to do with using a build_resource_group_name. I tried without it... a new build resource group is created each time... it does work concurrently... and each RG has the same disk name, osdisk.
As a follow-up, this line has the osdisk name hardcoded into what is generated for the Azure deployment template. I verified that the parameters passed into the Azure build use the proper tmpOSDiskName value. I think if this "name": "osdisk" changes to "name": "[parameters('osDiskName')]", it would create the disk in the resource group with the random name. Going to try and get a PR going for this.
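Concretely, the edit would be a one-liner in the generated template (a sketch of the change, not the final PR):
"name": "osdisk",                       <-- current: hardcoded, collides across concurrent builds in the same RG
"name": "[parameters('osDiskName')]",   <-- proposed: use the randomized value Packer already passes in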
I think that line is fine, or rather I don't see why a change is necessary. The name should be osDisk for every VM that is deployed. The osDiskName parameter is used when setting the OS disk URL.
Here's a copy of my configuration file.
{
"variables": {
"cid": "your_client_id",
"cst": "your_client_secret",
"tid": "your_client_tenant",
"sid": "your_subscription_id",
"rgn": "your_resource_group",
"sa": "your_storage_account"
},
"builders": [
{
"type": "azure-arm",
"client_id": "{{user `cid`}}",
"client_secret": "{{user `cst`}}",
"subscription_id": "{{user `sid`}}",
"tenant_id": "{{user `tid`}}",
"resource_group_name": "{{user `rgn`}}",
"storage_account": "{{user `sa`}}",
"build_resource_group_name": "keep-resource-group-test00",
"capture_container_name": "images",
"capture_name_prefix": "packer",
"os_type": "linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"vm_size": "Standard_DS1_v2"
}
],
"provisioners": [
{
"execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
"inline": [
"df -ah",
"/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
],
"inline_shebang": "/bin/sh -x",
"type": "shell"
}
]
}
So I ran a VHD-backed example and, as I expected, concurrent builds were just fine, since the random .vhd name is used on the storage account.
I see the difference in our examples now.
In your example, the temporary VM is built with the .vhd on the storage account, which has the random disk name, and things are happy.
"storage_account": "{{user `sa`}}",
"capture_container_name": "images",
"capture_name_prefix": "packer",
----- from the template_builder.
"name": "osdisk", <-- this is never used in the build resource group
"vhd": { <--- using this
"uri": " <[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]"
},
My example is building to a managed image, which spins up the temp VM backed by a managed disk (not a .vhd) in the build resource group, using the name osdisk.
"managed_image_name": "{{user `azr_img_name`}}-v{{timestamp}}",
"managed_image_resource_group_name": "{{user `azr_img_rg`}}",
-----------
"osDisk": {
"name": "osdisk", <-- using this
"vhd": { <-- not using this
"uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]"
},
Since the resource group cannot have duplicate virtual disk names, concurrent builds will fail. The first build is fine, and the osdisk is copied into our managed image resource group with its proper timestamped name.
Even single-threaded, multiple builds are an issue. Possibly unrelated, but the osdisk in the resource group in this scenario doesn't seem to be getting removed post-build. Is Packer perhaps trying to remove the tmpOSDiskName and not osdisk? I haven't gotten to troubleshooting that, but it doesn't seem to be permissions, as I can use the same creds as the build on the CLI to remove the osdisk.
It still isn't clear to me, but have you found a workaround for your issue? It sounds like yes, but I could be misreading.
The cleanup should only delete things that were created as a result of the deployment. There may be a bug in the builder for this situation. Would you please send a gist with the debug output?
Well, writing a VHD (to a storage account using storage_account) concurrently certainly does work; however, our business model dictates writing a managed_image_name, which is specifically what is not working concurrently.
So the specific case we see concurrent builds not working is this one:
"image_publisher": "{{user `azr_img_publisher`}}",
"image_offer": "{{user `azr_img_offer`}}",
"image_sku": "{{user `azr_img_sku`}}",
"build_resource_group_name": "{{user `azr_tmp_rg`}}",
"managed_image_name": "{{user `azr_img_name`}}",
"managed_image_resource_group_name": "{{user `azr_img_rg`}}",
I'll get the error output and post it today.
If the delete issue persists, I'll open another thread for that. I'd rather not muddy this specific case with a possibly different issue. Plus I haven't dug into that deeply enough to rule out an issue on our end.
@boumenot Here is a gist with the debug output: https://gist.github.com/byron70/a037edaf62202c2da63c4a961ab3979e
I'll review the gist.
Are you picking a new managed_image_name each time, or are you re-using the same one? I would expect the former, but that isn't clear to me.
@boumenot In the original example I did have the same name, and I have since started using names with timestamps, "managed_image_name": "{{user `azr_img_name`}}-v{{timestamp}}", thinking maybe that was the issue.
Really appreciate your time looking into this. For now I started on a workaround: use the built-in random build_resource_group_name and do a copy post-build into the proper managed_image_resource_group_name where we want the final images to reside.
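Roughly, the workaround config just drops the pinned build resource group (a sketch; the post-build copy into the images RG is handled outside Packer):
{
  "type": "azure-arm",
  ...                                      <-- note: no build_resource_group_name; Packer creates (and later deletes) a randomly named temporary RG
  "managed_image_name": "{{user `azr_img_name`}}-v{{timestamp}}",
  "managed_image_resource_group_name": "{{user `azr_img_rg`}}"
}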
No problem.
I believe the builder is working as expected. It should only randomize values that the user cannot specify or values that can be defaulted. In the case of managed_image_name, the user is expected to pick the value. I think you have taken the correct approach of randomizing it yourself.
The builder has a random value for VHD builds because Azure does not allow you to influence the value. Azure randomly assigns a value at the time of capture. Managed images allow you to control the name, and the user is expected to pick the value. The VHD behavior works better in this situation, but I think not being able to control the value is worse.
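Picking a unique value yourself can be done inline with Packer's template functions, e.g. (a sketch; timestamp and uuid are standard template engine functions):
"managed_image_name": "img-base-{{timestamp}}"   <-- or use {{uuid}} for a fully random suffix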
@boumenot not sure why this is closed, since the issue is still not resolved?
I have the same problem as @byron70, and randomizing managed_image_name does not solve it, as the osdisk name stays the same.
==> azure-arm: ERROR: -> DeploymentFailed : At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
==> azure-arm: ERROR: -> Conflict
==> azure-arm: ERROR: -> ResourceDeploymentFailure : The resource operation completed with terminal provisioning state 'Failed'.
==> azure-arm: ERROR: -> ConflictingUserInput : A disk with name osdisk already exists in Resource Group XXXXXX. 'Name' is an optional property for a disk and a unique name will be generated if not provided.
I can't create a new resource group since my user does not have permissions. I have to stick to my resource group.
IMHO the error above clearly states that a name is provided, but it shouldn't be.
I'm not providing any name in my packer.json; actually, mine is pretty much the same as @byron70 's, but I can't use his workaround.
@boumenot do you think you could randomize the osdisk name? Or just delete it so Azure can randomize it? Otherwise concurrent builds are simply not possible in the same resource group for a managed image.
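The error text itself hints at the second option: if the builder simply omitted the name, Azure would generate a unique one. A sketch of what that osDisk block would look like (assuming Azure's documented behavior for an omitted 'Name'):
"osDisk": {
  "caching": "ReadWrite",
  "createOption": "fromImage",
  "managedDisk": {
    "storageAccountType": "Standard_LRS"
  },
  "osType": "Windows"                      <-- no "name" property: Azure generates a unique disk name
}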
I closed it because it sounded like he had a workaround, which is to assign his own unique name.
@boumenot In the original example I did have the same name, and I have since started using names with timestamps, "managed_image_name": "{{user `azr_img_name`}}-v{{timestamp}}", thinking maybe that was the issue.
If this is incorrect, then I'll re-open it.
@cantorek is correct; I don't really have a workaround. Using the random resource group will ultimately be problematic for us. Also, I only mentioned the managed_image_name change to a timestamped name because reusing the same name (unrelated to this issue) might cause the output managed image write to fail.
That aside.. this enumerates the exact scenario where it occurs.
So the specific case we see concurrent builds not working is this one:
"image_publisher": "{{user `azr_img_publisher`}}", <-- managed image from the Azure marketplace "image_offer": "{{user `azr_img_offer`}}", "image_sku": "{{user `azr_img_sku`}}", "build_resource_group_name": "{{user `azr_tmp_rg`}}", <-- same name across multiple builds "managed_image_name": "{{user `azr_img_name`}}", <-- doesn't mattter "managed_image_resource_group_name": "{{user `azr_img_rg`}}", <-- doesn't matter
This screenshot shows the artifacts during a concurrent packer build. The first build creates osdisk; the second will fail, since Azure does not allow duplicate disk names in the same resource group.
@boumenot Thanks for reopening this.
I believe you can't name the disk, but I might be wrong here. @byron70 's workaround was to use a random resource group name, which I cannot do, and still, even when you do that, the problem persists.
I have the same issue, symptoms, and config as @byron70, except I can only use one resource group, which means one build at a time :-(
@boumenot could this be as simple as @byron70 suggested, changing "name": "osdisk" to "name": "[parameters('osDiskName')]"?
I can try and test this tomorrow, but my corp env is somewhat limited :-(
I just assumed that Azure would let you name the disk whatever you want, so I gave it a try just to verify it would be OK.
I used the following excerpt in a new template and it works just fine. I wound up with a disk named Vpkrospv84vwsyyf, based on the parameter value I passed into the template when executing the deployment. parameters('osDiskName') already gets passed by Packer to the deployment; it seems it's just a matter of using it in the deployment template.
"storageProfile": {
"imageReference": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "6.9",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "fromImage",
"diskSizeGB": 32,
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"name": "[parameters('osDiskName')]",
"osType": "Linux"
}
}
I think that's the right fix. My only (minor) concern would be if there's a dramatic change in the VHD builder. I don't know if the name has to be conditioned on whether you're producing a VHD or a managed image.
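To make that concern concrete, the two storage profiles the builder emits differ in where uniqueness comes from (condensed from the excerpts earlier in this thread):
"osDisk": {
  "name": "[parameters('osDiskName')]",    <-- managed image build: the disk name must be unique within the RG
  "managedDisk": { "storageAccountType": "Standard_LRS" }
}
"osDisk": {
  "name": "osdisk",                        <-- VHD build: uniqueness comes from the random blob URI instead
  "vhd": {
    "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]"
  }
}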
Please submit a PR.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.