Packer: Azure managed image builder broken in 1.3.3 when snapshots are not enabled

Created on 6 Dec 2018 · 19 comments · Source: hashicorp/packer

Seems like 1.3.3 has a regression here: we have an Azure Packer build (using the hashicorp/packer:latest Docker image) that suddenly started failing with the error below, with no changes on our side other than the 1.3.3 upgrade:

==> azure-arm: Taking snapshot of OS disk ...
==> azure-arm: ERROR: -> UnsupportedResourceOperation : The resource type 'snapshots' does not support this operation.

I suspect this may be related to this feature from the 1.3.3 release:

builder/azure: Add options for Managed Image OS Disk and Data Disk snapshots [GH-6980]

However, we're not using the new managed_image_os_disk_snapshot_name / managed_image_data_disk_snapshot_prefix options.
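For context, opting into that feature is supposed to look something like this (a sketch based on the option names above; the snapshot names are hypothetical):

{
  "type": "azure-arm",
  "managed_image_os_disk_snapshot_name": "example_os_disk_snapshot",
  "managed_image_data_disk_snapshot_prefix": "example_data_disk_snapshot"
}

Since our template sets neither option, the snapshot step shouldn't be running at all.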

I will work on getting you guys the debug log output and a repro case - just thought you might want to know about this issue sooner rather than later.

bug builder/azure regression

All 19 comments

@danports - would you please share a gist of your configuration, so we can determine why this is failing.

@amydutta - would you please investigate and fix.

I can't share the full template, but here's a skeleton with all of the important bits: https://gist.github.com/danports/419e4ea77fdb36cc455c0a4358409e1a

For the time being, we reverted our build to 1.3.2 and confirmed that the same template builds successfully on that version but fails on 1.3.3.

I'm seeing this too after upgrading to v1.3.3. I'm MS-internal; feel free to reach out over mail/Teams if there's anything else I can help you track down.

Our AzureRM builder configuration:

{
  "type": "azure-arm",

  "tenant_id": "{{user `AzureRM_TenantID`}}",
  "subscription_id": "{{user `AzureRM_SubscriptionID`}}",
  "client_id": "{{user `AzureRM_ClientID`}}",
  "client_secret": "{{user `AzureRM_ClientSecret`}}",

  "virtual_network_resource_group_name": "{{snipped}}",
  "virtual_network_name": "{{snipped}}",
  "virtual_network_subnet_name": "{{snipped}}",
  "private_virtual_network_with_public_ip": "false",
  "build_resource_group_name": "{{snipped}}",

  "managed_image_resource_group_name": "{{snipped}}",
  "managed_image_name": "Generic{{user `ImageNameSuffix`}}",

  "os_type": "Windows",
  "image_publisher": "MicrosoftWindowsServer",
  "image_offer": "WindowsServer",
  "image_sku": "2016-Datacenter",

  "communicator": "winrm",
  "winrm_use_ssl": true,
  "winrm_insecure": true,
  "winrm_timeout": "3m",
  "winrm_username": "packer",

  "vm_size": "Standard_DS2_v2"
}

@boumenot I will take a look first thing on Monday.

Any updates on this? I'm experiencing the same issue as above. Here is the builder config:

"builders": [
    {
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `managed_image_resource_group_name`}}",
    "managed_image_name": "{{user `managed_image_name`}}",

    "virtual_network_name": "{{user `virtual_network_name`}}",
    "virtual_network_subnet_name": "{{user `virtual_network_subnet_name`}}",
    "virtual_network_resource_group_name": "{{user `virtual_network_resource_group_name`}}",

    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",

    "communicator": "winrm",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "winrm_timeout": "3m",
    "winrm_username": "packer",

    "azure_tags": {
        "Organization": "{{user `Organization`}}",
        "Environment": "{{user `Environment`}}",
        "Business_Owner": "{{user `Business_Owner`}}",
        "Module": "{{user `Module`}}",
        "Version": "{{user `Version`}}",
        "Line_of_Business": "{{user `Line_of_Business`}}",
        "Developer": "{{user `Developer`}}",
        "Location": "{{user `Location`}}"
    },

    "location": "{{user `location`}}",
    "vm_size": "{{user `vm_size`}}"
    }
],

There is no update at the moment. We recommend you use the previous version of Packer to avoid this issue.

I submitted a fix for the issue. Apologies to everyone who got hit. We'll fix our process so we can do better in the future.

The snapshot feature was inappropriately enabled whenever Managed Disks were used; it should have been gated on the user explicitly enabling snapshots for Managed Disks. With the fix, templates that don't set the snapshot options skip the snapshot step entirely.

I created a private build for Darwin, Linux, and Windows at https://1drv.ms/f/s!AmGtc0f52KBYgcpny1TFN9Pf-UzW9A if you would like to verify this version.

Just chiming in to say that the private build posted just above worked for me.
I am using this builder config (sensitive info removed of course):

{
    "builders":
    [
        {
            "type": "azure-arm",
            "ssh_pty": true,
            "client_id": "{{user `client_id`}}",
            "client_secret": "{{user `client_secret`}}",
            "tenant_id": "{{user `tenant_id`}}",
            "subscription_id": "{{user `subscription_id`}}",

            "build_resource_group_name": "{{user `build_resource_group_name`}}",
            "managed_image_resource_group_name": "{{user `managed_image_resource_group_name`}}",
            "managed_image_name": "{{user `managed_image_name`}}",

            "os_type": "Linux",
            "image_publisher": "OpenLogic",
            "image_offer": "CentOS",
            "image_sku": "7.2",

            "vm_size": "Standard_DS2_v2"
        }
    ],

    "provisioners":
    [
        {
            "type": "ansible",
            "playbook_file": "../ansible/linux/build.yml"
        },
        {
            "type": "shell",
            "inline_shebang": "/bin/sh -x",
            "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
            "inline": [
                "/usr/sbin/waagent -force -deprovision && export HISTSIZE=0 && sync"
            ],
            "skip_clean": true
        }
    ]
}

Awesome! Thanks for the confirmation. I've merged the fix.

Thanks for fixing. When do you expect it to be released?

The next release will probably be in mid to late January; we try to have a release cadence of ~6 weeks.

Can confirm that downgrading from 1.3.3 to 1.3.2 works for me. If you are not using Chocolatey or a similar installer that lets you specify a version, the 1.3.2 downloads can be found here: https://releases.hashicorp.com/packer/

If you're using Homebrew on macOS and previously had 1.3.2 installed on your local system, you can quickly downgrade with: brew switch packer 1.3.2

Same issue here; this is a deal breaker for us.

Any updates on this? Thanks!

All of the updates are on this thread. A fix was merged; see the Closed notification above. The fix is targeted at Packer 1.3.4, which has not been released yet. You can either wait or revert to the last known good version (1.3.2). I hope that helps.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
