packer 1.3.x "bios.hddorder" in vmx causing trouble with ovftool

Created on 20 Sep 2018  ·  56 Comments  ·  Source: hashicorp/packer

Hi,

I've been using packer successfully up until version 1.2.x.
Since packer version 1.3.0 there appears to be a change causing issues in my environment.

Host platforms on which I run packer:
macOS (10.12.6): VMware Fusion Professional Version 8.5.10 (7527438)
Ubuntu 16.04.2 LTS (4.4.0-62-generic): VMware Workstation Pro 14.1.1.7528167

So I build my virtual machines using the vmware-iso packer builder on those systems.
Once completed, I deploy them to a vCenter/vSphere/ESXi environment via ovftool:
ovftool --name="my-machine" --datastore="myDataStore" myVirtualMachine.vmx "vi://[email protected]@vcenter-01.domain.local/Datacenter/host/Cluster"

This would then usually proceed and display something like:

Opening VMX source: /path/myVirtualMachine.vmx
Opening VI target: vi://user%[email protected]:443/Datacenter/host/Cluster
Deploying to VI: vi://user%[email protected]:443/Datacenter/host/Cluster

Disk progress: 1%
Disk progress: 2%
Disk progress: 3%
[... shortened ...]
Disk progress: 98%
Disk progress: 99%
Transfer Completed                    
Completed successfully

The machine is then present in the remote environment and I can boot it up successfully.

However, with a packer-1.3.x built virtual machine, the upload process appears to be much faster; it completes almost right away. When attempting to boot the virtual machine, it just attempts a network boot and then displays: "_Operating System not found_".

I compared the VMX file of a packer-1.2.x built VM with one created using packer-1.3.x and noticed that the latter one contains the following parameter:
bios.hddorder = "scsi0:0"

After removing this parameter, everything was working as expected again for the packer-1.3.x built VM.

I can work around this by specifying the following in my packer template, leading to the parameter being removed after build:

"vmx_data_post": {
   "bios.hddorder": ""
}
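To show where this sits, here is a minimal sketch of the workaround in the context of a complete vmware-iso template (the ISO URL, checksum, credentials and shutdown command are placeholders, not the values actually used):

    {
      "builders": [
        {
          "type": "vmware-iso",
          "iso_url": "http://example.com/install.iso",
          "iso_checksum_type": "sha256",
          "iso_checksum": "<sha256 of the iso>",
          "ssh_username": "packer",
          "ssh_password": "packer",
          "shutdown_command": "sudo shutdown -P now",
          "vmx_data_post": {
            "bios.hddorder": ""
          }
        }
      ]
    }

The builder still sets bios.hddorder during the build; the vmx_data_post entry blanks it out when the final VMX is written (see the "Setting VMX: 'bios.hddorder' = ''" line in the log further down in this thread).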

Checking some of the packer code/history, I can see that the parameter was added to the vmx template not too long ago in the context of #6197 and https://github.com/hashicorp/packer/pull/6204

Any idea why this is causing ovftool to behave strangely and not upload the disk successfully?
The resulting disk in my datastore is about 36MB while the original size was 1.5GB. No error thrown by ovftool though.

Thanks!

bug builder/vmware

Most helpful comment

I've actually reached out to VMWare via HashiCorp's Partner Alliances team; I'll let you know if/when I get an update.

All 56 comments

This also appears to be causing an issue when attempting to use a vmware template created using the vsphere and vsphere-template post processors.

How strange. @arizvisa do you have any idea what could be causing this?

in terms of the upload performance for ovftool, not a clue. i literally thought that it was just doing a POST with its data to upload the files...

i did a (very) quick search through ovftool's docs and didn't find any references to the boot order exactly. however, i ran strings on ovftool.exe and grepped it for bios.bootorder...and it looks like there's actually a reference to it. I can't imagine what it would do different based on this though...I can pull it into a disassembler to try and reverse what it's actually doing in a bit.

is this specific to just ovftool, like ovftool isn't uploading the artifacts completely despite it claiming that it does? Or, is packer incorrectly specifying a bios boot order of scsi when the drive is ide/sata/nvme/something-else and that's what is causing the issue?

I don’t believe that there is better performance per se but that not everything is being uploaded. I can verify this tomorrow morning.

@jcsmith, cool. would appreciate it as i'm just a contributor, not really a full-time dev or anything.

also to clarify, the disk that's being created for the .vmx is definitely scsi, right? so bootorder isn't completely wrong?

my methodology for narrowing down this problem is pretty much going to be:

  1. distinguish whether the artifacts are being uploaded correctly
  2a. if they're not, figure out how "bios.bootorder" affects ovftool's upload (through reversing), and then patch code (if necessary)
  2b. if they are, verify whether bios.bootorder is actually correct or not
  3. if bootorder is incorrect, then go into developer mode, and figure out some elegant way to get this option's syntax right for both vmware-* _and_ ovftool


@arizvisa I am experiencing the exact behavior that @as-dg is experiencing. When packer uses ovftool to upload to vSphere, it appears that it uploads the VM fine, albeit extremely fast, but when you go to view the files on the datastore you'll find that the size of the VMDK file is basically 0. Setting bios.hddorder to be an empty string via the vmx_data_post setting appears to work around the issue.

ok. from a few minutes of reversing it yesterday, it looks like ovftool is only passing that parameter via soap (https://www.vmware.com/support/developer/converter-sdk/conv61_apireference/vim.vm.BootOptions.html) So, it's not doing anything special short of building the request and sending it.

Something that might be helpful is to see the result of that particular SOAP request (non-encrypted, or if it's encrypted because of SSL, i'd need the private key to decrypt). Specifically VirtualMachineBootOptions. Maybe another way could be to verify in the management configuration for the VM whether the boot order was actually set or not. Since I don't have an ESX instance available to me, someone else would have to do this.

Although this pretty much means I'm debugging ovftool's interaction with esx, since the only thing that packer's developers can really do is revert my patch, which was suggested by the original reporter (which means his bug comes back), or not revert it (which means this issue with ovftool still happens without the workaround).

oh wait, it looks like ovftool has some debugging options: https://www.virtuallyghetto.com/2013/08/quick-tip-useful-ovftool-debugging.html

$ ovftool --help debug

One of these has a logfile, that'll probably log all the requests that ovftool is making.

When one of you guys uses ovftool to upload it to ESX, can you pass the following options to make a verbose log file? I imagine ovftool's developers have enough info in the tools' logs to help troubleshoot this.

$ ovftool -X:logLevel=verbose -X:logFile=/path/to/output/file ...

@as-dg is bootOrder set in your vmx file?

Yes, I believe the bootOrder parameter was set as well. I was looking at both bootOrder and bios.hddOrder and found the latter to be causing this issue. I had also run ovftool in debug mode initially, but nothing useful (at least in my view) came of it. However, I can run through this today again and provide the output; maybe I missed something.

@arizvisa I believe there's some confusion between bios.bootorder and the actual problematic bios.hddorder

Let me come back to you in a few hours with debug output.

@arizvisa Alright, here is the log output of ovftool. I hope there is something useful in there. Note that I have anonymized all kinds of id's, tokens, thumbprints etc.
upload.log

I can confirm this issue and the workaround provided by @as-dg. If there is anything I can do to help, please drop me a line.

Here's a log from using chained post-processors for vSphere templates.

Here's the config.

  "post-processors" : [
    {
      "type" : "vsphere",
      "host" : "{{user `vsphere_host`}}",
      "username" : "{{user `vsphere_username`}}",
      "password" : "{{user `vsphere_password`}}",

      "datacenter" : "{{user `vsphere_datacenter`}}",
      "cluster" : "{{user `vsphere_cluster`}}",
      "datastore" : "{{user `vsphere_datastore`}}",

      "vm_name" : "{{user `name`}}",
      "disk_mode" : "{{user `vsphere_disk_mode`}}",
      "insecure" : "{{user `vsphere_insecure`}}",
      "resource_pool" : "{{user `vsphere_resource_pool`}}",
      "vm_folder" : "{{user `vsphere_vm_folder`}}",
      "vm_network" : "{{user `vsphere_vm_network`}}",

      "overwrite" : true
    },

    {
      "type" : "vsphere-template",
      "host" : "{{user `vsphere_host`}}",
      "username" : "{{user `vsphere_username`}}",
      "password" : "{{user `vsphere_password`}}",
      "datacenter" : "{{user `vsphere_datacenter`}}",
      "folder" : "/{{user `vsphere_vm_folder`}}",
      "insecure" : "{{user `vsphere_insecure`}}",

      "keep_input_artifact": true
    }
  ]

2018/11/09 12:52:48 packer: 2018/11/09 12:52:48 Executing: /Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager -d builds/native/atomic-host-7-2018-11-09.vmware/disk.vproject
  Defragment: 100%!d(MISSING)one.11/09 12:52:54 stdout: Defragment: 0%!d(MISSING)one.
2018/11/09 12:52:54 packer: Defragmentation completed successfully.
2018/11/09 12:52:54 packer: 2018/11/09 12:52:54 stderr:
2018/11/09 12:52:54 packer: 2018/11/09 12:52:54 Executing: /Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager -k builds/native/atomic-host-7-2018-11-09.vmware/disk.vproject
  Shrink: 100%!d(MISSING)one.018/11/09 12:52:59 stdout: Shrink: 0%!d(MISSING)one.
2018/11/09 12:52:59 packer: Shrink completed successfully.
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 stderr:
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Setting VMX: 'bios.hddorder' = ''
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Writing VMX to: builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx
==> vmware-iso: Cleaning VMX prior to finishing up...
    vmware-iso: Unmounting floppy from VMX...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Deleting key: floppy0.present
    vmware-iso: Detaching ISO from CD-ROM device...
    vmware-iso: Disabling VNC server...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Writing VMX to: builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx
==> vmware-iso: Skipping export of virtual machine (export is allowed only for ESXi)...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Executing: /Applications/VMware Fusion.app/Contents/Library/vmrun -T fusion list
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 stdout: Total running VMs: 0
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 stderr:
2018/11/09 12:53:00 [INFO] (telemetry) ending vmware-iso
2018/11/09 12:53:00 [INFO] (telemetry) Starting post-processor vsphere
==> vmware-iso: Running post-processor: vsphere
    vmware-iso (vsphere): Uploading builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx to vSphere
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 Starting ovftool with parameters: --acceptAllEulas --name=atomic-host-7 --datastore=storage --noSSLVerify=true --diskMode=thin --vmFolder=proj/RP/Templates --network=example --overwrite builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx vi://vc-project-platform:<password>@vcenter.example.com/dtm-dc01/host/dtm-dc01-proj/Resources/RP
    vmware-iso (vsphere):
2018/11/09 12:53:42 [INFO] (telemetry) ending vsphere
2018/11/09 12:53:42 [INFO] (telemetry) Starting post-processor vsphere-template
==> vmware-iso: Running post-processor: vsphere-template
2018/11/09 12:53:42 [INFO] (telemetry) ending vsphere-template
2018/11/09 12:53:42 Deleting original artifact for build 'vmware-iso'
2018/11/09 12:53:42 ui error: Build 'vmware-iso' errored: 1 error(s) occurred:

* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement
2018/11/09 12:53:42 Builds completed. Waiting on interrupt barrier...
2018/11/09 12:53:42 machine readable: error-count []string{"1"}
2018/11/09 12:53:42 ui error: 
==> Some builds didn't complete successfully and had errors:
2018/11/09 12:53:42 machine readable: vmware-iso,error []string{"1 error(s) occurred:\n\n* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement"}
2018/11/09 12:53:42 ui error: --> vmware-iso: 1 error(s) occurred:

* Post-processor fBuild 'vmware-iso' errored: 1 error(s) occurred:
ailed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement
==> Builds finished but no artifacts were created.

2018/11/09 12:53:42 [INFO] (telemetry) Finalizing.
* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement

==> Some builds didn't complete successfully and had errors:
--> vmware-iso: 1 error(s) occurred:

* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement

==> Builds finished but no artifacts were created.
2018/11/09 12:53:43 waiting for all plugin processes to complete...
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited

Bump. Did we ever get a fix for this, or just the workaround? I have also run into the same issue with packer 1.3.2.

Just the workaround for now; I've not had a chance to deeply investigate this.

I've just come across the same problem, spent quite a while trying to figure it out...
It'd be great to find a proper fix

Essentially the problem was narrowed down to not being related to Packer at all, but rather to ESX (or Ovftool) when using Ovftool to upload a .vmx with a particular option.

This happens only when the bios.hddorder parameter is in your .vmx. The setting of this option was introduced into packer due to a user having an issue with their VM not booting up off of the correct hard disk (it would boot off the cdrom device instead).

This thread narrowed it down to being 100% related to Ovftool, and the problem was reproduced outside of packer. So, because of this, I'd pretty much consider it an issue for VMware/Ovftool and worth contacting them about.

The only "fix" that Packer can do (in the meantime) is to avoid assigning "bios.hddorder" entirely, which then introduces the other issue of the VM not choosing the correct hard disk to boot up off of when uploading. So that means the decision is mutually exclusive..

  • Keep the assignment of hddorder which tells VMware the correct hdd boot order so that you can boot up off of the correct hd
  • Remove assignment of hddorder which lets Ovftool upload, but then the VM probably won't boot according to #6197.

So again, it'd be worthwhile for somebody with a support contract to contact VMware and say something like "I can't use ovftool to upload this particular VM", and then send them the .vmx to see why a particular .vmx option (that's in their docs) results in Ovftool sending out a soap request that returns a 200 but doesn't actually upload anything.

@arizvisa -- Agree with your interpretation and approach. I am going to reach out to vmware support for assistance with this issue on my end and report back any useful info they may provide (if any)

I have been struggling with this bug for 7 days and had narrowed it to ovftool issues on friday, but had not yet gotten to vmx adjustments to fix the issue and happened upon this issue this evening.

Interestingly, I am not using the same vmx_data option identified as the cause here (bios.hddorder), meaning there may be a number (> 1) of options that cause this behavior. Never mind, I see it now in the VMX... sorry about that, I'll still open the case with vmware :)

Relevant snippets from my template:

         "vmx_data": {
          "annotation":  "Plan: {{ user `Plan`}} - #{{ user `Build` }}, Build Timestamp: {{ user `Timestamp` }}, Build Agent: {{ user `Builder` }}",
          "RemoteDisplay.vnc.enabled": "false",
          "RemoteDisplay.vnc.port": "5900",
          "memsize": "{{user `memory_size`}}",
          "numvcpus": "{{user `cpus`}}",
          "scsi0.virtualDev": "lsisas1068",
          "ethernet0.virtualDev": "vmxnet3",
          "vcpu.hotadd": "TRUE",
          "mem.hotadd": "TRUE",
          "virtualHW.version": "11"
        } 
    "post-processors": [
        [ {
        "type": "vsphere",
        "host": "{{ user `vcenter_host` }}",
        "insecure": true,
        "datacenter": "{{ user `vcenter_datacenter` }}",
        "datastore": "{{ user `datastore` }}",
        "disk_mode": "{{ user `disk_mode` }}",
        "cluster": "{{ user `cluster` }}",
        "username": "{{ user `vcenter_user` }}",
        "password": "{{ user `vcenter_pw` }}",
        "vm_name" : "packer-win2012r2-datacenter",
        "vm_folder" : "{{ user `vm_folder` }}",
        "vm_network" : "{{ user `vm_network` }}",
        "overwrite": true
    },

I can't easily paste in the packer debug log, but suffice it to say, from the above, the default ovftool arguments are passed and the vsphere post-processor takes roughly 5s for what should be a ~60GB upload. I will report back once I've identified which of the offending vmx_data params is causing this behavior as well.

I've actually reached out to VMWare via HashiCorp's Partner Alliances team; I'll let you know if/when I get an update.

@paullschock, just a heads up:

You can use gist.github.com to paste large files and such, but also, some of the stuff in your vmx_data can be set with options available in the vmware builders:

  • Your disk adapter type, "scsi0.virtualDev", can be set with the "disk_adapter_type" option.
  • Your network adapter, "ethernet0.virtualDev", has a "network_adapter_type" option.
  • The hardware config, "virtualHW.version", can be set with "version".

In the next release of packer, you'll also be able to set "memsize" and "numvcpus" with "memory" and "cpus" (respectively). If you feel things like "vcpu.hotadd" and "mem.hotadd" should be an option as well, definitely let any of us know with an issue on those.
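For illustration, the vmx_data snippet above could be expressed with those builder options roughly like this (a sketch only; the option names are the ones listed above, the values come from the posted template, and the hotadd keys stay in vmx_data since there's no dedicated option for them):

    {
      "type": "vmware-iso",
      "disk_adapter_type": "lsisas1068",
      "network_adapter_type": "vmxnet3",
      "version": "11",
      "vmx_data": {
        "vcpu.hotadd": "TRUE",
        "mem.hotadd": "TRUE"
      }
    }

Once the release mentioned above is out, "memsize" and "numvcpus" could similarly move to "memory" and "cpus".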

Just saying this now since there's been some churn in packer's repo related to warning users against the usage of vmx_data.

@SwampDragons Thank you, I will hold-off on my effort then. If needed, I am happy to pull whatever info you may find useful.

@arizvisa Thank you, I will open an issue re: 'hotadd' options as we've found that setting useful in some narrow operational cases or incident response efforts.

I have been experiencing the same issue, with a slight twist: we use ovftool to convert the result to an OVF, so there's a slightly different trigger. Uploading this manifests the same issue.

The offending segment is converted to the following in the OVF:

    <vmw:BootOrderSection vmw:instanceId="6" vmw:type="disk">
      <Info>Virtual hardware device boot order</Info>
    </vmw:BootOrderSection>

Others have experienced this issue, working around it by removing the segment from the OVF file, but this is sub-optimal.

The bios.hddorder workaround above means that the generated OVF no longer has this snippet. Hopefully posting this here makes this issue easier to find for others.

@SwampDragons thanks for reaching out to the VMWare team, that's fantastic!

@arizvisa This is probably unrelated to this particular issue, because for this to be a regression the people reporting it aren't setting a disk adapter type and are defaulting to lsilogic/scsi disks.

BUT: looking at the code versus the documentation, I'm confused about the adapter types we're using.

We use the disk adapter type in the '-a' flag when creating disks.
Looking at the docs for desktop:
-a [ide|buslogic|lsilogic]
and the docs for ESXi:
-a --adaptertype [buslogic|lsilogic|ide|lsisas|pvscsi]

But in our code, we check for "scsi", "nvme", "sata", and "ide". The default behavior should be the same, but I'm confused about why the only overlap I'm seeing here is "ide".

I'm thinking it may make sense to revert #6204 until this is resolved; from what I can tell, it's causing problems for more users than the old behavior was.

@SwampDragons, confused about the adapter types? or the bus types, rather?

Options like "scsi", "nvme", "sata", and "ide" are all bus types (or, as vmware calls them, disk types), whereas "ide", "buslogic", "lsilogic", "lsisas", and "pvscsi" are all adapter types and really more like protocols. The "IDE" adapter type is probably in all actuality ATA (or something), and they might've just slipped up on the name.

Did that help? or did I miss your question..

Sure, I'm totally fine with reverting that patch. It was essentially just an implementation of the original reporter's solution to the problem. Despite me being the committer, it's really his patch tbh.

Oh, one thing we'll need to retain from it is the bugfix in step_clean_vmx.go; I'll create a PR for that fix right now for whenever you revert #6204.

@SwampDragons, PR #7066 gets rid of that regex hack that I mentioned by adding a list that the builder can add temporary devices to in order to remove them properly during step_clean_vmx.go. So when you revert #6204, the temporary cdrom and floppy devices that are temporarily added during the build are hopefully removed properly during clean regardless of their type.

As mentioned in the PR, it has support for other device types too but as of now only the "cdrom" and "floppy" devices are added.

I'm looking here: https://github.com/hashicorp/packer/blob/master/builder/vmware/iso/step_create_vmx.go#L186-L216

We're doing a switch statement using DiskAdapterType but comparing it to values of bus types, rather than comparing them to the actual options: "ide", "buslogic", and "lsilogic". Why would the DiskAdapterType ever be one of those bus types in the way this code is executed?

Okay, the code that caused this issue is reverted. I'm going to close this and make a new issue to reconcile this problem with #6197 moving forward. We need to find something that will work for everyone.

@SwampDragons, Ok. I see what you're saying.

So this might be weird naming, perhaps due to the way we've been talking about it lately, but specifying the "disk adapter type" is actually only applicable to the SCSI device type. SCSI devices are the only disk types that support different APIs to access them. (Way back when, every hard disk manufacturer implemented SCSI in their own particular way, which is why you needed SCSI drivers on some occasions.) However, all of the other bus types (IDE, SATA, and NVMe) have their own standard protocol/api (IDE uses ATAPI, etc.) to access them.

So with regards to the DiskAdapterType as a Packer configuration, a user can specify any of these standard disk adapter types. However if an unknown one is specified, then it's assumed that the user is specifying the SCSI protocol (or really the SCSI disk controller type). This is because "scsi" is the only type in VMware that actually supports customizing the disk controller type.

It's prudent to note that if "scsi" is specified as the disk adapter type, the logic defaults to "lsilogic" as the controller type (similar to VMware's defaults). So in all actuality the "scsi" option is really more like an alias to "lsilogic". This is why the "scsi" case appears so minimalistic before it falls through to the default case. In the logic, all of this SCSI stuff is handled by that default case and so if the user specifies anything other than "ide", "nvme", or "sata" then we assume that the user is specifying a SCSI bus with a specific SCSI protocol/disk-controller.

So in other words, DiskAdapterType is actually used to specify the bus type. If anything else is specified, then it's assumed that the user is specifying "scsi" with that particular disk controller since "scsi" is the only one that VMware needs to know the controller type for.
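In template terms, that works out to roughly the following (using adapter values already mentioned in this thread as examples):

    "disk_adapter_type": "sata"     -> SATA bus, standard protocol
    "disk_adapter_type": "nvme"     -> NVMe bus, standard protocol
    "disk_adapter_type": "ide"      -> IDE bus, standard protocol
    "disk_adapter_type": "pvscsi"   -> SCSI bus with the pvscsi controller type
    "disk_adapter_type": "scsi"     -> SCSI bus, defaulting to the lsilogic controller type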

Sorry for the long read, hopefully it makes sense.

I see, so it's a naming issue; I was just finding it confusing that we are using the name DiskAdapterType when we're using it in a way that isn't directly related to the '-a' disk adapter type option in vmware.

Got a response from VMWare:

The team believes they may have already fixed this bug in their 4.3 release, available at https://my.vmware.com/web/vmware/details?downloadGroup=OVFTOOL430&productId=742
Can you try the updated version and see how it goes? If not I will connect you with them directly.

cc @jcsmith @jamestelfer @pgrinstead1 @paullschock ☝️ Are any of you able to confirm that your issues are resolved with ovftool 4.3?

Ah sweet. So they recognized it and fixed it. Thanks for following up on that @SwampDragons.

@SwampDragons thanks for the follow up!

That's one of the versions of ovftool that I'm using, and it displays the issue. I'm also using a later version (4.3.0 Update1) and it has the issue too.

Bummer. I'll let them know.

Turns out that the update with the ovftool bugfix only works if you're also on esxi 6.7, which I assume isn't the case here.

@SwampDragons confirmed same as @jamestelfer -- I tried v 4.1 - 4.3 without success. Talking to an ESXi 6 u3 backend (vcenter 6.0).

I'm using 5.5 and 6.

@SwampDragons Can confirm that ovftool 4.3 (Running VMware Fusion 11.0 on OSX) has the same issue. Contacted VMware support and they acknowledged the issue and a ticket is open with their product team.

Okay, so this does sound like the issue is that none of us are on the version of esxi (6.7) that was released with ovftool 4.3. The folks at VMWare said you need to be using 6.7 for this bugfix to work.

Obviously that's not going to work for our users, so we have reverted the change that broke things and we'll have to figure out a different solution for the issue we were trying to fix with that change. Can y'all confirm that your uploads are working again with Packer v1.3.3?

Uploading to ESXi 6.5 with Packer v1.3.3 still seemed to present the same issues.

This sounds like it's an issue with ESX and not with ovftool.

This would make sense because when I reversed that component of ovftool, it wasn't doing anything special other than building a soap request with the information from this option in the .vmx. So it's not like ovftool was doing any processing or whatever.

Maybe it should be asked if they'll backport the fix?

(edited to add information about my analysis of ovftool).

@jcsmith what version of Packer _does_ still work for you? I'm starting to wonder if your issue is different.

1.2.5 definitely works. But I’m pretty sure anything <1.3.0 works.


Oh! I thought the revert had made it into the 1.3.3 release but it didn't. Here are some builds of a patch (#7108) that should actually fix this, and if it does work I'll schedule it for v1.3.4.

windows:
packer.zip

osx:
packer.zip

linux:
packer.zip

Testing with the OSX build linked to above now. I'll provide an update as soon as the build finishes.

These builds function as expected.

Okay, thanks. I'll merge this revert so that the patch is in v1.3.4. It'll be released in ~6 weeks

I have confirmed this works for me as well and I appreciate the fix. It would be nice to not have to wait ~6 weeks to cut a release.

@zpratt Sorry, but we don't expedite releases for bugfixes unless it's a critical bug or security issue, which I don't think this qualifies as, especially since you have a patched build you can use in the meantime. Also, HashiCorp shuts down for the holidays starting on Friday and doesn't reopen until early January, so cutting a release this week is a Bad Idea from a support perspective. Thanks for your patience though! I know it's frustrating to have to wait for something that ideally never would have broken in the first place.

It appears others are having the same issue; I faced the same. I agree with what others have said: the upload definitely works with ESXi 6.7 and does not work with ESXi 6.5 for me. ovftool and PowerCLI Import-vApp produce the same behavior; one of my two disks doesn't upload at all, totally ignored by both tools.

https://communities.vmware.com/message/2827714#2827714

I believe I have a different build environment from others that have posted, but I experienced the same issue:
packer 1.3.3 running on Windows 2016, using a remote ESXi 6.5 host for builds.
"type": "vmware-iso",
"disk_type_id": "zeroedthick",
"disk_adapter_type": "lsisas1068",
"remote_type": "esx5",
"format": "ova",
ovftool 4.3.0 Update 1 (same symptom was seen with ovftool 4.2.0)

The resulting OVA appeared to contain the correct vmdk, but the upload from that OVA would create an empty vmdk in the VM; I tried deploying directly to ESXi 6.5 and through vCenter 6.5 to a different 6.5 host.

The windows patch build posted above from #7108 also resolves the issue for me.

Okay... this issue was closed some time ago, with the solution and testing already done. So, please stop adding to the discussion unless you're trying to re-open the issue...

For the record, ESXi 6.5 is the _only_ version that has issues when receiving a VM that has the bios.hddorder option set. If you're using ESXi 6.7, VMware has already fixed this problem. It is up to them if they want to backport the patch for your ESXi 6.5 instance. If you have a support relationship with them, ask them about it.

So for people stuck with a bugged instance of ESXi 6.5:

  1. If you need to deploy _anything_ to ESXi 6.5, you'll need to modify the .vmx and remove the bios.hddorder assignment. This option is not commonly used but was needed in certain instances where VMware will try and boot up off of the cdrom device (due to the bus type) instead of the hard disk device.

  2. If (and only if) you have this aforementioned cdrom booting issue on ESXi 6.5, upload your VM without bios.hddorder set, and then modify the bootoptions attribute (VirtualMachineConfigInfo) via the web interface or by modifying the VM's config via PowerCLI.

  3. If you don't know how to modify your .vmx (prior to uploading) and absolutely need to do this from within packer but can't wait for the new release, you can likely use vmx_data to set bios.hddorder to an empty string.

  4. If none of these solutions satisfy you and you can't wait for a new release, @SwampDragons included a binary for each of the 3 main platforms in the comments (here). If those aren't good enough, feel free to build your own release of packer that includes PR #7108.

But just so it's clear, the correct fix is to upgrade your instance of ESXi since your ESXi 6.5 instance doesn't support bios.hddorder and every other version of ESXi seems to.

It also appears that ovftool lets you specify properties via --prop, which will let you affect the boot options. I personally don't use ovftool for anything, as powershell-core and PowerCLI are more powerful for deployment. But I'd recommend looking into this if you're an ovftool fan...

(edited to add mention of the ovftool properties)

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
