Packer: Packer build for hyperv-iso fails with `Waiting for SSH` error.

Created on 22 Jun 2017  ·  69 Comments  ·  Source: hashicorp/packer

BUG:

We are trying to create an Ubuntu Vagrant box using the hyperv-iso builder. We are stuck at the error `Waiting for SSH to become available`. After a few minutes, it times out and the build fails.

bug builder/hyperv

Most helpful comment

For Ubuntu (I just noticed the JSON file above), make sure you have these packages being installed via your config:

d-i pkgsel/include string curl openssh-server sudo sed linux-tools-$(uname -r) linux-cloud-tools-$(uname -r) linux-cloud-tools-common

and you probably need to run this command (assuming you're trying to log in as root to provision the box):

d-i preseed/late_command string                                                   \
        sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config

All 69 comments

please add debug log output by running packer with the environment variable PACKER_LOG=1 set

packerlog.txt

Attached detailed debug log.

Are you seeing an IP address assigned to the VM? Is it able to download the preseed file or run updates?

If it gets an IP address, make sure the SSH server is up and running. Check that your user is configured for the SSH server. Check the firewall on the VM. Can you SSH to the VM?

Then check the firewall on the machine running Packer and anything in between. Windows Firewall has blocked access to Packer's HTTP server for me before.
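
To make those checks concrete, here is a minimal sketch of what to run from the guest console (assuming a systemd-based distro; the SSH service may be named `ssh` or `sshd` depending on the distro):

```sh
# Does the guest have an IPv4 address?
ip addr show

# Is the SSH daemon installed and running?
systemctl status sshd    # or: systemctl status ssh

# Is anything listening on port 22, and is a firewall rule blocking it?
ss -tlnp | grep ':22'
iptables -L -n | grep -i 22
```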

No IP address is assigned to the VM. It is not able to run updates. It gets stuck waiting for the SSH connection.

I've found "Ubuntu install hangs on Hyper-V" - it's an old post but maybe...

There seems to be no solution to this IMO with the current configuration. The issue is not with packer but rather with the provider implementation using hyper-v.

Is there a working example somewhere within Packer?

I am having this problem too:
OS: Windows 10
Image: Ubuntu 16.04.02
Configuration: Same across both Windows machines
Packer Version:
_1.0.2 - Fails_
_0.12.3 - Works_

I observe the same symptoms. What data can I gather and deliver to move the issue forward?

I've wrestled with this issue many times. You need to get the Hyper-V plugin running inside the VM during the install process or packer will never detect the IP and thus never connect. It's trickier than it sounds. Especially if you only want to install the Hyper-V plugins when building Hyper-V boxes. I've managed to get it working on Debian, Ubuntu, Alpine, Oracle, CentOS, RHEL, Fedora, Arch, Gentoo and FreeBSD. See here. Which target are you going for?

As an aside, a major hurdle I've been having is the installer finishing, then rebooting, only it doesn't eject the install media and boots from it again. That issue can also cause the symptom you're seeing. It would be nice if packer set up the machines with the hard disk higher in the boot priority, or auto-detected the reboot and ejected the media... as I never hit this issue on VMware, VirtualBox or QEMU.

For Ubuntu (I just noticed the JSON file above), make sure you have these packages being installed via your config:

d-i pkgsel/include string curl openssh-server sudo sed linux-tools-$(uname -r) linux-cloud-tools-$(uname -r) linux-cloud-tools-common

and you probably need to run this command (assuming you're trying to log in as root to provision the box):

d-i preseed/late_command string                                                   \
        sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config

This is amazing advice, @mwhooker can this be added to the hyperv-iso documentation on packer.io to ensure success with this great tool and relieve frustration :)

Especially if you only want to install the Hyper-V plugins when building Hyper-V boxes.

There is a nifty tool for determining what hypervisor you are running on: virt-what. Example
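
A minimal sketch of how virt-what can be used to branch on the hypervisor (assumes the package is installed and the script runs as root; `hyperv` is the fact virt-what prints for Hyper-V guests):

```sh
#!/bin/sh
# virt-what prints one fact per line, e.g. "hyperv", "virtualbox", "vmware", "kvm"
if virt-what | grep -q '^hyperv$'; then
    echo "Hyper-V detected - install the integration daemons here"
fi
```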

You need to get the Hyper-V plugin running inside the VM during the install process or packer will never detect the IP and thus never connect.

If that is the case it should definitely be documented.

Wow, thanks for all the answers.

There is a nifty tool for determining what hypervisor you are running on: virt-what.

https://people.redhat.com/~rjones/virt-what/

I prefer dmidecode, as it has far fewer dependencies and is more generally available.

if [[ `dmidecode -s system-product-name` == "VirtualBox" ]]; then
    : # VirtualBox-specific steps go here
fi
if [[ `dmidecode -s system-manufacturer` == "Microsoft Corporation" ]]; then
    : # Hyper-V-specific steps (e.g. install the Hyper-V daemons) go here
fi
if [[ `dmidecode -s system-product-name` == "VMware Virtual Platform" ]]; then
    : # VMware-specific steps go here
fi
if [[ `dmidecode -s system-product-name` == "KVM" || `dmidecode -s system-manufacturer` == "QEMU" ]]; then
    : # QEMU/KVM-specific steps go here
fi

Or for those situations where dmidecode and awk aren't available, such as during an automated install process, all you really need is dmesg and grep. For example, with Debian I use:

d-i preseed/late_command string                                                   \
        sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config ; \
        dmesg | grep "Hypervisor detected: Microsoft HyperV" ; \
        if [ $? -eq 0 ]; then \
          chroot /target /bin/bash -c 'service ssh stop ; echo "deb http://deb.debian.org/debian jessie main" >> /etc/apt/sources.list ; apt-get update ; apt-get install hyperv-daemons' ; \
          eject /dev/cdrom ; \
        fi

I'm trying to install Ubuntu 16.04 using:

  • Hyper-V on Windows 10: 10.0.15063.447 (the version reported by [System.Environment]::OSVersion.Version)
  • Packer: 1.0.2

The gist contains:

  • build configuration file
  • Ubuntu preseed file
  • Packer log

The build is based on https://github.com/geerlingguy/packer-ubuntu-1604

The YouTube video

You can see that the installation gets stuck without any further output and never completes.

In the video, the waiting period before the timeout (about 40 minutes in total) was cut out between roughly 2'34" and 3'38".

@it-praktyk see my post above regarding an Ubuntu install on Hyper-V. You need to add the following to your pkgsel/include line:

linux-tools-$(uname -r) linux-cloud-tools-$(uname -r) linux-cloud-tools-common

That is the easiest way to get the Hyper-V daemon setup on Ubuntu during the install process, and should solve your problem.

Yes, I didn't mention it, but I tried that today as well.

Do you build images using Windows 10?

Yes.

The hard way to solve this problem is to open the virtual machine console using Hyper-V Manager, wait until it reboots, and then log in via the console. Once there, install the Hyper-V daemons manually, and packer should connect via SSH within 1 or 2 minutes. Note, you might need to manually enable the daemons using systemctl (it varies between distros, and I don't know whether they are enabled by default on Ubuntu).

I should add that if the daemons are running and you still can't connect, then you need to manually confirm SSH is working properly... so from the console, run ifconfig to determine the IP and see if you can log in using the credentials specified in the packer JSON config. It's possible a setting in sshd_config is blocking access. For example, password logins may be disabled, or direct root logins may be disabled.

If you can log in manually via the credentials in the JSON file, and you have confirmed the Hyper-V daemons are running (KVP and VSS), and packer still isn't connecting, let us know.
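
For example, on Ubuntu 16.04 the manual fix from the console looks roughly like this (the package and unit names are my assumption based on the stock Ubuntu archive; adjust for your kernel flavour):

```sh
# Install the Hyper-V tools that match the running kernel
sudo apt-get update
sudo apt-get install -y linux-cloud-tools-$(uname -r) linux-cloud-tools-common

# Enable and start the KVP daemon (used for IP reporting) and the VSS daemon
sudo systemctl enable --now hv-kvp-daemon.service hv-vss-daemon.service
```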

I don't think that is a problem specifically related to Hyper-V. We need a topic about how to support OSes that don't have built-in drivers/support for the hypervisor you have selected to use.

I have run into the problem of ejecting the CD-ROM as well (installing pfSense). During an installation process there may be multiple reboots (looking at you, Windows, with patches). The way to tackle that is to eject the CD from within the OS installation process.

Think of doing something like this:

"<wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10>",
"<wait><leftCtrlOn>c<leftCtrlOff>",
"<wait><enter>",
"<wait>clear<wait><enter>",
"<wait>cdcontrol eject && exit<wait><enter>",

For a real bastard of an install have a look at: https://github.com/taliesins/packer-baseboxes/blob/master/hyperv-pfsense-2.3.2.json

I was experiencing this same issue when trying to build RHEL 7.3 and Ubuntu.

In my case I found that I first had to ensure an external VM switch was already set up within Hyper-V, as packer would only create an internal one. This got Ubuntu working OK, but for RHEL I additionally had to install the Microsoft LIS drivers from https://www.microsoft.com/en-us/download/details.aspx?id=51612, as the built-in ones didn't seem to work.
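
For anyone setting that up by hand, an external switch can be created ahead of time with PowerShell; a minimal sketch (the adapter and switch names are placeholders, match them to your host):

```pwsh
# Find the physical adapter to bind the external switch to
Get-NetAdapter -Physical

# Create an external switch on that adapter and keep host connectivity
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

Then point the builder's `switch_name` at that switch so packer doesn't auto-create an internal one.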

For RHEL 7.3 you need the following in your Kickstart file:


reboot --eject

%post

# Create the vagrant user account.
/usr/sbin/useradd vagrant
echo "vagrant" | passwd --stdin vagrant

# Make the future vagrant user a sudo master.
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant

VIRT=`dmesg | grep "Hypervisor detected" | awk -F': ' '{print $2}'`
if [[ $VIRT == "Microsoft HyperV" ]]; then
    mount /dev/cdrom /media
    cp /media/media.repo /etc/yum.repos.d/media.repo
    printf "enabled=1\n" >> /etc/yum.repos.d/media.repo
    printf "baseurl=file:///media/\n" >> /etc/yum.repos.d/media.repo

    yum --assumeyes install eject hyperv-daemons
    systemctl enable hypervkvpd.service
    systemctl enable hypervvssd.service

    rm --force /etc/yum.repos.d/media.repo
    umount /media/
fi

%end

I started watching this thread with the hope packer would get better at detecting Hyper-V guest IP addresses (like it does with other providers), but it appears nobody is working on that, so I'm going to mute this topic. As such, if anybody else needs help getting packer to work with a different distro, please message me directly.

These issues with Hyper-V are specifically related not just to drivers being present, but to the daemons as well.

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows

Most of the "popular" distros now include the required drivers; however, they do not include the daemons by default. Instructions for installing and enabling the daemons are documented on the distribution-specific pages linked at the bottom of that doc. Once the daemons are installed and running, they will report their IPs and you can WinRM, PowerShell, or SSH to your heart's content.

As the implementation is distro specific, I agree this is not a packer problem, but it could very well be remedied by updating the hyperv-iso docs to direct users to the MS docs.
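
A quick way to verify from the host that the daemons are actually reporting, once the guest is up (the VM name is a placeholder):

```pwsh
# If the KVP daemon is running inside the guest, its addresses show up here
(Get-VMNetworkAdapter -VMName "packer-build").IPAddresses
```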

@ladar you can avoid all the mount madness if you force the network to be available in %post with

network --bootproto=dhcp --onboot=on --device=eth0

For some as-yet-undetermined reason, hyperv doesn't seem to initialize the network connection on its own during the installation; forcing it in the kickstart with --onboot=on seems to do the trick. The --device flag may be unnecessary.

@wickedviking The mount in the snippet above is RHEL specific, and is required for RHEL installations because the network repos aren't accessible until you register the machine with the RHN. If the machine is registered, you are correct, those commands aren't needed. For example with my CentOS Kickstart config I pull in the packages via the network.

As for your suggestion above, I don't believe "pointing" at the MS docs is sufficient. The hard part isn't installing the drivers/daemons, as you're correct that most distros include them. The hard part is getting Hyper-V builds to include the daemons during installation so that when the machine reboots, the provisioning process will execute automatically.

Notes on what's required for the various operating systems would be nice, but that would require quite a bit of work.

I think I am going to leave this thread running. 50% of the issues people seem to have are related to this topic.

As far as I can tell there is nothing we can do from Packer's side.

To add some complication to the mix, if I install the Hyper-V packages onto an Ubuntu 16.04 guest, I see a difference in behavior between two hosts:

  • Windows 10 host - everything works, Packer gets the IP address after Linux is installed and all is well.
  • Windows 2016 host - does not get the IP address until I shut down the guest and start it again. Restarting without a shutdown (e.g. reboot command) does not make the IP address appear on the host.

The second variation is a bit troublesome, as I cannot easily start it again from within the VM for obvious reasons. Yet Packer has no idea that anything is happening meanwhile, so there is no meaningful way to trigger it externally, either.

No one has mentioned that you can just change the timeout with something like "ssh_timeout": "20m".

Also, on CentOS 7.3 and 7.4, just `reboot --eject` in the kickstart file by itself is enough for me to avoid booting from the ISO on reboot.

@ladar did you ever get this to work for Alpine by any chance? I am stuck waiting for the SSH IP address too.

@tomconte yes, I got packer to build Hyper-V images for Alpine 3.5.2 and 3.6.2. Try:

vagrant init generic/alpine35

or

vagrant init generic/alpine36

At this point I'm building 19 distros for 4 different providers (including Hyper-V)... see:

https://app.vagrantup.com/generic

The last holdout was OpenBSD, which I didn't get working until about a month ago (when v6.2 was released).

I say this with the caveat that I'm currently only testing whether vagrant up works properly on the VirtualBox and libvirt providers. I haven't had time to script/run the vagrant provisioning process on Hyper-V yet. I also haven't automated the testing process for the VMWare images, as I don't have a spare license for the VMWare plugin which I can dedicate to the build server. As such your mileage may vary. I've noticed that sometimes packer will build the image using different virtual hardware than what vagrant automatically provisions, which is what led to issues with some of the boxes.

@tomconte as I recall the Alpine magic was in the boot command. Try this bit of JSON:

{
    "type": "hyperv-iso",
    "name": "generic-alpine36-hyperv",
    "vm_name": "generic-alpine36-hyperv",
    "output_directory": "output/generic-alpine36-hyperv",
    "boot_wait": "30s",
    "boot_command": [
        "root<enter><wait>",
        "ifconfig eth0 up && udhcpc -i eth0<enter><wait>",
        "wget http://{{ .HTTPIP }}:{{ .HTTPPort }}/generic.alpine36.vagrant.cfg<enter><wait>",
        "sed -i -e \"/rc-service/d\" /sbin/setup-sshd<enter><wait>",
        "printf \"vagrant\\nvagrant\\ny\\n\" | setup-alpine -f generic.alpine36.vagrant.cfg && ",
        "mount /dev/sda3 /mnt && ",
        "echo 'PasswordAuthentication yes' >> /mnt/etc/ssh/sshd_config && ",
        "echo 'PermitRootLogin yes' >> /mnt/etc/ssh/sshd_config && ",
        "chroot /mnt apk add hvtools && chroot /mnt rc-update add hv_fcopy_daemon default && ",
        "chroot /mnt rc-update add hv_kvp_daemon default && chroot /mnt rc-update add hv_vss_daemon default && ",
        "umount /dev/loop0 && umount /dev/sr0 && eject /dev/cdrom && reboot<enter>"
    ],
    "disk_size": 32768,
    "ram_size": 2048,
    "cpu": 2,
    "http_directory": "http",
    "iso_url": "https://mirror.leaseweb.com/alpine/v3.6/releases/x86_64/alpine-virt-3.6.2-x86_64.iso",
    "iso_checksum": "92c80e151143da155fb99611ed8f0f3672fba4de228a85eb5f53bcb261bf4b0a",
    "iso_checksum_type": "sha256",
    "ssh_username": "root",
    "ssh_password": "vagrant",
    "ssh_port": 22,
    "ssh_timeout": "3600s",
    "shutdown_command": "/sbin/poweroff",
    "generation": 1,
    "skip_compaction": false,
    "enable_secure_boot": false,
    "enable_mac_spoofing": true,
    "enable_dynamic_memory": false,
    "guest_additions_mode": "disable",
    "enable_virtualization_extensions": false
}

@ladar, where can I find the JSON files used to build the generic boxes?

Thank you in advance.

Dang, I wish Packer would do a better job helping Hyper-V users get set up for HTTP servers and preseeding. I tried allowing packer.exe through the Windows firewall, but I'm still seeing that the guest (Debian in my case) cannot connect to the HTTP server for preseeding.

@it-praktyk they are stored on a private git server. The ISOs are too large for GitHub, and since nobody has ever asked for them, I didn't think it worth the time to sanitize the repo and upload all of my files to GitHub. I'll attach the JSON file to this message, if that's all you're after.

generic-hyperv.json.txt

@ladar, thank you for sharing the file.

I'm very interested in cooperating on the sanitization process - even in the private repo for now.

I think that creating some kind of reference platform for repeatable builds would be valuable for the community.

If you are interested in my proposal please let me know.

@ladar, I want the sources too. I need the Alpine Linux sources because I want to build it for Parallels. Why not publish it on GitHub, without the ISOs?

@m-emelchenkov which sources? The shell scripts? Those are relatively boilerplate. If you meant the Alpine Linux sources, those are available at https://alpinelinux.org/

I'm working on adding a Parallels version. I just need to get my hands on a sufficiently fast Mac before it can happen.

As for why the repo isn't on GitHub, I'd need to sanitize the history before I could upload it, as early versions of my scripts contain tokens/serial numbers (since moved to a .credentialsrc file), etc.

@ladar Yes, I meant the shell scripts. I already created my own box for Alpine with Parallels. It is not uploaded to Vagrant Cloud yet, because I need testers first. It's here (including the .box binary): https://bitbucket.org/m-emelchenkov/vagrant-alpine.

I've also noticed a strange pattern in my Ubuntu 16.04 image builds: Hyper-V on Windows Server 2012 works great, but on Windows Server 2016 it fails.

I recall Ubuntu newer than 16.01 had some issues with the Hyper-V integration services crashing on first boot, leading to Packer not being able to get the IP address. I am stuck on creating images with 16.01 because of this.

What I found is that the behavior depends on whether Hyper-V has a pre-configured external virtual switch.

When Packer does not find one, it automatically creates a virtual switch of the "Internal network" type, and this leads to SSH getting stuck.

This was tested with Packer 1.2.0 in Windows 10 by this project: https://github.com/chef/bento/blob/master/centos/centos-7.4-x86_64.json
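
A quick way to see which switches exist and which type packer ended up with (run in an elevated PowerShell):

```pwsh
# Packer needs an External switch; the one it auto-creates shows up as Internal
Get-VMSwitch | Select-Object Name, SwitchType
```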

For those who asked... I finally got around to sanitizing the commit history for my templates, and removed all the large files (like RHEL ISOs), and various tokens/license keys. Should anyone be inclined, the repo is available at: https://github.com/lavabit/robox/ aka my robot box building system. Feel free to suggest improvements.

@shurick81 I noticed what you described as well. The docs make it clear you need an "external switch" or packer will create one, but the switch packer creates is configured as internal, and thus doesn't work properly. The external switch requirement is documented though:
https://www.packer.io/docs/builders/hyperv-iso.html#switch_name

And thus I considered it a separate issue from what normally prevents packer from discovering the guest IP address, which is the requirement that the guest have the "hyperv" guest tools/kernel modules.

Also note, that currently packer v1.2.4 and above no longer work properly with the FreeBSD/OpenBSD hyperv implementations. See this issue: https://github.com/hashicorp/packer/issues/6315

The update also broke the Ubuntu 18.04/18.10 install process if you don't include the following in your auto-install script:

d-i pkgsel/upgrade select full-upgrade

This happens because the kernel image on the ISOs doesn't seem to work properly, but is fixed by the "full-upgrade" which will pull down and install newer kernel and cloud tool packages that work properly.

Hello All,
I'm having issues with a CentOS guest. The installation completes but Packer keeps waiting for SSH to become available.

I'm running packer 1.3.1 with PowerShell, created the network, and also followed the tips suggested here.

PS C:\cygwin64\home\flmmartins\workspace\my-packer-templates> C:\Users\flmmartins\packer.exe build .\centos7-x86_64-hyperv.json
hyperv-iso output will be in this color.

Warnings for build 'hyperv-iso':

  • Hyper-V might fail to create a VM if there is not enough free memory in the system.

==> hyperv-iso: Creating build directory...
==> hyperv-iso: Retrieving ISO
hyperv-iso: Found already downloaded, initial checksum matched, no download needed: http://mirror.serverbeheren.nl/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso
==> hyperv-iso: Starting HTTP server on port 8638
==> hyperv-iso: Creating switch 'HyperVNAT' if required...
==> hyperv-iso: switch 'HyperVNAT' already exists. Will not delete on cleanup...
==> hyperv-iso: Creating virtual machine...
==> hyperv-iso: Enabling Integration Service...
==> hyperv-iso: Setting boot drive to os dvd drive C:\cygwin64\home\flmmartins\workspace\my-packer-templates\packer_cache\78c7586f1d53df7ffd07552c5f332442003e4f937d4949c9c97cf96bc42dbcbf.iso ...
==> hyperv-iso: Mounting os dvd drive C:\cygwin64\home\flmmartins\workspace\my-packer-templates\packer_cache\78c7586f1d53df7ffd07552c5f332442003e4f937d4949c9c97cf96bc42dbcbf.iso ...
==> hyperv-iso: Skipping mounting Integration Services Setup Disk...
==> hyperv-iso: Mounting secondary DVD images...
==> hyperv-iso: Configuring vlan...
==> hyperv-iso: Starting the virtual machine...
==> hyperv-iso: Attempting to connect with vmconnect...
==> hyperv-iso: Waiting 5s for boot...
==> hyperv-iso: Host IP for the HyperV machine: 192.168.178.14
==> hyperv-iso: Typing the boot command...
==> hyperv-iso: Waiting for SSH to become available...

JSON:
{
  "variables": {
    "iso_url": "http://mirror.serverbeheren.nl/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso",
    "iso_check_type": "sha1",
    "iso_check": "13675c6f74880e7ff3481b91bdaf925ce81bda8f",
    "vmlinuz_file": "/images/pxeboot/vmlinuz",
    "initrd_file": "/images/pxeboot/initrd.img",
    "ks_file": "centos7-x86_64/ks.cfg",
    "hyperv_switch": "HyperVNAT"
  },
  "builders": [
    {
      "type": "hyperv-iso",
      "vm_name": "CentOS75",
      "iso_urls": "{{user `iso_url`}}",
      "iso_checksum": "{{user `iso_check`}}",
      "iso_checksum_type": "{{user `iso_check_type`}}",
      "switch_name": "{{user `hyperv_switch`}}",
      "communicator": "ssh",
      "cpu": 1,
      "disk_size": 20480,
      "generation": 1,
      "headless": false,
      "ram_size": 1024,
      "output_directory": "PCENTOS",
      "boot_command": [
        " text {{user `vmlinuz_file`}} initrd={{user `initrd_file`}} inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{user `ks_file`}}"
      ],
      "http_directory": "http",
      "boot_wait": "5s",
      "ssh_timeout": "20m",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_port": 22,
      "shutdown_command": "sudo -S shutdown -P now"
    }
  ]
}

Kickstart:

# RHEL7 Base Box Kickstart for VirtualBox and Vagrant

install
cdrom
lang en_US.UTF-8
keyboard us
unsupported_hardware
text
skipx
network --bootproto dhcp
firewall --disabled
auth --useshadow --enablemd5
rootpw --iscrypted $1XAC8Ni/Z5cY
selinux --disabled
timezone Europe/Amsterdam
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet noipv6"
services --disabled iptables,ip6tables --enabled sshd

zerombr
clearpart --all --initlabel
autopart
firstboot --disabled
eula --agreed
services --enabled=NetworkManager,sshd
reboot --eject
user --name=vagrant --plaintext --password vagrant --groups=vagrant,wheel

%packages --ignoremissing --excludedocs
@Base
@Core
@Development Tools
@network-tools
openssh-clients
sudo
openssl-devel
readline-devel
zlib-devel
kernel-headers
kernel-devel
net-tools
vim
wget
curl
rsync
ansible

%end

%post

# Disable SELINUX per https://access.redhat.com/solutions/1237153

sed -i -e 's/\(^SELINUX=\)enforcing$/\1disabled/' /etc/selinux/config

yum update -y
echo "vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers

yum clean all

# Enable hyper-v daemons only if using hyper-v virtualization

VIRT=`dmesg | grep "Hypervisor detected" | awk -F': ' '{print $2}'`
if [[ $VIRT == "Microsoft HyperV" ]]; then
mount /dev/cdrom /media
cp /media/media.repo /etc/yum.repos.d/media.repo
printf "enabled=1\n" >> /etc/yum.repos.d/media.repo
printf "baseurl=file:///media/\n" >> /etc/yum.repos.d/media.repo

yum --assumeyes install eject hyperv-daemons
systemctl enable hypervkvpd.service
systemctl enable hypervvssd.service

rm --force /etc/yum.repos.d/media.repo
umount /media/

fi
%end

@flmmartins at some point over the last month, several of my Hyper-V builds started crashing during the post installation reboot. Shutting them down manually, and restarting fixed the issue. To preserve automation I had to add vga=792 to the kernel boot command. That fixed the issue. I then used the vga.sh script to remove that kernel parameter after updating the box. Feel free to look at my templates:

https://github.com/lavabit/robox/

Just an update on this issue. I've managed to overcome some of the issues people were facing through the use of a legacy network adapter (see #7128), and then overcame issues with the guest rebooting after install by changing the boot order (see #7147), leaving me with a VM booted and ready for connections. Unfortunately the lack of Hyper-V daemon support chronicled above is blocking further progress. I tried working around that issue using pre-known IP addresses and the ssh_host config key, but ran into issue #4825. See my full report on that here.

@ladar if we get #4825 fixed, will we eliminate the need for the Hyper-V daemons?

also, @ladar, how would you feel about me linking to your robox repo from our community tools page?

@SwampDragons the Hyper-V daemons are still needed to _autodetect_ a guest IP address, and that is obviously necessary when the guest IP address is unpredictable, such as with most DHCP configurations. What this fix does is give the user the ability to use a defined, hard-coded, predictable IP in the packer config file. That means the guest IP must be known before the guest is created and the build begins, such as with a static IP configuration or with DHCP reservations (see below). If a _predefined IP address is used_, then yes, the Hyper-V daemons are not required (with this patch).

BUT, I consider this a fall-back, workaround route that only the most determined should follow, which I was forced to tread while creating NetBSD and DragonflyBSD Hyper-V boxes. Those distros lack support for the Hyper-V virtual NICs (see the already merged legacy adapter pull request), and the Hyper-V daemons. So my only choice was to use this workaround. In my case, I predefine guest MAC addresses in the builder config, and then set up corresponding rules on my DHCP server so those MAC addresses get the same, predictable IP address every time I build the configuration. Unfortunately that means the config will fail if, say, a random person tries to build a NetBSD/DragonflyBSD box but doesn't realize they must create equivalent DHCP rules first. (JSON comments...!)

Of course I'm assuming packer strips that predefined MAC address off before packaging the vagrant box, but I haven't had time to check. If it doesn't, end users will of course have problems if they try to deploy multiple versions of the same NetBSD/DragonflyBSD image on their Hyper-V network.

Did I make sense?

@SwampDragons yes you are welcome to add the robox repo.

yep yep makes sense; thanks for the clarification.

I think the way I want to move forward on getting this issue closed is to 1) merge 4825, and 2) clearly document the need for the daemons and the affected operating systems.

Merge 4825? Isn't that an issue? As I understand it, there are stale/closed pull requests which have tried to find alternative ways to detect the guest IP, without relying on the Hyper-V daemons, but those attempts have all failed?

Yes, I agree the Hyper-V daemon issue needs better documentation. I think part of the problem is that some distros, namely Ubuntu, auto-detect Hyper-V, and then auto-install the daemons... which makes the existing documentation examples deceptively simple. A full write up, with workarounds for various distros, would be a big task though, as every distro required a different approach.

In general distros require the daemons to be installed separately, which varies in difficulty depending on the distro/installer. And that isn't well documented on the packer website. But it's also hard to describe. With some distros it's easy, just add a few script commands to the autoinstall config. But with some distros, it's much, much harder.

But of course, there are also some operating systems where the daemons just aren't available in any form. That was the case with NetBSD (there is a fork with experimental support, but it would require rebuilding the entire install ISO from source to use).

Hence why I finally relented, and sought the use of the ssh_host option. In my mind, that option "should" work, at least according to the documentation. And it does work with some of the builders. Just not with Hyper-V... at least not without my patch. That is why I view the ssh_host issue as an independent bug, separate from the larger Hyper-V daemon issue.

That said, based on the issues I've seen opened over the last couple of years, I think the boot ordering problem, and possibly the legacy network adapter issue, were causing a subset of the reported failures, which is why I referenced this issue in my write-ups. I think those, albeit minority, problems had nothing to do with the Hyper-V daemon issue, but got lumped in with it.

Bug profiling at its worst.

@SwampDragons I'm not aware of a PR which resolves the need for the Hyper-V daemons when auto-detecting the IP, but I could be wrong. All I know of are the workarounds, like those I put in my configs, which force the daemons to be installed, and/or use a predefined/static IP to avoid needing the daemons altogether. Of course the latter only worked once the ssh_host bug was fixed.

Sorry, I meant #7154, which provides a workaround (static IP), as you said.

@SwampDragons d'oh. I thought you were saying we shouldn't fix the ssh_host bug. Glad I was wrong!

nope, already merged it!

The sad part is that I don't think we can "solve" this daemon problem on the Packer side. We can document the need for daemons, but it's beyond the scope of our docs to provide intimate detail on how to run every operating system on every hypervisor. And it's definitely beyond the scope of the tool to install daemons on guest systems when they boot. So I think that documenting it well and providing a workaround is the best we're going to get.

@SwampDragons I agree. I just wanted to include my various fixes, so that people who hit the 'Waiting for SSH' issue realize it might not be the daemon issue, and/or know about the ssh_host workaround.

If anybody does decide to tackle a writeup, they are welcome to rip the relevant portions from my configurations, and use them as examples.

If someone decides to work on this issue again, I think the solution might be looking up the guest MAC address in the ARP table. I confirmed that a guest IP is present in the ARP table, even if the Hyper-V daemons are missing (see screenshot).

The possible drawbacks I can think of are: with this strategy the hypervisor, and thus packer would have problems detecting/handling IP address changes... in particular situations where a stale IP address is in the ARP table, but was reassigned, or situations where the IP is changed between guest reboots. (The former could be solved by checking, and removing the entry, if present, from the ARP table before the guest is booted.)

Just throwing around ideas.

[screenshot from 2019-01-21 23-51-58: the host's ARP table showing the guest's IP address]
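
A rough PowerShell sketch of that ARP-lookup idea, purely illustrative (the VM name is a placeholder, and it assumes the guest has already sent traffic so an entry exists in the neighbor cache):

```pwsh
# MAC address Hyper-V assigned to the guest NIC, e.g. "00155D010203"
$mac = (Get-VMNetworkAdapter -VMName "packer-build").MacAddress

# Look the guest up in the host's ARP/neighbor cache, ignoring separator differences
Get-NetNeighbor -AddressFamily IPv4 |
    Where-Object { ($_.LinkLayerAddress -replace '-', '') -eq $mac } |
    Select-Object -ExpandProperty IPAddress
```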

Another workaround for this problem is to create both internal and external vswitches, then share the external with the internal (ncpa.cpl -> right-click -> Properties -> Sharing).
Solution found here; hope that will help someone in the future :)

For me on Windows 10, the problems were the firewall and the fact that the Hyper-V Standardswitch was not identified and was thus treated as public.

This should go into the documentation.

Fix (run in Powershell as admin):
```pwsh
$VS = "Standardswitch"
$IF_ALIAS = (Get-NetAdapter -Name "vEthernet ($VS)").ifAlias
New-NetFirewallRule -Displayname "Allow incoming from $VS" -Direction Inbound -InterfaceAlias $IF_ALIAS -Action Allow
Set-NetConnectionProfile -InterfaceAlias $IF_ALIAS -NetworkCategory Private
```

Regarding the ISO not getting ejected on Generation 2 VMs, why not have a boot command that ejects the DVD? AFAIK there is also PowerShell to do the same: https://goodworkaround.com/2012/11/08/eject-dvd-iso-from-hyper-v-2012-using-powershell/

That should fix that issue for all old OSs where the installer cannot eject itself.
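
For reference, a hedged sketch of that host-side approach (the VM name is a placeholder; this detaches the ISO rather than ejecting a physical disc):

```pwsh
# Detach the ISO from every DVD drive attached to the VM
Get-VMDvdDrive -VMName "packer-build" | Set-VMDvdDrive -Path $null
```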

I hit this (again) while trying to create Photon OS VM using DHCP - and a hyperv-daemons package isn't available for Photon OS... which is kind of a show stopper for me :disappointed:

Is there some other way I can tell Packer what the guest VM IP is? I'm not sure if there is any way to communicate with the Packer client while it's building?

I don't mind some scripting, or even if there is a way to manually input the IP address, but I need some way to tell Packer what it is!

@cocowalla You can manually input the IP address using the ssh_host option: https://www.packer.io/docs/communicators/ssh.html#ssh_host but you'll need to make sure your preseed file sets up a static IP.
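
To illustrate, a minimal sketch of the preseed side for Debian/Ubuntu (the addresses are placeholders):

```
# Give the guest a fixed address instead of DHCP
d-i netcfg/disable_autoconfig boolean true
d-i netcfg/get_ipaddress string 192.168.1.50
d-i netcfg/get_netmask string 255.255.255.0
d-i netcfg/get_gateway string 192.168.1.1
d-i netcfg/get_nameservers string 192.168.1.1
d-i netcfg/confirm_static boolean true
```

Then set `"ssh_host": "192.168.1.50"` in the builder config so packer connects to that address instead of waiting to detect the guest IP.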

@SwampDragons as you mentioned though, that's only going to work for static IPs. I'm already aware of the ssh_host option, so I'm all good on that front, but DHCP is often required.

I was thinking more along the lines of some way to programmatically (or even interactively) provide the IP during the build, while Packer is waiting for the IP. I thought perhaps it might listen for commands over HTTP, for example.

Thankfully it turned out that Photon does include the Hyper-V daemon, it's just that they gave the package a different name (hyper-v).

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

