We reference local VMware ISOs by using `iso_url`, `remote_cache_directory`, and `remote_host`. We just upgraded to the latest Packer release, 1.4.5, and now this no longer works. We see this error in the Jenkins build:

```
16:20:08 ==> vmware-iso: Retrieving ISO
16:20:08 ==> vmware-iso: Trying ubuntu-16.04.6-netboot-amd64.iso
16:20:08 ==> vmware-iso: Trying ubuntu-16.04.6-netboot-amd64.iso
16:20:08 ==> vmware-iso: Download failed source path error: stat /home/jenkins/workspace/packer-build/ubuntu-16.04.6-netboot-amd64.iso: no such file or directory
16:20:08 ==> vmware-iso: error downloading ISO: [source path error: stat /home/jenkins/workspace/packer-build/ubuntu-16.04.6-netboot-amd64.iso: no such file or directory]
```

It appears that Packer is trying to find the ISO in the Jenkins workspace.
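That path is just the Jenkins job's working directory, so the relative `iso_url` seems to be resolved against the build machine's current directory. The failing stat can be reproduced by hand (the workspace path is taken from the log above):

```shell
cd /home/jenkins/workspace/packer-build
stat ubuntu-16.04.6-netboot-amd64.iso
# stat: cannot stat 'ubuntu-16.04.6-netboot-amd64.iso': No such file or directory
```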
Create a builder that uses `vmware-iso` and references a local ISO:

```json
...
"type": "vmware-iso",
"remote_cache_directory": "{{ user `vmware_remote_cache_directory` }}",
"remote_host": "{{ user `remote_host` }}",
"iso_url": "ubuntu-16.04.6-netboot-amd64.iso",
...
```

vars:

```json
"vmware_remote_cache_directory": "iso",
"vmware_remote_type": "esx5",
"vmware_remote_username": "root",
"remote_host": "{{ env `REMOTE_HOST` }}",
```
### Packer version
1.4.5
### Simplified Packer Buildfile

```json
{
  "boot_command": [
    "<tab><wait>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "linux ",
    "console-setup/ask_detect=false ",
    "keyboard-configuration/layout=USA ",
    "keyboard-configuration/variant=USA ",
    "locale=en_US.UTF-8 ",
    "priority=critical ",
    "url=http://{{ user `vmware_http_host` }}:{{ .HTTPPort }}/ubuntu-16.04-amd64.preseed.cfg ",
    "initrd=initrd.gz ",
    "--- ",
    "cgroup_enable=memory ",
    "ipv6.disable={{ user `vmware_ipv6_disable` }}",
    "swapaccount=1",
    "<enter>"
  ],
  "boot_key_interval": "{{ user `vmware_boot_key_interval` }}",
  "cpus": "{{ user `vmware_cpus` }}",
  "disk_adapter_type": "{{ user `vmware_disk_adapter_type` }}",
  "disk_size": "{{ user `vmware_disk_size` }}",
  "disk_type_id": "{{ user `vmware_disk_type_id` }}",
  "format": "{{ user `vmware_format` }}",
  "guest_os_type": "ubuntu-64",
  "http_directory": "{{ user `vmware_http_directory` }}",
  "http_port_max": "{{ user `vmware_http_port` }}",
  "http_port_min": "{{ user `vmware_http_port` }}",
  "iso_checksum": "",
  "iso_checksum_type": "none",
  "iso_url": "ubuntu-16.04.6-netboot-amd64.iso",
  "memory": "{{ user `vmware_memory` }}",
  "network_adapter_type": "{{ user `vmware_network_adapter_type` }}",
  "output_directory": "{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}",
  "remote_cache_directory": "{{ user `vmware_remote_cache_directory` }}",
  "remote_host": "{{ user `remote_host` }}",
  "remote_password": "{{ user `remote_password` }}",
  "remote_type": "{{ user `vmware_remote_type` }}",
  "remote_username": "{{ user `vmware_remote_username` }}",
  "shutdown_command": "sudo shutdown -P now",
  "skip_validate_credentials": "{{ user `vmware_skip_validate_credentials` }}",
  "ssh_private_key_file": "{{ user `vmware_ssh_private_key_file` }}",
  "ssh_timeout": "{{ user `vmware_ssh_timeout` }}",
  "ssh_username": "{{ user `vmware_ssh_username` }}",
  "type": "vmware-iso",
  "version": "{{ user `vmware_version` }}",
  "vm_name": "{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}",
  "vmdk_name": "{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}",
  "vmx_data": {
    "annotation": "Packer version: {{ packer_version }}|0ATemplate creation time: {{ isotime \"2006.01.02 15:04:05\" }}|0ATemplate created by: SRE",
    "ethernet0.networkName": "{{ user `vmware_network_name` }}"
  },
  "vmx_data_post": {
    "bios.hddorder": ""
  },
  "vmx_remove_ethernet_interfaces": "{{ user `vmware_vmx_remove_ethernet_interfaces` }}",
  "vnc_disable_password": "{{ user `vmware_vnc_disable_password` }}"
}
```
### Operating system and Environment details

Docker container, Ubuntu Bionic

### Log Fragments and crash.log files

https://gist.github.com/DI-DaveGoodine/59cdecc714e3a683bf3863fb0c45d7df
Thanks for reaching out. What version were you using before the upgrade?
Glancing at the builder, this code hasn't been touched in a long time, so before I get too deep into reproducing this...
If I remember correctly, we've always required the ISO to be cached locally before we check whether it's in the remote cache. That's probably not the best UX, but it is, I think, how we've done things for at least the last several years, so it wouldn't technically be a regression. Did you once have a packer_cache on Jenkins where this ISO was stored that you recently removed?

If you did, and having the ISO locally means that the build succeeds without re-uploading the ISO to the remote instance, I think we can consider this a feature request ("please don't force me to download the ISO if it's already uploaded to the remote host") rather than a regression.
Hello,
We were using v1.3.3 before switching to v1.4.5.
I do not believe we have used a packer_cache on Jenkins. Our build runs in the context of an ephemeral Docker container, created on demand by the master when the Packer build is triggered. It then runs terraform apply to create a new VMware VM, which becomes the remote_host for the vmware-iso build.
Is the packer_cache something that resides on the jenkins master?
Hmm, and nothing about your jenkins/docker setup changed when you made the Packer upgrade?
I'm not sure _how_ yet, but this may be a side effect of https://github.com/hashicorp/packer/pull/6999. I'm confused about why, because even in v1.3.3 we tried to download the provided URL to the build machine.
Yes, packer_cache would be on your Jenkins host. If you provide a URL to a file, we download it and then cache it in a directory that's either `./packer_cache`, relative to your current working directory, or one set by you via the environment variable PACKER_CACHE_DIR. The downloaded file is stored under a name that's a hash of the original URL, and in every subsequent build we check that cache for the matching hash to see whether the file has already been downloaded, so we can skip downloading it again.
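To make that concrete, here is roughly what the cache looks like after a build that downloads from a URL. This is a sketch: the directory, template name, and hash value are illustrative, not real output.

```shell
export PACKER_CACHE_DIR=/var/cache/packer   # optional; default is ./packer_cache
packer build template.json
ls "$PACKER_CACHE_DIR"
# 0fd47e6dca6e36cc0b4b4a5e6cb7b44bbbbf2e14.iso  <- name is a hash of the source URL,
# so the next build with the same iso_url finds it and skips the download
```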
In your template's case, we're not involving a packer_cache -- Packer is interpreting what you're providing in `iso_url` as a filename for something stored on your local filesystem, in the current working directory of your build machine. Once found, Packer then checks for it on the vSphere machine, and if it isn't there, we upload it to the `remote_cache_directory` on your remote machine too. But the step where your build is failing has always existed; only the details of our implementation have changed between the two versions.
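In shell terms, the sequence described above is roughly the following. This is only a sketch: the datastore path and the use of ssh/scp are assumptions for illustration, not Packer's actual mechanism.

```shell
ISO=ubuntu-16.04.6-netboot-amd64.iso

# 1. Resolve the bare filename against the build machine's cwd --
#    this is the stat that fails in the report above:
test -f "./$ISO" || { echo "source path error: stat ./$ISO" >&2; exit 1; }

# 2. Only then check the remote cache on the ESXi host, uploading if absent
#    (remote path assumed for illustration; Packer derives it from remote_cache_directory):
ssh "root@$REMOTE_HOST" "test -f /vmfs/volumes/datastore1/iso/$ISO" \
  || scp "./$ISO" "root@$REMOTE_HOST:/vmfs/volumes/datastore1/iso/"
```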
The only change was rebuilding the Docker image used to make the Jenkins builder:

```dockerfile
FROM ubuntu:bionic

ENV ARCH="amd64" JENKINS_HOME="/home/jenkins" PACKER_VERSION="1.4.5" TERRAFORM_VERSION="0.11.13"

RUN apt update && \
    apt install --yes \
        busybox \
        git \
        jq \
        openjdk-8-jre \
        openssh-server \
        unzip \
        wget && \
    rm --force --recursive /var/lib/apt/lists/* && \
    wget --output-document - --quiet https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_${ARCH}.zip | busybox unzip -d /bin - && \
    wget --output-document - --quiet https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${ARCH}.zip | busybox unzip -d /bin - && \
    wget --quiet http://ausvf-nexus01v.na.drillinginfo.com:8081/nexus/content/repositories/thirdparty/com/vmware/ovftool/4.3.0/ovftool-4.3.0.zip && \
    bash ovftool-4.3.0.zip --console --eulas-agreed --required && \
    rm --force ovftool-4.3.0.zip && \
    chmod +x /bin/packer /bin/terraform && \
    ssh-keygen -A && \
    useradd --create-home --password asdfasdfasdfasdf --shell /bin/bash jenkins && \
    sed --in-place 's|session required pam_loginuid.so|session optional pam_loginuid.so|g' /etc/pam.d/sshd && \
    mkdir --parents /var/run/sshd

COPY gitconfig ${JENKINS_HOME}/.gitconfig
COPY ssh ${JENKINS_HOME}/.ssh

RUN chown -R jenkins:jenkins ${JENKINS_HOME} && \
    chmod 600 ${JENKINS_HOME}/.ssh/* && \
    chmod 700 ${JENKINS_HOME}/.ssh

EXPOSE 22 8275

CMD ["/usr/sbin/sshd","-D"]
```
So, probably the bionic image changed, as well as some packages.
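If base-image drift is the suspect, one way to rule it out in future rebuilds (a sketch, not part of the build above) is to record the digest that ubuntu:bionic currently resolves to and pin the Dockerfile to it:

```shell
docker pull ubuntu:bionic
docker inspect --format '{{index .RepoDigests 0}}' ubuntu:bionic
# then pin the Dockerfile's FROM line to ubuntu:bionic@sha256:<that digest>
# so rebuilds stop silently picking up new base images
```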
@SwampDragons This issue https://github.com/hashicorp/packer/issues/7306#issuecomment-471004518 may be of use. This used to be all that was required for using in-place ISOs for the VMware builder.
That comment references the PR (https://github.com/hashicorp/packer/pull/5165) that implemented that feature.
Yeah, but looking at those logs:

```
==> vmware-iso: Retrieving ISO
vmware-iso: Using file in-place: file://./CentOS-7-x86_64-NetInstall-1810.iso
==> vmware-iso: Remote cache was verified skipping remote upload...
```

the ISO was present on the build machine.
The build container (the Dockerfile that Dave pasted above) does not have any of the ISOs baked in.
The only place I could find mention of that string using tag v1.3.3 was here: https://github.com/hashicorp/packer/blob/v1.3.3/common/step_download.go#L163
We just upgraded from 1.3.4 to 1.5.4 and are seeing the same issue. Our remote ESX vmware-iso templates also have `iso_url` set to local file system paths that don't exist. In 1.3.4, Packer ignored these because it verified that the ISO already existed on the remote datastore. In 1.5.4, Packer attempts to "download" the file to the packer cache (which seems a bit unnecessary when it's already on a local file system, I might add), but since the file doesn't exist, this fails.

I tried to work around this by creating dummy 0-byte ISO files so that they exist locally, but Packer then errors while checksumming the dummy ISO file, even though we have `iso_checksum_type` set to `none`. This appears to be another bug: `iso_checksum_type=none` seems to be ignored if `iso_checksum` is populated.
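For reference, the attempted workaround looked roughly like this (reconstructed from the description above; `template.json` is a stand-in for the real template):

```shell
touch ubuntu-16.04.6-netboot-amd64.iso   # zero-byte placeholder so the local stat succeeds
packer build template.json
# 1.5.4 then checksums the empty placeholder and errors out, even though
# the template sets "iso_checksum_type": "none"
```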
Hi! I think https://github.com/hashicorp/packer/pull/9584 may have solved this -- do you think you could try Packer v1.6.1 and let me know if this is still an issue?
I was able to reproduce this and confirm that this is no longer a problem on v1.6.1 🎉
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.