Packer: Feature Request: Create Google Compute Engine Images from scratch/import existing images

Created on 1 Jun 2016 · 7 Comments · Source: hashicorp/packer

  • Packer Version: Current master (commit sha 1256bab)
  • Host platform: Google Compute Engine

Currently, in packer/website/source/intro/platforms.html.md, the docs seem to imply that Packer uses PD snapshots instead of generating GCE private images from scratch. Is that still accurate? My understanding is that initially, it was not possible to create GCE Images from scratch, so you had to snapshot a persistent disk to generate the image, but that was changed at some point. Google's docs seem to imply that it is possible to create GCE Images from scratch: https://cloud.google.com/compute/docs/tutorials/building-images
(There is even a guide on automating image builds with Jenkins, Packer, and Kubernetes: https://cloud.google.com/solutions/automated-build-images-with-jenkins-kubernetes )

Is this just a case of docs being inaccurate or does Packer currently rely on PD snapshots instead of private image builds?

builder/googlecompute need-more-info question

Most helpful comment

@ido I think you are misunderstanding the role of a builder in Packer. The GCE builder should run on GCE. What you are describing sounds like a post-processor that imports images into GCE similar to amazon-import.

All 7 comments

This is a great question. To carry it a little bit further: does Packer support this?

https://cloud.google.com/compute/docs/images/import-existing-image

You can import existing Amazon AMIs, VirtualBox images, and raw disk images as described in the URL above; the images themselves should be built according to these guidelines:

https://cloud.google.com/compute/docs/tutorials/building-images

The Packer Google Compute Engine image builder should be modified to build raw disk images, VirtualBox images, or Amazon AMIs and then upload them to Google Cloud Storage for use in GCE, as described in the first link above. Retooling the Amazon AMI or VirtualBox builders and adding a GCS upload step seems like the easiest route.
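For reference, the manual flow that Google's import guide describes looks roughly like this (the bucket, file, and image names below are made up for illustration; the raw disk must be named disk.raw inside the tarball):

```sh
# Package a locally built raw disk for GCE import. The file inside the
# tarball must be named disk.raw; -S keeps the sparse file small.
tar -Sczf my-image.tar.gz disk.raw

# Upload to a Cloud Storage bucket (hypothetical bucket name).
gsutil cp my-image.tar.gz gs://my-bucket/

# Register the uploaded tarball as a GCE image.
gcloud compute images create my-image \
    --source-uri gs://my-bucket/my-image.tar.gz
```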

@ido I think you are misunderstanding the role of a builder in Packer. The GCE builder should run on GCE. What you are describing sounds like a post-processor that imports images into GCE similar to amazon-import.

Builders create the images from templates. Let me elaborate: until recently, I do not think this new way of uploading/importing pre-existing images to GCE existed, which is why the builder ran on GCE - snapshotting PDs on GCE was originally the only way to create a new image. (There was no notion of images like Amazon AMIs that can be uploaded; you had to derive a new image from an existing image's PD snapshot.) That has changed, and the GCE builder should change with it (in my opinion). Now you can build your image anywhere, upload it to GCS, and import it into GCE, rather than having to overwrite one of the existing-GCE-image-based persistent disks with your filesystem, snapshot it, save the snapshot, and import it as an image as before.

Why create GCE Images on GCE (which is just a matter of running through templates in a VM and then saving the VM disk as a RAW/OVF/QCOW/VMDK/etc. file), when they can be created locally and then uploaded to GCS and imported as GCE images from there (with the uploading step perhaps happening in a GCS uploader post-processor, as you suggest)? Besides the fact that building the image on GCE is less secure, adds complexity, and requires a network connection, there is a simple cost reason: to create images on GCE you pay GCE compute costs + PD/snapshot costs + GCS storage costs, whereas you can now just create the image (run the scripts/templates) locally in VBox/QEMU and upload the resulting disk image to GCS, paying GCS storage costs only.

This cost can become significant if you build your images in CI or CD (or a git post-receive hook) to generate immutable images. (Doing the image build on GCE is also more prone to errors in my opinion, since there are significantly more moving parts to running a GCE instance off of an existing PD than to spawning a local VM or running scripts locally with a file/loopback device mounted.)
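As a sketch of what that local CI step could look like (the template name and the use of Packer's existing qemu builder are assumptions on my part, not anything Packer ships for GCE today):

```sh
# Hypothetical CI step: build the image entirely in a local QEMU VM using
# Packer's existing qemu builder, so no GCE instance, PD, or snapshot is
# ever created. gce-local.json is an assumed template that outputs a raw
# disk named disk.raw.
packer build -var "version=$(git rev-parse --short HEAD)" gce-local.json

# ...then package and upload the result as in the import example above;
# only the GCS object is billed from here on.
```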

If you are simply suggesting that I build the GCE-destined image as a VirtualBox image and then upload it in post-processing as mentioned above, then each user of that "GCE local image builder" (aka the VBox image builder) would have to handle all the GCP-specific details described in the first link in the OP, including installing the GCP image packages into the image... It seems more appropriate to me to add a builder that handles all of this locally and then uploads to GCS in post-processing.

Maybe instead of replacing the GCE PD-snapshot-based builder (which spins up an actual instance and snapshots a PD), we should add a new "GCE local" builder?

FYI @ido - are you sure that GCE supports VMDK etc.? That link https://cloud.google.com/compute/docs/images/import-existing-image doesn't say anything about that, and per this it seems that it supports raw images only, so there would need to be a conversion step if you're working with VM disk formats.

@jcrben yes, I was assuming the tool would be capable of calling qemu-img or similar to convert the image.
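For illustration, a conversion step along these lines (file names assumed) would bridge VM disk formats to the raw format the import path expects:

```sh
# Convert a VMware/VirtualBox disk to a raw image with qemu-img; GCE's
# import path accepts raw disks only. Use -f vdi for VirtualBox disks.
qemu-img convert -f vmdk -O raw source-disk.vmdk disk.raw

# Repackage for upload; the raw file must be named disk.raw in the tarball.
tar -Sczf converted-image.tar.gz disk.raw
```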

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
