I am using the packer:light Docker image from Docker Hub. The builder type is amazon-ebs. The template includes:
"shutdown_behavior": "terminate",
"force_deregister": true,
"force_delete_snapshot": true,
After a first, successful build (no deletion necessary), the second build throws an error:
```
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Deregistered AMI XXXXXXXXX, id: ami-XXXXXXXX
==> amazon-ebs: Deleted snapshot: snap-XXXXXXXXXXXXXXXXXX
==> amazon-ebs: Error deleting existing snapshot: InvalidParameter: 1 validation error(s) found.
==> amazon-ebs: - missing required field, DeleteSnapshotInput.SnapshotId.
==> amazon-ebs:
==> amazon-ebs: Terminating the source AWS instance...
```
The AMI and the snapshot are successfully deleted (despite the error indicating otherwise), so the next build is green again. Running without `force_delete_snapshot` works, but of course leaves the snapshots lying around. The short-term solution is to run the job twice, but that also means paying Amazon twice for the build instance.
The error probably originates here: `builder/amazon/common/step_deregister_ami.go`
The Go SDK documentation is here: http://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#EC2.DeleteSnapshot
I have no Go experience, but from looking at the code and the SDK docs, the only idea I have is that `b.Ebs.SnapshotId` is not of type `aws.String`. But then again, the snapshot IS deleted, so I don't really know.
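To make my guess concrete, here is a minimal sketch of what I imagine that step does. This is not the actual Packer source; the function shape and the `DescribeImages` call in `main` are my assumptions, based only on the SDK docs linked above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteImageSnapshots mimics what the deregister step presumably does:
// walk the AMI's block device mappings and delete the snapshot behind
// each one. If any mapping carries an Ebs block whose SnapshotId is nil
// or empty, the SDK's client-side validation fails with exactly the
// error from the log above, before any API call is made.
func deleteImageSnapshots(conn *ec2.EC2, image *ec2.Image) error {
	for _, bdm := range image.BlockDeviceMappings {
		if bdm.Ebs == nil {
			continue // ephemeral device, no snapshot behind it
		}
		_, err := conn.DeleteSnapshot(&ec2.DeleteSnapshotInput{
			SnapshotId: bdm.Ebs.SnapshotId, // *string; nil or "" trips validation
		})
		if err != nil {
			return fmt.Errorf("Error deleting existing snapshot: %s", err)
		}
		fmt.Printf("Deleted snapshot: %s\n", aws.StringValue(bdm.Ebs.SnapshotId))
	}
	return nil
}

func main() {
	sess := session.Must(session.NewSession())
	conn := ec2.New(sess)
	// ami-XXXXXXXX is a placeholder, as in the log above.
	out, err := conn.DescribeImages(&ec2.DescribeImagesInput{
		ImageIds: []*string{aws.String("ami-XXXXXXXX")},
	})
	if err != nil || len(out.Images) == 0 {
		fmt.Println("DescribeImages failed:", err)
		return
	}
	if err := deleteImageSnapshots(conn, out.Images[0]); err != nil {
		fmt.Println(err)
	}
}
```

If one of my two block device mappings ends up without a snapshot ID, a loop like this would still try to delete it, which would explain why the first snapshot is reported as deleted right before the validation error.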
I'm unable to reproduce this. Are you mounting any extra volumes? Please post a minimal but complete json that reproduces this.
Sorry for the delay, I had to trim down our config. Yes, I am using 2 volumes in my AMIs. We use a template approach to configure Packer:
template.json
```json
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "{{user `region`}}",
    "source_ami": "{{user `source_ami`}}",
    "vpc_id": "{{user `vpc_id`}}",
    "subnet_id": "{{user `subnet_id`}}",
    "security_group_id": "{{user `security_group_id`}}",
    "launch_block_device_mappings": [{
      "device_name": "{{user `launch_device_name`}}",
      "volume_size": "{{user `launch_volume_size`}}",
      "volume_type": "{{user `launch_volume_type`}}",
      "delete_on_termination": true
    }],
    "ami_block_device_mappings": [{
      "device_name": "{{user `second_device_name`}}",
      "volume_size": "{{user `second_volume_size`}}",
      "volume_type": "{{user `second_volume_type`}}",
      "delete_on_termination": true
    }],
    "instance_type": "{{user `instance_type`}}",
    "ami_name": "{{user `ami_name`}}",
    "ebs_optimized": true,
    "communicator": "ssh",
    "ssh_username": "ec2-user",
    "ssh_private_ip": true,
    "shutdown_behavior": "terminate",
    "force_deregister": true,
    "force_delete_snapshot": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [ "sudo yum -y update" ]
  }]
}
```
Here is my value JSON; you probably need to adjust at least the vpc and subnet:
values.json
```json
{
  "region": "us-west-2",
  "source_ami": "ami-8ca83fec",
  "vpc_id": "vpc-XXXXXXXX",
  "subnet_id": "subnet-XXXXXXXX",
  "security_group_id": "sg-XXXXXXXX",
  "launch_device_name": "/dev/xvda",
  "launch_volume_size": "30",
  "launch_volume_type": "gp2",
  "second_device_name": "/dev/xvdb",
  "second_volume_size": "8",
  "second_volume_type": "gp2",
  "instance_type": "c4.large",
  "ami_name": "packer-issue-4817"
}
```
I am using your Docker image to run Packer; you need to supply AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.
run.sh
```sh
docker pull hashicorp/packer:light
docker run --rm -t -v $PWD:/workdir/ -w /workdir/ \
  -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  hashicorp/packer:light build \
  -var-file=/workdir/values.json \
  /workdir/template.json
```
@mwhooker let me know if you need more details to reproduce this. Thx.
Thanks for updating the json. I haven't had a chance to look into this yet; will let you know what I find. I think this is probably related to the use of launch block device mappings.
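If that's the case, a guard along these lines in the deregister step should avoid handing a nil or empty SnapshotId to the API. An untested sketch, not a committed fix; the function shape is assumed:

```go
package common // sketch against builder/amazon/common, not a real patch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// Skip mappings that have no snapshot behind them instead of passing a
// nil or empty SnapshotId to DeleteSnapshot, which fails the SDK's
// client-side validation before any API call is made.
func deleteImageSnapshots(conn *ec2.EC2, image *ec2.Image) error {
	for _, bdm := range image.BlockDeviceMappings {
		if bdm.Ebs == nil || aws.StringValue(bdm.Ebs.SnapshotId) == "" {
			continue // ephemeral device or mapping without a snapshot
		}
		if _, err := conn.DeleteSnapshot(&ec2.DeleteSnapshotInput{
			SnapshotId: bdm.Ebs.SnapshotId,
		}); err != nil {
			return err
		}
	}
	return nil
}
```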
ahh - that actually makes sense - I'll try to work around this by creating an image with a large enough root partition so that I can skip the launch_block_device_mappings part
@mwhooker or @freme What was the solution to this? I have a similar issue.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.