Affected versions: Packer 1.0.1 and above
Host platform: Ubuntu 16.04 LTS
Builder: amazon-chroot
Provisioner: ansible-local with Ansible v2.3.1.0
The "default extra variables" feature added in Packer v1.0.1 causes the ansible-local provisioner to fail when an --extra-vars argument is specified in the extra_arguments configuration option.
This is the configuration that caused our build to fail:
"provisioners": [
{
"type": "ansible-local",
"command": "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook",
"playbook_dir": "{{template_dir}}/../ansible",
"playbook_file": "{{template_dir}}/../ansible/base.yml",
"staging_directory": "/tmp/packer-provisioner-ansible-local",
"inventory_groups": "ec2,packer",
"extra_arguments": [
"--extra-vars",
"ec2_region={{user `aws_region`}}",
"--tags=install,package",
"-vvv"
]
}
]
The output is as follows:
==> base: Provisioning with Ansible...
base: Uploading Playbook directory to Ansible staging directory...
base: Creating directory: /tmp/packer-provisioner-ansible-local/base
base: Uploading main Playbook file...
base: Uploading inventory file...
base: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/base && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/base/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base/base.yml --extra-vars "packer_build_name=base packer_builder_type=amazon-chroot packer_http_addr=" --extra-vars ec2_region=eu-west-1 --tags=install,package -vvv -c local -i /tmp/packer-provisioner-ansible-local/base/packer-provisioner-ansible-local759366578
base: [WARNING]: provided hosts list is empty, only localhost is available
base:
base: PLAY [all] *********************************************************************
base: skipping: no hosts matched
The problem occurs because we are specifying the --extra-vars command-line argument as part of the extra_arguments configuration. From v1.0.1 onwards, Packer inserts an additional --extra-vars argument for the "default extra variables", which means that two --extra-vars arguments are passed to Ansible.
I never got to the bottom of exactly why this causes the failure.
Note that the documentation at https://www.packer.io/docs/provisioners/ansible-local.html suggests it is possible to specify --extra-vars as part of extra_arguments, and this has worked well for us with earlier versions of Packer.
Workaround: Downgrade to Packer v1.0.0. The above configuration works for us in v1.0.0.
Suggested fix: Provide an extra_vars
configuration option, which is a hash of extra variables, similar to how it's done in Vagrant. Merge the packer_build_name
, packer_builder_type
and packer_http_addr
variables into that hash, and use the hash to build the --extra-vars
argument passed to Ansible.
The code that adds the --extra-vars argument is in provisioner/ansible-local/provisioner.go at line 297. This was added via #4821.
Hmm, I'm not sure I agree with the analysis. There is no problem with having multiple --extra-vars arguments.
Can you capture the log line beginning with base: Executing Ansible: when running Packer v1.0.0?
Hey Rickard, that confused me as well, but it's the only difference I could spot between the builds that have worked and the ones that didn't. I even checked the contents of the inventory files and they looked OK.
I've just triggered a new build under Packer v1.0.0, and we get this output:
==> base: Provisioning with Ansible...
base: Uploading Playbook directory to Ansible staging directory...
base: Creating directory: /tmp/packer-provisioner-ansible-local
base: Uploading main Playbook file...
base: Uploading inventory file...
base: Executing Ansible: cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base.yml --extra-vars ec2_region=eu-west-1 --tags=install,package -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local635446634
base:
base: PLAY [all] *********************************************************************
base:
base: TASK [Gathering Facts] *********************************************************
base: ok: [127.0.0.1]
etc.
It shouldn't make any difference, but we spin up EC2 instances as build servers when we need them, so this output is from an EC2 instance that started this morning. The one that generated yesterday's output was terminated last night.
The build with Packer v1.0.0 finished successfully. I updated Packer to v1.0.1 and re-ran the build on the same EC2 instance. It failed again:
==> base: Provisioning with Ansible...
base: Uploading Playbook directory to Ansible staging directory...
base: Creating directory: /tmp/packer-provisioner-ansible-local
base: Uploading main Playbook file...
base: Uploading inventory file...
base: Executing Ansible: cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base.yml --extra-vars "packer_build_name=base packer_builder_type=amazon-chroot packer_http_addr=" --extra-vars ec2_region=eu-west-1 --tags=install,package -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local371775838
base: [WARNING]: provided hosts list is empty, only localhost is available
base:
base:
base: PLAY [all] *********************************************************************
base: skipping: no hosts matched
The output of packer --version is:
2017/09/13 09:47:04 Detected home directory from env var: /home/ubuntu
2017/09/13 09:47:04 Using internal plugin for docker
2017/09/13 09:47:04 Using internal plugin for file
2017/09/13 09:47:04 Using internal plugin for null
2017/09/13 09:47:04 Using internal plugin for parallels-pvm
2017/09/13 09:47:04 Using internal plugin for vmware-iso
2017/09/13 09:47:04 Using internal plugin for vmware-vmx
2017/09/13 09:47:04 Using internal plugin for alicloud-ecs
2017/09/13 09:47:04 Using internal plugin for oneandone
2017/09/13 09:47:04 Using internal plugin for profitbricks
2017/09/13 09:47:04 Using internal plugin for qemu
2017/09/13 09:47:04 Using internal plugin for amazon-ebssurrogate
2017/09/13 09:47:04 Using internal plugin for googlecompute
2017/09/13 09:47:04 Using internal plugin for parallels-iso
2017/09/13 09:47:04 Using internal plugin for triton
2017/09/13 09:47:04 Using internal plugin for virtualbox-iso
2017/09/13 09:47:04 Using internal plugin for amazon-instance
2017/09/13 09:47:04 Using internal plugin for amazon-ebs
2017/09/13 09:47:04 Using internal plugin for amazon-ebsvolume
2017/09/13 09:47:04 Using internal plugin for azure-arm
2017/09/13 09:47:04 Using internal plugin for cloudstack
2017/09/13 09:47:04 Using internal plugin for digitalocean
2017/09/13 09:47:04 Using internal plugin for hyperv-iso
2017/09/13 09:47:04 Using internal plugin for openstack
2017/09/13 09:47:04 Using internal plugin for amazon-chroot
2017/09/13 09:47:04 Using internal plugin for virtualbox-ovf
2017/09/13 09:47:04 Using internal plugin for windows-restart
2017/09/13 09:47:04 Using internal plugin for file
2017/09/13 09:47:04 Using internal plugin for shell
2017/09/13 09:47:04 Using internal plugin for shell-local
2017/09/13 09:47:04 Using internal plugin for ansible
2017/09/13 09:47:04 Using internal plugin for chef-solo
2017/09/13 09:47:04 Using internal plugin for converge
2017/09/13 09:47:04 Using internal plugin for powershell
2017/09/13 09:47:04 Using internal plugin for salt-masterless
2017/09/13 09:47:04 Using internal plugin for windows-shell
2017/09/13 09:47:04 Using internal plugin for puppet-server
2017/09/13 09:47:04 Using internal plugin for ansible-local
2017/09/13 09:47:04 Using internal plugin for chef-client
2017/09/13 09:47:04 Using internal plugin for puppet-masterless
2017/09/13 09:47:04 Using internal plugin for amazon-import
2017/09/13 09:47:04 Using internal plugin for vagrant-cloud
2017/09/13 09:47:04 Using internal plugin for docker-import
2017/09/13 09:47:04 Using internal plugin for docker-push
2017/09/13 09:47:04 Using internal plugin for docker-tag
2017/09/13 09:47:04 Using internal plugin for googlecompute-export
2017/09/13 09:47:04 Using internal plugin for alicloud-import
2017/09/13 09:47:04 Using internal plugin for compress
2017/09/13 09:47:04 Using internal plugin for shell-local
2017/09/13 09:47:04 Using internal plugin for vagrant
2017/09/13 09:47:04 Using internal plugin for artifice
2017/09/13 09:47:04 Using internal plugin for atlas
2017/09/13 09:47:04 Using internal plugin for checksum
2017/09/13 09:47:04 Using internal plugin for docker-save
2017/09/13 09:47:04 Using internal plugin for manifest
2017/09/13 09:47:04 Using internal plugin for vsphere
2017/09/13 09:47:04 Detected home directory from env var: /home/ubuntu
2017/09/13 09:47:04 Attempting to open config file: /home/ubuntu/.packerconfig
2017/09/13 09:47:04 [WARN] Config file doesn't exist: /home/ubuntu/.packerconfig
2017/09/13 09:47:04 Detected home directory from env var: /home/ubuntu
1.0.1
Affected versions: Packer 1.1.0
Host platform: Ubuntu 16.04.3 LTS
Builder: lxc
Provisioner: ansible-local with Ansible v2.3.2.0
I see similar behavior with the lxc builder.
lxc: Uploading Playbook directory to Ansible staging directory...
lxc: Creating directory: /tmp/packer-provisioner-ansible-local/59c5057d-3cdc-1df2-9db3-28234a4bfb7e
lxc: Uploading main Playbook file...
lxc: Uploading inventory file...
lxc: Uploading group_vars directory...
lxc: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59c5057d-3cdc-1df2-9db3-28234a4bfb7e && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59c5057d-3cdc-1df2-9db3-28234a4bfb7e/playbook.yml --extra-vars "packer_build_name=lxc packer_builder_type=lxc packer_http_addr=" --vault-password-file=/tmp/vault_pass -c local -i /tmp/packer-provisioner-ansible-local/59c5057d-3cdc-1df2-9db3-28234a4bfb7e/packer-provisioner-ansible-local317055402
lxc: [WARNING]: Could not match supplied host pattern, ignoring: all
lxc:
lxc: [WARNING]: provided hosts list is empty, only localhost is available
lxc:
lxc: PLAY [all] *********************************************************************
lxc: skipping: no hosts matched
lxc:
lxc: PLAY RECAP *********************************************************************
lxc:
I ran packer with the -debug parameter and entered the lxc container after the stage shown above. Then I copied the command from the Executing Ansible: line and ran it inside the container. The output was:
PLAY [all] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************************************************
ok: [127.0.0.1]
(...)
PLAY RECAP *****************************************************************************************************************************************************************************************************
127.0.0.1 : ok=28 changed=18 unreachable=0 failed=0
This is what my ansible-local configuration looks like:
{
"type": "ansible-local",
"role_paths": [...],
"playbook_dir": "playbook",
"inventory_groups": "internal",
"group_vars": "group_vars",
"extra_arguments": [
"--vault-password-file=/tmp/vault_pass"
],
"playbook_file": "playbook.yml"
}
I tried without extra_arguments, but it made no difference. It looks like Packer executes Ansible incorrectly, which is strange, because taking the exact same options and running them manually inside the container works.
I think I have found the cause of this.
When Packer runs, I see the following output:
lxc: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d/playbook.yml --extra-vars "packer_build_name=lxc packer_builder_type=lxc packer_http_addr=" --vault-password-file=/tmp/vault_pass -c local -i /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d/packer-provisioner-ansible-local245356334
Note the quotes after --extra-vars. But when I watched the processes inside the container, the command was translated to:
/bin/sh -c cd /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d/playbook.yml --extra-vars packer_build_name=lxc packer_builder_type=lxc packer_http_addr= --vault-password-file=/tmp/vault_pass -c local -i /tmp/packer-provisioner-ansible-local/59ca6080-dae1-34d8-a41c-4171ee6d022d/packer-provisioner-ansible-local245356334
The quotes are gone, and Ansible ignores everything after packer_http_addr=.
I wonder if the issue is with &packer.RemoteCmd, where the quotes are dropped.
https://github.com/hashicorp/packer/blob/master/provisioner/ansible-local/provisioner.go#L313
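A minimal shell sketch of the suspected mechanism (count_args is an illustrative stand-in for ansible-playbook; it just reports how many arguments follow the --extra-vars flag):

```shell
#!/bin/sh
# Stand-in for ansible-playbook: drop the --extra-vars flag itself and
# report how many arguments remain.
count_args() { shift; echo "$#"; }

# Quotes preserved: the three key=value pairs arrive as ONE argument.
count_args --extra-vars "packer_build_name=lxc packer_builder_type=lxc packer_http_addr="
# -> 1

# Quotes lost before the shell re-parses the command: THREE arguments,
# and only the first is consumed by --extra-vars.
count_args --extra-vars packer_build_name=lxc packer_builder_type=lxc packer_http_addr=
# -> 3
```

This would explain why copying the logged command (with its quotes intact) into an interactive shell works, while the command Packer actually executes does not.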
Can someone upload a complete project that reproduces this issue? I'm afraid I'm not very familiar with Ansible.
Hi @mwhooker ,
Sorry for not replying earlier; I thought the author of this bug might want to provide something. I have prepared a dummy Ansible role with two test cases. The first one exits with an error because it cannot find the vault password file that was passed as an extra argument. The second one finishes with no errors but does not run any Ansible tasks.
Let me know if that was helpful, or if you need anything else.
Thanks!
packer_5335.tar.gz
Can anyone verify that the fix in #5703 resolves this? Looks like it should, but I want to make sure before merging.
@mwhooker
Yes, this works for me. I can now see in the Ansible output that the quotes are escaped:
--extra-vars \"packer_build_name=lxd packer_builder_type=lxd packer_http_addr=\"
and inside the lxc/lxd container the quotes are also retained.
Thank you!
Sorry if I should have opened a new issue instead of adding a comment to this one, but it doesn't work:
amazon-ebs: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/5a7b62a5-85ac-4ca0-cf05-3c861b95f991 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/5a7b62a5-85ac-4ca0-cf05-3c861b95f991/kubernetes-workers.yml --extra-vars \"packer_build_name=amazon-ebs packer_builder_type=amazon-ebs packer_http_addr=\" --extra-vars kubernetes_version=1.9.2 -c local -i /tmp/packer-provisioner-ansible-local/5a7b62a5-85ac-4ca0-cf05-3c861b95f991/packer-provisioner-ansible-local929948168
amazon-ebs: ERROR! the playbook: packer_builder_type=amazon-ebs could not be found
I tried on this master branch commit 735d2511c6fe7f785dda160072a437bc3cc1e5d1.
If I revert the fix, it works:
amazon-ebs: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/5a7b6507-84ea-d17f-d2b2-fe3e92148b10 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/5a7b6507-84ea-d17f-d2b2-fe3e92148b10/kubernetes-workers.yml --extra-vars "packer_build_name=amazon-ebs packer_builder_type=amazon-ebs packer_http_addr=" --extra-vars kubernetes_version=1.9.2 -c local -i /tmp/packer-provisioner-ansible-local/5a7b6507-84ea-d17f-d2b2-fe3e92148b10/packer-provisioner-ansible-local932383036
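A sketch of why the backslash-escaping fix appears to overshoot (show_words is an illustrative stand-in that prints each argument it receives): if the command string already contains \" AND is then re-parsed by a shell, the backslash turns the quote into a literal character instead of a grouping operator, so the string is still word-split:

```shell
#!/bin/sh
# Stand-in for ansible-playbook's argument parsing: print each argument
# on its own line, wrapped in <> so word boundaries are visible.
show_words() { for w in "$@"; do printf '<%s>\n' "$w"; done; }

# One round of shell parsing over the pre-escaped string:
cmd='show_words --extra-vars \"a=1 b=2\" playbook.yml'
eval "$cmd"
# -> <--extra-vars>
# -> <"a=1>
# -> <b=2">
# -> <playbook.yml>
```

Only <"a=1> is consumed by --extra-vars; the next word, <b=2">, becomes a positional argument, which matches the ERROR! the playbook: packer_builder_type=amazon-ebs could not be found message above.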
Same problem here.
This broke for me in version 1.2.0. Downgrading to 1.1.3 works around the bug.
{
"builders": [
{
"type": "amazon-ebs",
"instance_type": "t2.micro",
"iam_instance_profile": "{{user `INSTANCE_PROFILE`}}",
"region": "{{user `REGION`}}",
"vpc_id": "{{user `VPC_ID`}}",
"subnet_id": "{{user `SUBNET_ID`}}",
"security_group_id": "{{user `SECURITY_GROUP_ID` }}",
"run_tags":{
"ImageType":"NGINX"
},
"run_volume_tags":{
"ImageType":"NGINX"
},
"source_ami_filter": {
"filters": {
"name": "{{user `AMI_SOURCE_NAME_PATTERN`}}"
},
"owners": "{{user `AMI_SOURCE_OWNER`}}",
"most_recent": true
},
"ssh_username": "{{user `USER`}}",
"ami_name": "{{user `AMI_NAME`}}-{{user `SOE_VER`}}-{{isotime \"20060102\"}}-{{timestamp}}",
"ami_description": "NGINX",
"ami_users": "{{user `AMI_USERS`}}",
"associate_public_ip_address": false,
"tags": {
"Name": "{{user `AMI_NAME`}}",
"ImageType": "NGINX"
},
"launch_block_device_mappings": [
{
"device_name": "/dev/xvdf",
"volume_size": 50,
"volume_type": "gp2",
"delete_on_termination": true
}
]
}
],
"provisioners": [
{
"type": "ansible-local",
"staging_directory": "/tmp/{{user `APPLICATION`}}",
"playbook_dir": "{{user `PACKER_HOME`}}/{{user `APPLICATION`}}",
"playbook_file": "{{user `PACKER_HOME`}}/{{user `APPLICATION`}}/playbook.yml",
"extra_arguments": ["-vvv", "--extra-vars \"@/tmp/{{user `APPLICATION`}}/vars.yml\""],
"role_paths": [
"{{user `ANSIBLE_HOME`}}/roles/ansible-role-one-partition",
"{{user `ANSIBLE_HOME`}}/roles/vff-ansible-role-nginx"
],
"clean_staging_directory": true,
"group_vars": "{{user `ANSIBLE_HOME`}}/roles/environment/inventory/group_vars/"
},
{
"type": "file",
"source": "{{user `PROJECT_HOME`}}/goss/{{user `APPLICATION`}}/goss.yaml",
"destination": "/home/ec2-user/"
},
{
"type": "shell",
"script": "{{user `PROJECT_HOME`}}/scripts/goss.sh",
"remote_folder": "/home/ec2-user",
"remote_file": "goss.sh"
}
],
"variables": {
"AMI_NAME": "{{user `AMI_NAME`}}",
"AMI_SOURCE_NAME_PATTERN": "{{user `AMI_SOURCE_NAME_PATTERN`}}",
"AMI_SOURCE_OWNER": "{{user `AMI_SOURCE_OWNER`}}",
"APPLICATION":"{{user `APPLICATION`}}",
"AMI_USERS": "{{user `AMI_USERS`}}",
"ANSIBLE_HOME": "{{user `ANSIBLE_HOME`}}",
"REGION": "{{user `REGION`}}",
"SOE_VER": "{{user `SOE_VER`}}",
"SUBNET_ID": "{{user `SUBNET_ID`}}",
"USER": "{{user `USER`}}",
"PROJECT_HOME": "{{user `PROJECT_HOME`}}"
}
}
output will be in this color.
==> amazon-ebs: Prevalidating AMI Name: VFF-NGINX-RHEL7.4-soe-0.1-20180215-1518676849
amazon-ebs: Found Image ID: ami-89ac57eb
==> amazon-ebs: Creating temporary keypair: packer_5a852b71-4374-f2b7-0a51-f7fb1cb6cb21
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Adding tag: "ImageType": "NGINX"
amazon-ebs: Adding tag: "ImageType": "NGINX"
amazon-ebs: Instance ID: i-070bea2ef42986f6d
==> amazon-ebs: Waiting for instance (i-070bea2ef42986f6d) to become ready...
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with Ansible...
amazon-ebs: Uploading Playbook directory to Ansible staging directory...
amazon-ebs: Creating directory: /tmp/nginx
amazon-ebs: Uploading main Playbook file...
amazon-ebs: Uploading inventory file...
amazon-ebs: Uploading group_vars directory...
amazon-ebs: Creating directory: /tmp/nginx/group_vars
amazon-ebs: Uploading role directories...
amazon-ebs: Creating directory: /tmp/nginx/roles/ansible-role-one-partition
amazon-ebs: Creating directory: /tmp/nginx/roles/vff-ansible-role-nginx
amazon-ebs: Executing Ansible: cd /tmp/nginx && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/nginx/playbook.yml --extra-vars \"packer_build_name=amazon-ebs packer_builder_type=amazon-ebs packer_http_addr=\" -vvv --extra-vars "@/tmp/nginx/vars.yml" -c local -i /tmp/nginx/packer-provisioner-ansible-local595006632
amazon-ebs: ERROR! the playbook: packer_builder_type=amazon-ebs could not be found
amazon-ebs: ansible-playbook 2.4.2.0
amazon-ebs: config file = None
amazon-ebs: configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
amazon-ebs: ansible python module location = /usr/lib/python2.7/site-packages/ansible
amazon-ebs: executable location = /usr/bin/ansible-playbook
amazon-ebs: python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
amazon-ebs: No config file found; using defaults
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: Error executing Ansible: Non-zero exit status: 1
Yep, I'm using version 1.2.0 for macOS and getting the same issue. 1.1.3 works fine.
Version 1.2.0 also broke in the same way on GCE builds; downgrading resolved it.
👍 if anyone still sees this issue after the 1.2.1 release, please let us know.
I've just tried with Packer 1.2.1 and Ansible 2.3.3.0, and the problem is still present for me. The output is:
==> base: Provisioning with Ansible...
base: Uploading Playbook directory to Ansible staging directory...
base: Creating directory: /tmp/packer-provisioner-ansible-local
base: Uploading main Playbook file...
base: Uploading inventory file...
base: Executing Ansible: cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base.yml --extra-vars "packer_build_name=base packer_builder_type=amazon-chroot packer_http_addr=" --extra-vars ec2_region=eu-west-1 --tags=install,package -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local377869661
base: [WARNING]: Host file not found: /etc/ansible/hosts
base:
base: [WARNING]: provided hosts list is empty, only localhost is available
base:
base:
base: PLAY [all] *********************************************************************
base: skipping: no hosts matched
@SteveTalbot, it might be a stupid idea, but did you try without the tags?
@Erouan50: Every task in our playbook has tags associated, which means the playbook won't do anything without the tags. Not a stupid idea though. I wish I had thought to try that while I had the chance yesterday.
Maybe @lbytnar is onto something here and there is something strange going on with the quotes. Running the command as displayed works:
cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base.yml --extra-vars "packer_build_name=base packer_builder_type=amazon-chroot packer_http_addr=" --extra-vars ec2_region=eu-west-1 --tags=install,package -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local377869661
But if you pass it through a shell, you need to escape the quotes:
bash -c "cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles ansible-playbook /tmp/packer-provisioner-ansible-local/base.yml --extra-vars \"packer_build_name=base packer_builder_type=amazon-chroot packer_http_addr=\" --extra-vars ec2_region=eu-west-1 --tags=install,package -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local377869661"
Sorry I don't often have the opportunity to try things out to help debug this further.
Sounds like this is an escaping issue solved within the packer config, so closing again.
@SwampDragons: Do you know which version the escaping issue will be resolved in please?
Maybe I misunderstood... I thought your last comment was saying you believed this to be an escaping issue, solvable from the
"extra_arguments": ["-vvv", "--extra-vars \"@/tmp/{{user `APPLICATION`}}/vars.yml\""],
line in your config.
@SwampDragons: The two examples I posted were to prove that the ansible-playbook command works when executed from the command line.
When executed from within Packer, the same command does not work, and the output we see from Ansible is as though the quotes have not been escaped correctly.
I can't reproduce; I was able to pass in --extra-vars without issue. I'm going to close this ticket since it seems that --extra-vars, the original source of pain, is working now. If you're still having issues with the ansible-local provisioner, I think we need to open a new issue with a new minimal repro case so we can track it separately from the issues that seem to have been fixed here.
"provisioners": [
{
"type": "ansible-local",
"playbook_file": "../ansible/playbook.yml",
"inventory_groups": "internal",
"extra_arguments": ["-vvv", "--extra-vars \"amazon_locale=us-east-1\""]
}
]
@SwampDragons: I notice the way --extra-vars is passed in your working example is different from the documentation, which suggests using two separate arguments for "--extra-vars" and the variable "amazon_locale=us-east-1". Which is preferred? Should it matter?
I'll try that style of argument-passing next time we run a build, but v1.2.1 is still failing for me when "--extra-vars" is used as per the documentation, and it works in v1.0.0.
"extra_arguments": ["-vvv", "--extra-vars", "amazon_locale=us-east-1"]
works for me, too.
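A shell-level sketch of why the separate-argument style is robust (show_words is an illustrative stand-in): a value with no embedded spaces needs no grouping quotes, so there is nothing for a shell round-trip to strip or double-escape:

```shell
#!/bin/sh
# Stand-in that prints each argument on its own line, wrapped in <>.
show_words() { for w in "$@"; do printf '<%s>\n' "$w"; done; }

# "extra_arguments": ["--extra-vars", "amazon_locale=us-east-1"]
# joins into a command line that needs no quotes at all, so it survives
# re-parsing by a shell unchanged:
cmd='show_words --extra-vars amazon_locale=us-east-1'
eval "$cmd"
# -> <--extra-vars>
# -> <amazon_locale=us-east-1>
```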
I finally narrowed down the remaining issue to Ansible's behaviour with regard to "implicit localhost". I've raised a new ticket #6347 to request a small change to the Packer documentation to help prevent others falling into the same trap as I did.
I never did work out why the problem only appeared in the circumstances described in this ticket.
For anyone who has the same problem as me, a workaround to use the ansible-local provisioner with the amazon-chroot builder is described in #6347.
Thanks for the update! Glad you got it figured out.
@SwampDragons: >=v1.2.1 does not work for me with amazon-chroot, ansible-local, and extra_arguments passing extra-vars. None of the following incantations actually puts the extra var in scope for Ansible; a simple debug task trying to print the variable results in VARIABLE IS NOT DEFINED!
The following all work on <= 1.2.0 and fail on >= 1.2.1. I've tried all the way up to 1.2.4 and all fail.
"extra_arguments": [ "--extra-vars", "\"application={{user `application`}}\""]
"extra_arguments": [ "--extra-vars", "application={{user `application`}}"]
"extra_arguments": [ "--extra-vars application={{user `application`}}"]
"extra_arguments": [ "--extra-vars \"application={{user `application`}}\""]
In all cases, the Executing Ansible: command printed to stdout looks visually correct, yet the variable is not actually defined when referenced. Running the command printed to the screen manually works as expected.
I've tried with Ansible 2.2.1.0 and 2.6.1 in the chroot, no difference.
Sample output @ 1.2.1:
amazon-chroot: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/5b49f87a-c08b-65c4-ca65-4cbcfcb6a385 && /usr/local/bin/ansible-playbook -i foo, /tmp/packer-provisioner-ansible-local/5b49f87a-c08b-65c4-ca65-4cbcfcb6a385/post-pack.yml --extra-vars "packer_build_name=amazon-chroot packer_builder_type=amazon-chroot packer_http_addr=" --extra-vars "application=base" -c local -i /tmp/packer-provisioner-ansible-local/5b49f87a-c08b-65c4-ca65-4cbcfcb6a385/packer-provisioner-ansible-local799069697
amazon-chroot:
amazon-chroot: PLAY [localhost] ***************************************************************
amazon-chroot:
amazon-chroot: TASK [debug] *******************************************************************
amazon-chroot: ok: [localhost] => {
amazon-chroot: "application": "VARIABLE IS NOT DEFINED!"
amazon-chroot: }
When I run with 1.2.0 it works as expected.
amazon-chroot: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/5b49fd84-5039-f7e4-2115-75bcd16914b4 && /usr/local/bin/ansible-playbook -i foo, /tmp/packer-provisioner-ansible-local/5b49fd84-5039-f7e4-2115-75bcd16914b4/post-pack.yml --extra-vars \"packer_build_name=amazon-chroot packer_builder_type=amazon-chroot packer_http_addr=\" --extra-vars application=base --extra-vars envname=staging -c local -i /tmp/packer-provisioner-ansible-local/5b49fd84-5039-f7e4-2115-75bcd16914b4/packer-provisioner-ansible-local221737869
amazon-chroot:
amazon-chroot: PLAY [localhost] ***************************************************************
amazon-chroot:
amazon-chroot: TASK [debug] *******************************************************************
amazon-chroot: ok: [127.0.0.1] => {
amazon-chroot: "application": "base"
amazon-chroot: }
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.