Packer: Singleton provisioners

Created on 5 Apr 2018  ·  10 Comments  ·  Source: hashicorp/packer

I'm dealing with a situation where I want Ansible to provision multiple builds at once.
I'd like to use Ansible's capabilities to target each host instead of running two different instances of ansible-playbook.
I propose we add a new boolean configuration option for all provisioners called "singleton".
Setting it to true would run the provisioner once instead of once per builder.
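A minimal sketch of what a template using the proposed option might look like. The `singleton` key is the hypothetical new option and does not exist in Packer; the builder definitions are abbreviated (real builders require more fields):

```json
{
  "builders": [
    { "type": "virtualbox-iso", "name": "web" },
    { "type": "virtualbox-iso", "name": "db" }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./site.yml",
      "singleton": true
    }
  ]
}
```

The intent is that the single `ansible-playbook` run would be handed both hosts, instead of Packer invoking it once for `web` and once for `db`.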

There are other legitimate use cases such as using the shell-local provisioner to download a file once or decompress a file that would be distributed to multiple hosts.
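For illustration, the shell-local use case above might look like this in a template today, where it runs once per builder; the URL and path are made up:

```json
{
  "provisioners": [
    {
      "type": "shell-local",
      "inline": [
        "curl -fsSL https://example.com/data.tar.gz -o /tmp/data.tar.gz"
      ]
    }
  ]
}
```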

core enhancement

All 10 comments

I'm not sure where to begin, if I were to make this change. Any help is appreciated.

I suspect you can hack this to work for you with the extant code by using the "only" option: https://www.packer.io/docs/templates/provisioners.html#run-on-specific-builds and just assigning it to run with a single one of your builds.
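Concretely, `only` restricts a provisioner to the named builders, so attaching it to a single builder name keeps the provisioner from repeating. The builder name here is an example:

```json
{
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./site.yml",
      "only": ["virtualbox-iso"]
    }
  ]
}
```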

While that's entirely possible, a singleton option would be clearer.

Sure. I was just sharing with you that the tooling existed in case you didn't want to sink the effort into building a whole new feature 😃. If you do want to sink in that effort, take a look at how "only" is implemented and that should give you a path forward.

Specifying only does not resolve this issue since the other builder finishes instead of staying up.

Good point. I hadn't thought of that when giving you the advice... and I'm afraid that this means this feature isn't viable.

The parallel builds aren't supposed to talk to each other, and changing the core code to achieve this seems like a large, dangerous change for a small gain. I'm inclined to say this isn't something we'd merge even if you did implement it, though I'd like @mwhooker's opinion too. My feeling is that it's better to run your Ansible playbooks twice and make sure they and any shell-local scripts are idempotent for the machine running Packer.
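As an illustration of the idempotency suggested above, a shell-local style script can guard its work so that a second run is a no-op. This is a sketch; the file names and marker convention are made up, and the download/extract commands are commented out as placeholders:

```shell
#!/bin/sh
# Idempotent guard: only do the expensive work if it hasn't been done yet.
ARTIFACT="/tmp/app-bundle.tar.gz"          # hypothetical downloaded file
MARKER="/tmp/.app-bundle.extracted"        # hypothetical completion marker

if [ -f "$MARKER" ]; then
  echo "already extracted, skipping"
else
  # curl -fsSL https://example.com/app-bundle.tar.gz -o "$ARTIFACT"
  # tar -xzf "$ARTIFACT" -C /tmp
  touch "$MARKER"
  echo "extracted"
fi
```

With a guard like this, running the same shell-local provisioner once per parallel build is harmless: the first build does the work and the rest skip it.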

:-/ I know this isn't the answer you want to hear, but I'm looking at this and thinking to myself that it's going to be a lot more trouble than benefit.

Thanks for the request. I can see where you're coming from, but I agree with @SwampDragons. We'd rather focus our efforts on other parts of the project, rather than on features which I think will have a limited user base. The right solution is definitely to ensure that your provisioner keeps track of how many times it's been run.

I'm not sure why this feature would be harmful to Packer.
I've seen this feature requested as a question before on Stack Overflow, so I'm not sure why you'd say there is little to be gained.

Most if not all configuration management tools target multiple hosts and this is in fact how they are used in production.

If we want to encourage users to reuse the same configuration management scripts in both development and production (and I believe we do) my proposal is the way to do so.

I don't want to dismiss the fact that this feature would be useful to some of our users, so I'm sorry if it seems like I'm being that way.

Just to clarify, I don't think it would be harmful to Packer by definition, just that it would change some really fundamental assumptions and I wouldn't be confident that it wouldn't cause serious unforeseen bugs. I also feel like most production provisioning scripts should be idempotent anyway, so if you're using the same ones there as you use against Packer, then hopefully you don't have to worry about reusing the same scripts multiple times.

Is there a provisioning problem that you are actually unable to solve because this feature doesn't exist? I know Ansible can take a little while in its "gathering facts" phase, so is this request coming from a place of build time frustration? I can see where this would be convenient ("I don't want to have to check whether local files already exist" for example) but it feels like, if time isn't a huge issue, then running the provisioner only once and having to handle the hosts list and connections yourself adds complexity to your build rather than removing it. I know Ansible and other config management tools are designed to target multiple hosts, but Ansible is also designed to be able to be run against just the hosts you want to run it against, giving you really granular control.

All of this said, if you want the feature, you are still welcome to fork Packer and implement it yourself. And if it turns out to be easy and doesn't break a lot of assumptions and I'm totally wrong, I'll gladly eat my words and merge.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
