At a high level, there are two 'simple' use cases:
However, the current model presents complications.
ansible_user is encoded in the credential and passed via -u to ansible-playbook. This causes conflicts when multiple credentials are needed in a single playbook run. Here be dragons.
Maybe we're doing this wrong, but we find ourselves writing playbooks that talk to multiple platforms requiring different credentials to log in. For example, I have a backup playbook that logs into the BIG-IPs in production and creates a backup. Then either that playbook needs to scp from the BIG-IP to the backup server, or, in a separate play or role, the backup server needs to log into the BIG-IP to store the backup. Either way, I need both the BIG-IP and backup server credentials in the same playbook. Unfortunately, the two platforms require different users and passwords.
To work around Tower's current behavior, we typically write the playbook using the BIG-IP inventory and BIG-IP Tower credentials, and then, within the playbook at run time, we perform a separate lookup to get the backup server account credentials and store them in a variable. This is brittle and requires side-band resources to execute the playbook successfully. Further, it prevents us from using Ansible modules like 'copy'; we instead have to run 'shell' and scp. Our previous approach was to store the credentials in a vault file, but that complicated SCM of our playbook source code any time the password or the vault password changed, because this vault file had to be in many playbooks.
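A rough sketch of that lookup-and-scp workaround (the secret file paths, the backup01 host name, and the sshpass invocation here are illustrative assumptions, not our exact setup):

# Side-band lookup of the backup server credentials; paths are hypothetical.
- name: Fetch backup server credentials from a side-band source
  no_log: true
  set_fact:
    backup_user: "{{ lookup('file', '/etc/awx/secrets/backup_user') }}"
    backup_pass: "{{ lookup('file', '/etc/awx/secrets/backup_pass') }}"

# 'copy' can't switch login credentials mid-play, so we shell out to scp.
- name: Push the BIG-IP archive to the backup server
  delegate_to: localhost
  shell: >
    sshpass -p '{{ backup_pass }}'
    scp /var/tmp/bigip-backup.ucs {{ backup_user }}@backup01:/backups/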
This is a common pattern in our environment, and we see no way around it due to limitations with network appliances like BIG-IP, Cisco, Arbor, etc. Many devices in our network don't authenticate like a Linux server, and having the same system account on every device doesn't work well for us at this time.
Regarding this RFE, what about a method similar to a 'survey' or vars_prompt? What if Tower's credential subsystem simply allowed arbitrary key/value pairs, where each pair is stored within Tower's credential subsystem with the value encrypted like everything else, and the pair is exposed at run time if the credential is associated with the job template?
Tower's Credential screen would have something like the following in addition to the default screen values:
Name: My Compound Credentials
Description: Collection of credentials
Organization: Default
Type: Machine
Type Details
  Additional Parameters (Key: Value):
    machine1_pass: "somethingsecret"
    machine2_key: "my private key string"
    mysshkeypass: "somekeypasswd"
While it won't solve all of the challenges, it would allow us to keep credentials in one place. Eventually, core modules could be extended to include user/pass fields. This would allow modules like 'copy' to use a different credential set as needed.
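As a purely hypothetical illustration of how such injected key/value variables could be consumed (nothing here exists in Tower today; machine1_pass comes from the mock-up above, and backup01/backup are made-up names):

# Hypothetical: assumes Tower injected the credential's key/value pairs
# as playbook variables when the credential is attached to the template.
- name: Register the backup server with its own credential
  add_host:
    name: backup01
    ansible_user: backup
    ansible_password: "{{ machine1_pass }}"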
... how is this different from the existing custom credential type support? See https://www.ansible.com/blog/ansible-tower-feature-spotlight-custom-credentials and http://docs.ansible.com/ansible-tower/latest/html/userguide/credential_types.html
We're in the process of setting up Ansible for our entire infrastructure in AWS.
We have a number of legacy Ubuntu servers where the default username is ubuntu.
We also have a good chunk of newer Debian based servers, where the default username is admin.
There's a small number of other legacy servers with various other user setups.
Finally the servers are launched with a plethora of differing base private keys.
Currently, with the "only one SSH key per template" restriction, it's troublesome to get AWX/Ansible access to all of the servers. I'm currently settling on the option of making a simple "re-key" playbook that will create an ansible user, add a shared private key, and enable sudo privileges. Then I'm going to duplicate the template for every credential combination in our infrastructure and make sure that each template only runs on the correct subset of instances.
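A minimal sketch of such a re-key playbook (the user name, key path, and sudoers layout are assumptions):

- hosts: all
  become: true
  tasks:
    - name: Create the shared ansible user
      user:
        name: ansible
        shell: /bin/bash

    - name: Install the shared public key
      authorized_key:
        user: ansible
        key: "{{ lookup('file', 'files/ansible_id_rsa.pub') }}"

    - name: Grant passwordless sudo
      copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        mode: '0440'
        validate: 'visudo -cf %s'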
Overall, this ansible account is useful for streamlining the configuration of the servers, and also for auditing purposes, since I can see that a given command was run by ansible.
On the other hand, the current option is cumbersome as long as we're running more than a single Linux distribution, as each distribution has differing conventions for default usernames.
I would like to have the ability to assign machine credentials by inventory group as well.
Another use case to support is using WinRM and SSH connections in the same playbook/template. Currently, both SSH and WinRM credentials are represented by the same "Machine" type, which makes it impossible to use both in the same run.
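For reference, core Ansible can already mix connection plugins per group via inventory variables; the blocker is purely that only one "Machine" credential can be attached. A sketch (group names are illustrative):

# group_vars/windows.yml
ansible_connection: winrm
ansible_port: 5986

# group_vars/linux.yml
ansible_connection: ssh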
Is this feature planned to be released soon? A number of enterprises with rigid governance policies do not allow same credentials across multiple hosts.
It is not currently being worked on; when it is, more information will be posted here.
This is a really useful feature in cases where clients are in a multi-domain environment, so multiple credentials can be used in job templates.
Very interested in this as well... we are a large enterprise customer planning to host 100K+ endpoints through Ansible Tower.
This feature is crucial for our project.
I'd sell my soul to see this feature implemented!!!!
@wenottingham can you please check if what I ask in ansible/ansible-runner#51 makes sense? It should be easy to setup, but need some kind of confirmation before proceeding.
Could you create this feature by giving the option to include a credential with a host in the inventory itself? I think that would be much simpler and would scale well.
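For what it's worth, plain Ansible already supports per-host credentials in an inventory file; what's missing is having AWX manage the secret rather than storing it in clear text. An illustrative INI inventory (host and key names are made up):

[switches]
switch01 ansible_user=admin ansible_ssh_private_key_file=~/.ssh/switch_key

[backup]
backup01 ansible_user=backup ansible_ssh_private_key_file=~/.ssh/backup_key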
We would also sell our souls for this feature. It makes using AWX at large scale difficult: even when you can use the same credential for more than one host, you more than likely can't use that same credential to target 3000+ servers at a single time, which makes AWX unusable.
One partial workaround for this is to create custom credentials and then shim the path into ansible_ssh_private_key_file. It's extremely unpleasant and it does not scale, but this is what it looks like:
In AWX, you create a custom credential that will accept your key value and set an env with the path:
Input configuration:
fields:
  - id: switch_ssh_private_key
    type: string
    label: Switch SSH Private Key
    format: ssh_private_key
    secret: true
    multiline: true
required:
  - switch_ssh_private_key
Injector configuration:
env:
  AWX_SWITCH_SSH_KEY_FILE: '{{ tower.filename.switch_ssh_key_file }}'
file:
  template.switch_ssh_key_file: '{{ switch_ssh_private_key }}'
Then you create a credential of that type and input your private key. After that, you can attach it to your job template. You can do this for as many keys as you want/need, but you'll need to change the variable names, e.g., changing the word 'switch' above to something unique to each key you create.
After that, you can do gymnastics to use them in tasks/plays:
- name: Try to grab a custom ssh key file for the switch
  run_once: true
  set_fact:
    pm_custom_switch_ssh_key_file: "{{ lookup('env', 'AWX_SWITCH_SSH_KEY_FILE') }}"

- name: Chmod the tmp key
  run_once: true
  delegate_to: localhost
  file:
    path: "{{ pm_custom_switch_ssh_key_file }}"
    mode: '0600'
and
- name: Add switch to inventory
  add_host:
    name: "{{ switch_name }}"
    ansible_ssh_host: "{{ switch_name }}"
    ansible_ssh_private_key_file: "{{ pm_custom_switch_ssh_key_file }}"
or
- name: Add switch to inventory
  set_fact:
    ansible_ssh_private_key_file: "{{ pm_custom_switch_ssh_key_file }}"
With this, it becomes possible to build a dictionary that maps each host to the right env value, based on some variable in group_vars, etc.
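For example (the switch_key_env_map name, the switch_platform variable, and the env var names are all hypothetical), group_vars could hold a map from platform to env var, and a task could resolve it per host:

# group_vars/all.yml
switch_key_env_map:
  cisco: AWX_CISCO_SSH_KEY_FILE
  arbor: AWX_ARBOR_SSH_KEY_FILE

# In a play:
- name: Resolve the right key file for this host
  set_fact:
    ansible_ssh_private_key_file: "{{ lookup('env', switch_key_env_map[switch_platform]) }}"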
I agree with @mkkeffeler, this feature would be extremely useful for Windows environments as well - especially since Kerberos/AD auth is far less flexible than SSH.
I would've sold my soul in July, and I'd still sell it now.