How do I use Salt to securely copy a sensitive file (a cryptographic key) from one specific minion to another specific minion? I don't want any other minion to be able to read the file.
The Salt Mine seems to be a logical place to start, but the documentation says:
> The Salt Mine is used to collect arbitrary data from Minions and store it on the Master. This data is then made available to all Minions via the `salt.modules.mine` module.
I don't want the data to be made available to all minions, just one. In addition, I don't need the periodic refresh; I only need the file to be read whenever I run `state.highstate` for the destination minion. And I don't know whether the Salt Mine uses a secure transport.
**`cp.push`?**

Salt's `cp.push` function seems like a good way to get the file to the master, except:

- `cp.push` uses the `salt.transport.Channel.send()` method, which is not guaranteed to be confidential
- `cp.push` stores the file with global read permissions in the master's file system

I could write a custom external pillar that somehow reads the file from the source minion and then makes the file's contents available via a pillar to a second minion. How would I get the master to securely fetch the file's contents from the source minion?
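For what it's worth, the external-pillar half of that idea is straightforward; the hard part remains getting the file to the master securely. A minimal sketch of such an `ext_pillar` (all names, paths, and minion IDs here are hypothetical, and the file is simply read from the master's disk, sidestepping the open transport question):

```python
# Hypothetical ext_pillar sketch: expose a file's contents as pillar data
# to ONE specific minion only. This does NOT solve the secure minion->master
# transfer problem discussed in this thread; it assumes the file has already
# landed on the master somehow.
import logging

log = logging.getLogger(__name__)


def ext_pillar(minion_id, pillar,
               key_path="/srv/secrets/secret.key",
               allowed_minion="dest-minion"):
    """Return {'app': {'secret_key': ...}} for the allowed minion, else {}."""
    if minion_id != allowed_minion:
        # Every other minion sees nothing.
        return {}
    try:
        with open(key_path) as handle:
            return {"app": {"secret_key": handle.read()}}
    except OSError as exc:
        log.error("could not read %s: %s", key_path, exc)
        return {}
```

Since pillar data is encrypted per-minion on delivery, this at least keeps the master-to-destination leg confidential.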
@rhansen, good question and thanks for asking. The canonical way to store secret data is in pillar. You could salt a key file with pillar in this way:

```yaml
key:
  file.managed:
    - name: /etc/app/secret.key
    - contents_pillar: app:secret_key
```

Here the `:` character is used to scope into the pillar namespace you have set up for your pillar data. See also https://docs.saltstack.com/en/latest/faq.html#is-it-possible-to-deploy-a-file-to-a-specific-minion-without-other-minions-having-access-to-it.
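For completeness, the pillar data matching that `app:secret_key` lookup might be set up like this (file names, minion ID, and key material are purely illustrative):

```yaml
# Hypothetical pillar layout; restrict the secret to one minion in top.sls.
#
# /srv/pillar/top.sls:
#   base:
#     'dest-minion':
#       - app
#
# /srv/pillar/app.sls:
app:
  secret_key: |
    -----BEGIN PRIVATE KEY-----
    (key material)
    -----END PRIVATE KEY-----
```

Pillar data is rendered per-minion and encrypted with that minion's key on delivery, so only `dest-minion` can read it.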
@jfindlay: Thank you for your answer. I do know how to use pillars to securely create files.
The problem is that pillars only work if the data is known to the master. In my scenario, the sensitive file is programmatically generated (and periodically regenerated) on one of the minions, not the master. Thus, in order to share the data via a pillar, the master must first fetch the data from the minion. Securely fetching the data from the minion (via Salt) is the part that eludes me.
@rhansen, you could use `salt-ssh -r` to copy the file to the master, or create a limited SSH user account on the master to securely receive files.
@rhansen, you could also create a secure git repo on that minion that serves as a git pillar for the master. That way you can let git handle the transport and the security issues.
@rhansen, you could use
salt-ssh -r
to copy the file to the master, or create a limited ssh user account on the master to securely receive files.@rhansen, you could also create a secure git repo on that minion that serves as a git pillar for the master. That way you can let git handle the transport and the security issues.
All of these options require extra manual setup steps on both the master and the minion. Given that the master and minions already have each other's Salt keys, and there is a `salt.transport.Channel.crypted_transfer_decode_dictentry()` method designed to securely move data, I was hoping there was already some high-level mechanism to securely fetch data from a minion. It sounds like the answer is no.
I would like to add this functionality. Is there any documentation describing the API used to communicate with a minion?
@rhansen, your assessment seems valid to me. I am not very familiar with that part of the code base. @cachedout may have more/better advice.
Hi @rhansen
Your assessment thus far is correct. With pillars, the master replies with the pillar after having encrypted it using the minion's key, thus guaranteeing confidentiality. However, other operations on the REQ channel use the shared AES secret as you've already noted.
One thing we could do here would be to have `cp.push` encrypt the data using the minion's key prior to sending it, and then instruct the master to decrypt it on receipt. At a minimum, we'd have to modify the `push` function to do the encryption, and then we'd have to modify the master's `_file_recv` method in `AESFuncs` to do the decryption and write the file out securely.
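To make the shape of that change concrete, here is a rough, generic sketch of the hybrid pattern such a hardened push/receive pair could use: the sender encrypts the payload with a fresh symmetric key and wraps that key with the recipient's public RSA key. This is illustrative only (it is not Salt's actual code, and it uses the third-party `cryptography` package rather than Salt's crypto layer):

```python
# Illustrative hybrid-encryption sketch, NOT Salt's implementation.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In the real scenario the keypair already exists and the sender holds only
# the recipient's public key; we generate one here for the demo.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_key.public_key()

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)


def encrypt_payload(payload: bytes, pub):
    """Sender side: symmetric-encrypt the payload, RSA-wrap the session key."""
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(payload)
    wrapped_key = pub.encrypt(sym_key, OAEP)
    return wrapped_key, ciphertext


def decrypt_payload(wrapped_key: bytes, ciphertext: bytes, priv) -> bytes:
    """Receiver side: unwrap the session key, then decrypt the payload."""
    sym_key = priv.decrypt(wrapped_key, OAEP)
    return Fernet(sym_key).decrypt(ciphertext)


payload = b"fake key material for the demo"
wrapped, ct = encrypt_payload(payload, recipient_pub)
assert decrypt_payload(wrapped, ct, recipient_key) == payload
```

The wire format, key lookup, and secure write-out on the master are exactly the parts that would need real design work inside `push` and `_file_recv`.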
Does that give you a reasonable start? Let me know what you think.
I'm not at all familiar with the transport code, but I was under the impression that `crypted_transfer_decode_dictentry()` was a wrapper around `send()` that magically took care of the extra crypto work (perhaps with some additional setup required). Is that not the case?
I was hoping that I could simply replace `cp.push`'s use of `send()` with `crypted_transfer_decode_dictentry()` and be mostly done...
Is there any progress here? I am currently trying to do the same thing, and like @rhansen I haven't found a dynamic[1] and secure way to achieve this.

[1] I have an HA setup of three nodes; one node holds a CA (generated using Salt's x509 module) and a generated server certificate. I need to copy the server cert and key to the two other nodes.
@crapworks Did you figure out a good way to do this? We are in exactly the same situation as you (certificates being generated on one box behind a load balancer), and I don't necessarily want to send this data around unprotected.
I'm guessing Let's Encrypt is making this a common use case, e.g. a web server generates a certificate, which then also needs to be distributed to separate servers for MTA, XMPP, etc. that don't have public ports available to generate certificates for themselves. That's the use case I keep seeing appear.
Let's Encrypt would be my usecase, yes. I've for now solved it with a sshfs link between the minion and the master, but I'm decidedly non-enthusiastic about that.
@cachedout Is this now fixed (I didn't notice a PR)? Or is the sshfs link from minions to the master going to be the officially recommended solution?
At present we don't have an officially-sanctioned solution for this, but since this was originally opened as a question that has been answered quite likely as well as it will be for the time being, I closed it. We can still continue the discussion, of course, for those who come across this issue, but because there's no immediate action that can be taken, it takes a little overhead to track. We're just trying to be a little more judicious about which issues we keep open so that we can find a little more focus. That said, if you do feel like we should re-open this, I am happy to do so. :]
Thanks. I don't mind how it is managed from a project perspective (if it's preferable for it to be marked as closed or not), but I think it would be good to somehow show that the use case is acknowledged as something valid which Salt should aim to support in future, and (hopefully) have progress on that tracked.
Maybe this can be kept closed since it's a rather general question, but a new issue with specifics of a proposed implementation could be opened instead with a reference here, once those specifics have been established? That's my suggestion, but whatever works. In any case, thanks for clarifying.
@boltronics Yes, I think that's the right way to go here. I absolutely agree that the use case is valid.
If an issue request gets opened and it gets some development traction, that's the ideal scenario. At the same time, if that same feature request gets opened and doesn't see any activity for a long time (many months or years), we'll likely close it as well.
The point there being that if we're tracking an issue that gets picked up and has a lot of support and/or active development or is an issue that's being tracked for an upcoming release, that's perfect but we're also saying that having issues stay open for a very long time with no activity doesn't really serve anybody. People can still find and comment on (or +1) closed issues and they can be re-opened as needed.
I'm really sympathetic to the idea that this can be a little harsh but overall, my thinking here is that it just doesn't do anybody any good to have an issue that stays open for ages without any real plan to address it.
Happy to hear feedback on this of course and thanks!
Though certainly not an encouraged practice, I've (1) transferred data from minion to master using only the event bus.
There are some other possibilities I haven't explored, like (2) having the minion write directly to an external pillar, or (3) writing the data locally on the minion to a location/system that will be cached on the master.
Just food for thought :)
One more recommendation, we've transferred data like SSL keys between hosts using the new vault modules. One host can write to a vault endpoint, the other can read. With this setup, communication is fully encrypted and we can control with vault policies (also managed via salt) what hosts have access to what data.
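A rough sketch of that pattern, with hypothetical secret paths and minion IDs (the `vault.write_secret`/`vault.read_secret` calls are from Salt's Vault execution module; check your Salt version's documentation for the exact signatures):

```yaml
# Hypothetical example, not from this thread.
# On the host that generates the certificate, push it into Vault, e.g.:
#   salt 'ca-node' vault.write_secret secret/certs/web cert="$(cat /etc/ssl/web.crt)"
# On the consuming minions, a state pulls it back out; Vault policies
# (also manageable via Salt) control which hosts may read the path:
/etc/ssl/web.crt:
  file.managed:
    - mode: '0640'
    - contents: {{ salt['vault.read_secret']('secret/certs/web', 'cert') | yaml_encode }}
```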
> Though certainly not an encouraged practice, I've (1) transferred data from minion to master using only the event bus.

Unfortunately, the event bus also uses the `.send()` method, and is no more secure than `cp.push`.

> new vault modules

I'm assuming you're talking about HashiCorp Vault, which is, again, additional software. While Vault is a good solution to some of these problems, it doesn't really address the fundamental problem with `.send()`.
> I'm guessing Let's Encrypt is making this a common use case.
Yep, that would be my case too. I wanted to write a salt runner that I could execute manually if wanted, that would sync /etc/letsencrypt to the replica machine(s)...