I have a new minion running 2016.11.4, and any state that tries to pull an image from our repo fails with a not authorized error.
It is fixed as soon as I run a `dockerng.login` from the master. So I believe the dockerng state code simply never calls login. Is this intended?
Just run any state that tries to pull an image where `dockerng.login` has not been called before:

```yaml
dockerng.image_present:
  - name: xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/autodns:latest
  - force: True
```
Return from a `dockerng.pull` (same error as when I call a state with a `dockerng.image_present`):

```
ERROR: Pull failed for 364535168108.dkr.ecr.us-east-1.amazonaws.com/autodns:latest. Error(s) follow:
unauthorized: authentication required
```
This is the expected behavior now.
For anything that needs a login, you need to log in to it once. This change was made so that credentials wouldn't be required for hub.docker.com, and it uses the same credential path that `docker login` on the command line uses.
https://github.com/saltstack/salt/pull/40480
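For reference, the credential file that `docker login` writes (and that Salt now reads first) is `~/.docker/config.json`. Its relevant portion looks roughly like this; the registry name here is illustrative, and the `auth` value is the base64 encoding of `myuser:mypassword`:

```json
{
  "auths": {
    "hub.example.com:5000": {
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}
```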
Thanks,
Daniel
Yeah, this was in the release notes, but buried in the summary of changes under "Improved Docker auth handling and other misc. Docker improvements".
This is also documented here.
So going forward we will have to make sure to run a docker login on any minions before running a state with any docker states requiring authorization?
What is the recommended method of doing that for states especially if your login credentials expire on a time interval?
After updating salt-minion (apt history tells me 2016.11.3+ds-1 to 2016.11.4+ds-1) my highstate failed to pull the requested image.
My pillar data is configured to authenticate for the registry, just like the docs describe.
But it somehow fails to write the config.json file?
Additional info:
The pillar setup:

```yaml
docker-registries:
  myregistry.com:5005:
    password: mypassword
    username: myuser
    reauth: True
```
The state setup:

```yaml
myregistry.com:5005/projecta/imageb:latest:
  dockerng.image_present:
    - force: True
```
Old debug output where it would still work:

```
[INFO ] Running state [myregistry.com:5005/projecta/imageb:latest] at time 12:51:45.518898
[INFO ] Executing state dockerng.image_present for myregistry.com:5005/projecta/imageb:latest
[DEBUG ] "GET /v1.26/images/json?only_ids=0&all=0 HTTP/1.1" 200 3
[DEBUG ] "GET /v1.26/images/json?only_ids=0&all=1 HTTP/1.1" 200 3
[DEBUG ] dockerng was unable to load credential from ~/.docker/config.json trying with pillar now ([Errno 2] No such file or directory: '/root/.docker/config.json')
[DEBUG ] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
[DEBUG ] No config file found
[DEBUG ] Looking for auth entry for 'myregistry.com:5005'
[DEBUG ] No entry found
[DEBUG ] "POST /v1.26/auth HTTP/1.1" 200 48
[DEBUG ] Looking for auth config
[DEBUG ] Looking for auth entry for 'myregistry.com:5005'
[DEBUG ] Found 'myregistry.com:5005'
[DEBUG ] Found auth config
[DEBUG ] "POST /v1.26/images/create?tag=latest&fromImage=myregistry.com%3A5005%2Fprojecta%2Fimageb HTTP/1.1" 200 None
[DEBUG ] "GET /v1.26/images/myregistry.com:5005/projecta/imageb:latest/json HTTP/1.1" 200 None
[DEBUG ] "GET /v1.26/images/json?only_ids=0&all=0 HTTP/1.1" 200 423
[INFO ] {'Layers': {'Pulled': [u'3c3bea8240dd', u'9f54acfd1a5c', u'12a7970a6783', u'93ec42f15706', u'0a315b1fdf67', u'c6e759426204']}, 'Status': u'Downloaded newer image for myregistry.com:5005/projecta/imageb:latest', 'Time_Elapsed': 10.269906044006348}
[INFO ] Completed state [myregistry.com:5005/projecta/imageb:latest] at time 12:51:55.818511 duration_in_ms=10299.613
```
New debug output where it no longer works:

```
[INFO ] Running state [myregistry.com:5005/projecta/imageb:latest] at time 13:06:08.184217
[INFO ] Executing state dockerng.image_present for myregistry.com:5005/projecta/imageb:latest
[DEBUG ] Attempting to run docker-py's "images" function with args=() and kwargs={'all': False}
[DEBUG ] "GET /v1.26/images/json?only_ids=0&all=0 HTTP/1.1" 200 None
[DEBUG ] Attempting to run docker-py's "inspect_image" function with args=('myregistry.com:5005/projecta/imageb:latest',) and kwargs={}
[DEBUG ] "GET /v1.26/images/myregistry.com:5005/projecta/imageb:latest/json HTTP/1.1" 200 None
[DEBUG ] Attempting to run docker-py's "pull" function with args=('myregistry.com:5005/projecta/imageb',) and kwargs={'tag': 'latest', 'stream': True}
[DEBUG ] Looking for auth config
[DEBUG ] No auth config in memory - loading from filesystem
[DEBUG ] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
[DEBUG ] No config file found
[DEBUG ] Looking for auth entry for 'myregistry.com:5005'
[DEBUG ] No entry found
[DEBUG ] No auth config found
[DEBUG ] "POST /v1.26/images/create?tag=latest&fromImage=myregistry.com%3A5005%2Fprojecta%2Fimageb HTTP/1.1" 500 127
[ERROR ] Encountered error pulling myregistry.com:5005/projecta/imageb:latest: Unable to perform pull: 500 Server Error: Internal Server Error for url: http+docker://localunixsocket/v1.26/images/create?tag=latest&fromImage=myregistry.com%3A5005%2Fprojecta%2Fimageb
[INFO ] Completed state [myregistry.com:5005/projecta/imageb:latest] at time 13:06:08.341713 duration_in_ms=157.495
```
Additional info:

- Docker version: 1.13.1-0~ubuntu-xenial
- The minion has no `~/.docker/config.json`, so it never wrote it?
- `salt-call docker.login` completes, but does not solve the issue
- `docker login myregistry.com:5005` did fix it. Probably because this wrote my `~/.docker/config.json`
- `salt-call pillar.items` is showing the correct data for authentication

I'm probably missing something obvious here. Please let me know what it is.
@nstapelbroek the new `docker.login` function runs a `docker login` on each of the configured registries. This should write the config.json. You said that the function "completes, but does not solve the issue". What was the output of that command? It should return a dictionary of the configured registries along with True/False results to note success or failure. If you don't see any registries in the output, then it is possible that the new `docker.login` function is not properly pulling in all configured registries from the pillar data. Can you post the pillar configuration you are using for your registries, of course with username/password obfuscated? I'm interested in particular in the structure and name of the pillar variables.

Note: to test `docker.login` you'll want to remove/rename the config.json to confirm that it gets written by the `docker.login` function.
@tyhunt99

> So going forward we will have to make sure to run a docker login on any minions before running a state with any docker states requiring authorization?

Yes

> What is the recommended method of doing that for states especially if your login credentials expire on a time interval?

You can use the scheduler to tell the minion to re-auth on a given schedule.
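For example, a minion scheduler entry along these lines would re-run the login every few hours and once when the minion starts. This is a sketch: the job name `docker_reauth` and the 6-hour interval are arbitrary examples, not values from this thread.

```yaml
# A sketch of a minion schedule entry (e.g. in the minion config or pillar).
# The job name and interval are illustrative; on 2016.11 the function is
# dockerng.login rather than docker.login.
schedule:
  docker_reauth:
    function: dockerng.login
    hours: 6
    run_on_start: True
```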
@terminalmage
That worked perfectly. The run_on_start flag was exactly what I was looking for. Thank you!
I am going to close this now since it is expected behavior and I have resolved my issue.
@nstapelbroek Just FYI, you can still reply to this issue and we can continue to troubleshoot even though it has been closed.
@terminalmage Thanks for the heads up! Due to a national holiday the office is closed and I forgot to configure my personal ssh key on the server. I'll get back to you with the output of `salt-call docker.login` tomorrow.
Cool, remember to also provide the pillar setup so I can see the structure of the pillar keys in which you've configured your credentials.
Good morning,
As promised, here is the debug output of my `salt-call docker.login` and the pillar data.
This is what happens if the `~/.docker/config.json` file is present:
```
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/99-master-address.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/99-master-address.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/mysql.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/mysql.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: server01.projecta.projects.companyname.com
[DEBUG ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/99-master-address.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/99-master-address.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/mysql.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/mysql.conf
[WARNING ] Unable to find IPv6 record for "server01.projecta.projects.companyname.com" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
[DEBUG ] Connecting to master. Attempt 1 of 1
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] Generated random reconnect delay between '1000ms' and '11000ms' (10522)
[DEBUG ] Setting zmq_reconnect_ivl to '10522ms'
[DEBUG ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'clear')
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] SaltEvent PUB socket URI: /var/run/salt/minion/minion_event_5160533104_pub.ipc
[DEBUG ] SaltEvent PULL socket URI: /var/run/salt/minion/minion_event_5160533104_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/minion/minion_event_5160533104_pull.ipc
[DEBUG ] Sending event: tag = salt/auth/creds; data = {'_stamp': '2017-04-28T06:10:16.272415', 'creds': {'publish_port': 4505, 'aes': 'uupSfWlpgoTr/zchwDBt6IxNMkhkWBave4dFdTGrwTs=', 'master_uri': 'tcp://1.2.3.4:4506'}, 'key': ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')}
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Determining pillar cache
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] LazyLoaded docker.login
[DEBUG ] LazyLoaded config.get
[DEBUG ] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
[DEBUG ] Found file at path: /root/.docker/config.json
[DEBUG ] Found 'auths' section
[DEBUG ] Found entry (registry=u'myregistry.com:5005', username=u'myuser')
[DEBUG ] "GET /version HTTP/1.1" 200 221
[DEBUG ] Looking for auth entry for 'myregistry.com:5005'
[DEBUG ] Found 'myregistry.com:5005'
[DEBUG ] Looking for auth entry for 'docker.io'
[DEBUG ] No entry found
[DEBUG ] "POST /v1.26/auth HTTP/1.1" 200 48
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] LazyLoaded nested.output
local:
----------
IdentityToken:
Status:
Login Succeeded
```

Here is my output while the `~/.docker/config.json` file is missing:

```
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/99-master-address.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/99-master-address.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/mysql.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/mysql.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: server01.projecta.projects.companyname.com
[DEBUG ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/99-master-address.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/99-master-address.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Including configuration from '/etc/salt/minion.d/mysql.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/mysql.conf
[WARNING ] Unable to find IPv6 record for "server01.projecta.projects.companyname.com" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
[DEBUG ] Connecting to master. Attempt 1 of 1
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] Generated random reconnect delay between '1000ms' and '11000ms' (8439)
[DEBUG ] Setting zmq_reconnect_ivl to '8439ms'
[DEBUG ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'clear')
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] SaltEvent PUB socket URI: /var/run/salt/minion/minion_event_5160533104_pub.ipc
[DEBUG ] SaltEvent PULL socket URI: /var/run/salt/minion/minion_event_5160533104_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/minion/minion_event_5160533104_pull.ipc
[DEBUG ] Sending event: tag = salt/auth/creds; data = {'_stamp': '2017-04-28T06:15:42.799872', 'creds': {'publish_port': 4505, 'aes': 'uupSfWlpgoTr/zchwDBt6IxNMkhkWBave4dFdTGrwTs=', 'master_uri': 'tcp://1.2.3.4:4506'}, 'key': ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')}
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Determining pillar cache
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] LazyLoaded docker.login
[DEBUG ] LazyLoaded config.get
[DEBUG ] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
[DEBUG ] No config file found
[DEBUG ] "GET /version HTTP/1.1" 200 221
[DEBUG ] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
[DEBUG ] No config file found
[DEBUG ] Looking for auth entry for 'myregistry.com:5005'
[DEBUG ] No entry found
[DEBUG ] "POST /v1.26/auth HTTP/1.1" 200 48
[DEBUG ] Looking for auth entry for 'docker.io'
[DEBUG ] No entry found
[DEBUG ] "POST /v1.26/auth HTTP/1.1" 200 48
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'server01.projecta.projects.companyname.com', 'tcp://1.2.3.4:4506')
[DEBUG ] LazyLoaded nested.output
local:
----------
IdentityToken:
Status:
Login Succeeded
```

Here's the output of `salt-call pillar.items`; I've stripped out the pillar data unrelated to Docker. If you need anything more, let me know.

```
local:
----------
projecta-docker:
----------
image:
myregistry.com:5005/projecta/imageb
tag:
latest
docker-pkg:
----------
lookup:
----------
config:
- DOCKER_OPTS="--bip=10.1.0.1/24 --dns 1.2.3.4 --dns 4.3.2.1"
pip:
----------
version:
== 8.1.1
process_signature:
/usr/bin/docker
version:
1.13.1-0~ubuntu-xenial
docker-registries:
----------
myregistry.com:5005:
----------
password:
mypassword
reauth:
True
username:
myuser
```

In a previous comment I posted my state for pulling the image. That setup was a bit simplified; below is the setup using the pillar data. Not sure if it's related.

```jinja
{%- set frontend_image = salt['pillar.get']('projecta-docker:image') %}
{%- set frontend_tag = salt['pillar.get']('projecta-docker:tag') %}

{{ frontend_image }}:{{ frontend_tag }}:
  dockerng.image_present:
    - force: True
```
Thanks for your help so far.
@nstapelbroek Oh, I think I see the problem. So, I've been doing a lot of work recently for the upcoming feature release. In this release, we deprecate the legacy docker support and rename `dockerng` to `docker`. I originally made my auth fixes in this release branch and then later backported them to the 2016.3 and 2016.11 release branches. However, in doing so, I did not remember to change the documentation for the new login function. So, it all references `docker.login` when it should reference `dockerng.login`. When you ran `docker.login`, you were running a function in the legacy docker module.

Can you try running `dockerng.login`? I think this will work for you. In the meantime, I opened https://github.com/saltstack/salt/pull/40952 to correct the oversight in the documentation.
Also, just FYI, starting with the upcoming feature release, `dockerng` and `docker` will both refer to the same module and can be used interchangeably. Support for using `dockerng` will be removed after a couple more release cycles.

Also, my apologies for not catching this detail sooner. Like I said, I've been working a lot on the upcoming release, so the fact that `docker` == `dockerng` in that release was burned into my brain, and I wasn't thinking of the fact that this module is still only called `dockerng` in 2016.11 and earlier.
Using `salt-call dockerng.login` worked.

Thanks for the help @terminalmage. And no worries, `docker` has been deprecated for a while now according to the docs. I should have been able to realize this on my own :see_no_evil:
Well, I can't blame you for not catching it when I didn't catch it myself :smile:
Thanks for helping to catch this issue with the documentation.
@terminalmage One more question. I am trying to verify everything is working as intended so I removed the .docker/config.json to ensure it gets recreated and updated properly. But even after deleting it I am still authorized and the logs are stating it found an auth entry. Is there a way to force my credentials to no longer exist for testing purposes?
Logs after deleting .docker/config.json on the minion:

```
2017-05-02 09:26:21,135 [salt.minion ][DEBUG ][22399] Command details {'tgt_type': 'glob', 'jid': '20170502092621099577', 'tgt': 'deathstar', 'ret': '', 'user': 'sudo_thunt', 'arg': ['364535168108.dkr.ecr.us-east-1.amazonaws.com/postgres:9.6.1'], 'fun': 'dockerng.pull'}
2017-05-02 09:26:21,135 [salt.minion ][TRACE ][22399] Started JIDs: ['20170502091528642872', '20170502091548836920', '20170502091556786031', '20170502091606787567', '20170502091649332130', '20170502091725164502', '20170502091831605756', '20170502091845959788', '20170502092007348353', '20170502092010892105', '20170502092152662834', '20170502092236481752']
2017-05-02 09:26:21,149 [salt.minion ][INFO ][9555] Starting a new job with PID 9555
2017-05-02 09:26:21,151 [salt.minion ][TRACE ][9555] Executors list ['direct_call.get']
2017-05-02 09:26:21,151 [salt.utils.lazy ][DEBUG ][9555] LazyLoaded direct_call.get
2017-05-02 09:26:21,152 [salt.loader.salt.petrode.com.int.module.dockerng][DEBUG ][9555] Attempting to run docker-py's "images" function with args=() and kwargs={'all': True}
2017-05-02 09:26:21,822 [requests.packages.urllib3.connectionpool][DEBUG ][9555] "GET /v1.24/images/json?only_ids=0&all=1 HTTP/1.1" 200 None
2017-05-02 09:26:21,928 [salt.loader.salt.petrode.com.int.module.dockerng][DEBUG ][9555] Attempting to run docker-py's "pull" function with args=('364535168108.dkr.ecr.us-east-1.amazonaws.com/postgres',) and kwargs={'tag': '9.6.1', 'stream': True}
2017-05-02 09:26:21,928 [docker.auth.auth ][DEBUG ][9555] Looking for auth config
2017-05-02 09:26:21,928 [docker.auth.auth ][DEBUG ][9555] Looking for auth entry for '364535168108.dkr.ecr.us-east-1.amazonaws.com'
2017-05-02 09:26:21,929 [docker.auth.auth ][DEBUG ][9555] Found u'https://364535168108.dkr.ecr.us-east-1.amazonaws.com'
2017-05-02 09:26:21,929 [docker.auth.auth ][DEBUG ][9555] Found auth config
2017-05-02 09:26:22,635 [requests.packages.urllib3.connectionpool][DEBUG ][9555] "POST /v1.24/images/create?tag=9.6.1&fromImage=364535168108.dkr.ecr.us-east-1.amazonaws.com%2Fpostgres HTTP/1.1" 200 None
2017-05-02 09:26:22,639 [salt.loader.salt.petrode.com.int.module.dockerng][DEBUG ][9555] Attempting to run docker-py's "inspect_image" function with args=('364535168108.dkr.ecr.us-east-1.amazonaws.com/postgres:9.6.1',) and kwargs={}
2017-05-02 09:26:22,641 [requests.packages.urllib3.connectionpool][DEBUG ][9555] "GET /v1.24/images/364535168108.dkr.ecr.us-east-1.amazonaws.com/postgres:9.6.1/json HTTP/1.1" 200 None
2017-05-02 09:26:22,642 [salt.minion ][DEBUG ][9555] Minion return retry timer set to 7 seconds (randomized)
2017-05-02 09:26:22,642 [salt.minion ][INFO ][9555] Returning information for job: 20170502092621099577
2017-05-02 09:26:22,642 [salt.transport.zeromq][DEBUG ][9555] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'deathstar', 'tcp://52.7.96.26:4506', 'aes')
2017-05-02 09:26:22,643 [salt.crypt ][DEBUG ][9555] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'deathstar', 'tcp://52.7.96.26:4506')
```
Also, on another note: sometimes even after running dockerng.login and it returning True, a dockerng.pull will still claim invalid authorization for some time and then seemingly start working after some arbitrary delay.
Output from `dockerng.login`:

```
deathstar:
----------
Errors:
Results:
----------
https://364535168108.dkr.ecr.us-east-1.amazonaws.com:
True
```
Edit: Found the issue, but I am not sure how to resolve it. Apparently docker-py will read in the auth config file and store it in memory. Can you suggest a good way to handle this? My current method is to simply restart the salt-minion service when running a docker login, but this isn't ideal.
@tyhunt99 There's no real workaround right now since this is a docker-py issue. I took a look at the docker-py source and opened https://github.com/docker/docker-py/pull/1586 to add the ability to refresh the credentials from the config.json. Once that's merged in, we will be able to get https://github.com/saltstack/salt/pull/41011 merged into Salt, which will force docker-py to reload the config.json when it needs to auth.
The upstream PR in docker-py has been merged and looks like it will be released in docker-py 2.3.0. #41011 may miss 2016.11.5 but will definitely be in the next feature release.
@terminalmage Great thank you!
I also hit this today, and spent a few hours trying to work it out; the change of `docker.login` to `dockerng.login` worked however, thanks.

FYI, the docs need to be updated here: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.dockerng.html#authentication
@elsmorian As I mentioned in this comment, I opened https://github.com/saltstack/salt/pull/40952 to fix the docs. I'm working with our team here to try to get the docs rebuilt and pushed so that the website reflects the fix.
@terminalmage Aha, sorry I missed that, hope they are updated soon :)
The docs are now updated. We found and fixed an issue with our automated docs build and publish setup.