Salt: [ERROR ] Salt request timed out. If this error persists, worker_threads may need to be increased.

Created on 29 Jun 2015 · 22 Comments · Source: saltstack/salt

I don't understand what's going on here. When I try to target using grains I get the following error and it takes quite a while to time out.

[root@i-8d------ master.d]# time salt -G 'role:eam-ejcron' test.ping

[ERROR   ] Salt request timed out. If this error persists, worker_threads may need to be increased.

Failed to authenticate!  This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).

real    0m35.388s
user    0m0.751s
sys     0m0.258s
[root@i-8d------ master.d]# 

If I ping an instance that I know has that role by name, it works, though it's still pretty slow.

[root@i-8d------ master.d]# time  salt "i-30------" test.ping
i-30------:
    True
real    0m12.682s
user    0m0.808s
sys     0m0.268s
[root@i-8d------ master.d]# 

Verifying that the instance has the grain I think it should.

[root@i-8d------ master.d]# time salt 'i-30------' cmd.run 'grep role /etc/salt/grains'
i-30------:
    role: eam-ejcron
real    0m5.138s
user    0m0.671s
sys     0m0.245s
[root@i-8d------ master.d]# time  salt 'i-30------' grains.get role             
i-30------:
    eam-ejcron
real    0m4.445s
user    0m0.772s
sys     0m0.330s
[root@i-8d------ master.d]#

The Salt master's version report follows. It is an m3.medium in EC2.

[root@i-8d------ master.d]# salt --versions-report
         Salt: 2014.7.5
         Python: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
         Jinja2: 2.2.1
         M2Crypto: 0.20.2
         msgpack-python: 0.1.13
         msgpack-pure: Not Installed
         pycrypto: 2.0.1
         libnacl: Not Installed
         PyYAML: 3.10
         ioflo: Not Installed
         PyZMQ: 14.3.1
         RAET: Not Installed
         ZMQ: 4.0.4
         Mako: Not Installed
[root@i-8d------ master.d]#

Plenty of free disk space and inodes.

[root@i-8d------ master.d]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvde1      7.9G  4.6G  3.4G  58% /
[root@i-8d------ master.d]# df -i /
Filesystem     Inodes IUsed  IFree IUse% Mounted on
/dev/xvde1     524288 63094 461194   13% /
[root@i-8d------ master.d]#

The worker_threads setting:

[root@i-8d------ master.d]# grep worker_threads /etc/salt/master.d/jobs.conf
worker_threads: 16
[root@i-8d------ master.d]#
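
If bumping worker_threads turns out to be the suggested workaround, this is roughly the change I would try (a sketch only: 32 is an arbitrary illustrative value, and the path is the same jobs.conf drop-in shown above):

# Raise worker_threads in the existing drop-in (illustrative value only)
sed -i 's/^worker_threads: 16$/worker_threads: 32/' /etc/salt/master.d/jobs.conf
# Restart the master so the new setting takes effect, then confirm it
service salt-master restart
grep worker_threads /etc/salt/master.d/jobs.conf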
Labels: Bug, Documentation, P2, severity-low, stale


All 22 comments

Oh, this seems vaguely related to https://github.com/saltstack/salt/issues/12248

@charleshbaker, thanks for the report. Do you have the same problems if you upgrade to 2015.5.2?

@charleshbaker, both of those should be fixed in 2015.5.2. If your experience is otherwise, I would like to know. :-)

@basepi said the fix wouldn't be in until 2015.5.3. I didn't find it in the release notes for 2015.5.2: http://docs.saltstack.com/en/latest/topics/releases/2015.5.2.html.

It looks like you're right. 2015.5.3 should be coming out in a week or two.

We have upgraded our masters to 2015.5.3, but the problem persists. We have not yet upgraded the minions.

[root@i-8dc54ba0 ~]# salt --versions-report
           Salt: 2015.5.3
         Python: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
         Jinja2: 2.2.1
       M2Crypto: 0.20.2
 msgpack-python: 0.1.13
   msgpack-pure: Not Installed
       pycrypto: 2.0.1
        libnacl: Not Installed
         PyYAML: 3.10
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: Not Installed
        Tornado: Not Installed
[root@i-8dc54ba0 ~]# service salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
[root@i-8dc54ba0 ~]# salt -G "role:eam-ejcron" test.ping
Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
You have mail in /var/spool/mail/root
[root@i-8dc54ba0 ~]# salt -G "role: eam-ejcron" test.ping
Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
[root@i-8dc54ba0 ~]#

I should mention that we have a multi-master setup. The behavior above is seen on one master; I see different behavior from the other master. Why is that master including itself in the results of the test.ping? It doesn't match the role I'm targeting.

[root@i-2210edcc ~]#  salt -G "role: eam-ejcron" test.ping
i-2210edcc:
    Minion did not return. [Not connected]
[root@i-2210edcc ~]#  salt -G "role: eam-couchbase" test.ping
i-2210edcc:
    Minion did not return. [Not connected]
[root@i-2210edcc ~]# grep "^role" /etc/salt/grains
role: salt-master
You have mail in /var/spool/mail/root
[root@i-2210edcc ~]# salt --versions-report
           Salt: 2015.5.3
         Python: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
         Jinja2: 2.2.1
       M2Crypto: 0.20.2
 msgpack-python: 0.1.13
   msgpack-pure: Not Installed
       pycrypto: 2.0.1
        libnacl: Not Installed
         PyYAML: 3.10
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: Not Installed
        Tornado: Not Installed
[root@i-2210edcc ~]#
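
For completeness, two other ways I can check the grain from the master side (both commands are standard Salt CLI usage; the grain name and value are just the ones from this thread, and I am not sure whether the space after the colon in "role: eam-ejcron" is significant, so I stick to the no-space form here):

# List the role grain for every responding minion
salt '*' grains.get role
# The same grain target expressed with the compound matcher
salt -C 'G@role:eam-ejcron' test.ping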

@charleshbaker, is it possible for you to confirm this bug in a single-master setup? If that is too much work, it will be fine if you can't. Thanks.

Also, what does the master debug log show when this happens? Do you have this problem if you target the minions without using grains, with a glob, for example?
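
For reference, a minimal sketch of how that information could be gathered (the -l debug flag, the glob target, and /var/log/salt/master are standard; exact paths may vary by distro):

# Run the master in the foreground with debug logging
service salt-master stop
salt-master -l debug
# In a second shell, try a glob target instead of a grain target
salt '*' test.ping
# Or watch the persistent master log while reproducing the timeout
tail -f /var/log/salt/master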

Same here with Salt 2015.5.5 on a fresh install of CentOS 7.1.
After starting salt-master and salt-minion, I don't see any minion with salt-key...
I have to wait ~3 minutes before a minion shows up...
Then I try salt '*' test.ping (I have only one minion, which is my salt master)
and I get this error: Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.

I wait another ~3 minutes and everything seems to be working :/
salt-master seems to be stuck in some kind of busy loop...
Really strange :/
I will post whatever else I find.
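
In case it helps with reproducing, these are the commands I am using to watch for the minion to appear (both are standard, nothing non-standard assumed here):

# Show accepted/unaccepted/rejected minion keys on the master
salt-key -L
# Ask the master which minions are currently reachable
salt-run manage.up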

Same issue with a clean, freshly installed "Getting Started" tutorial setup:

root@saltmaster:/var/log/salt# salt '*' test.ping
Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.

The Vagrant provider is VirtualBox 5.0.20 on Win10 64-bit. The master and minions successfully ping each other.

root@saltmaster:/var/log/salt# salt --versions-report
Salt Version:
           Salt: 2016.3.0

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 1.5
          gitdb: 0.5.4
      gitpython: 0.3.2 RC1
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 0.9.1
   msgpack-pure: Not Installed
 msgpack-python: 0.3.0
   mysql-python: 1.2.3
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.6 (default, Jun 22 2015, 17:58:13)
   python-gnupg: Not Installed
         PyYAML: 3.10
          PyZMQ: 14.0.1
           RAET: Not Installed
          smmap: 0.8.2
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.4

System Versions:
           dist: Ubuntu 14.04 trusty
        machine: x86_64
        release: 3.13.0-86-generic
         system: Linux
        version: Ubuntu 14.04 trusty

@Merlineus, can you post which get started tutorial you were using and/or the configs you have in place for the master and minion? Thanks.

@jfindlay, I used the repo from the tutorial without changing anything.
There were no messages in the master's log at all, as if the salt command didn't even try to connect to the master.

Changing the master's box to hashicorp/precise64 seems to have resolved my issue...

@Merlineus, thanks for the extra information. This seems like something @UtahDave or @jacobhammons could fix up in the tutorial.

I have encountered the same problem. I have flaky responses from the minions following the most recent upgrade of the vagrant box ubuntu/trusty64. Using hashicorp/precise64 has solved the problem.

@Merlineus @stuartnelson3 Editing the Vagrantfile to use hashicorp/precise64 fixed the problem for me as well; does anyone know why?

I tested the get started repo and was able to confirm the timeout issues reported by @Merlineus and others. Changing the bootstrap installation type from "stable" to "git v2016.3.0" seems to fix the issue. After this change the minions respond immediately.

@jfindlay Any idea what might be different between these two install types? Might help us narrow the issue.

For now I've submitted a PR to change the get started repo Vagrantfile so others avoid this issue.

@jacobhammons, currently, there should be no effective difference between stable and git v2016.3.0. Both should install 2016.3.0. The first installs from packages and the second from git+pip. I am not familiar enough with the vagrant install method used to say for sure, but that error seems to happen for many different reasons, so I can only recommend basic troubleshooting, like: are the daemons running, does the minion have the master configured correctly, etc.
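
For anyone comparing the two install types outside of Vagrant, this is roughly how salt-bootstrap is invoked for each (standard salt-bootstrap usage; the exact call inside the get started Vagrantfile may differ):

# Fetch the bootstrap script
curl -o bootstrap-salt.sh -L https://bootstrap.saltstack.com
# Package-based ("stable") install
sudo sh bootstrap-salt.sh stable
# git+pip install pinned to the same release
sudo sh bootstrap-salt.sh git v2016.3.0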

Update regarding the get started Vagrant repo: @UtahDave added additional RAM to the VM profile. I tested using the latest version of ubuntu/trusty64, and that seems to have done the trick.
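
For anyone adjusting their own Vagrantfile, the memory bump uses the standard VirtualBox provider option and looks something like the following (1024 is only an illustration; I don't know the exact figure @UtahDave used):

config.vm.provider "virtualbox" do |vb|
  # Give the master VM more memory (illustrative value)
  vb.memory = 1024
end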

I have this problem! How do I fix it?

Even with the latest 2016.11.5, this problem is occurring.

ueda@vultr:/var/cache/salt/master$ sudo salt '*' test.ping
Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
ueda@vultr:/var/cache/salt/master$ salt --versions-report
Salt Version:
           Salt: 2016.11.5

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 2.2
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.3
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.9 (default, Jun 29 2016, 13:08:31)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.4.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: debian 8.8 
        machine: x86_64
        release: 3.16.0-4-amd64
         system: Linux
        version: debian 8.8 

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.
