Since updating to 2016.3.1 (from 2015.8.8), I've started seeing _a lot_ of this in the master log on my development setup (two CentOS Vagrant boxes):
2016-07-27 16:30:01,034 [salt.transport.ipc][ERROR ][2378] Exception occurred while handling stream: [Errno 0] Success
It seems related to publish.runner; I can reproduce it with `salt-call publish.runner manage.up`. Everything apparently works, but when there is a lot of publishing going on it leads to quite a high volume of log garbage.
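For anyone else trying to trigger it, something along these lines is enough on my setup (the loop count and the default log path are just illustrative):

```sh
# Drive a batch of runner publishes from a minion; each invocation tends to
# add one or more of the "[Errno 0] Success" lines to the master log.
for i in $(seq 1 20); do
    salt-call publish.runner manage.up > /dev/null
done

# Then count the noise on the master (assumes the default log location):
grep -c 'Exception occurred while handling stream' /var/log/salt/master
```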
# salt-master --versions-report
Salt Version:
Salt: 2016.3.1
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: Not Installed
gitdb: 0.5.4
gitpython: 0.3.2 RC1
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: 0.21.1
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.7
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.3.1
RAET: Not Installed
smmap: 0.8.1
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 3.2.5
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
@carlpett are you using salt-api at all in your setup? If so, I also have a similar bug here: #34460
@Ch3LL No, we're not using salt-api at the moment. Maybe the API uses publishing under the hood, though?
Seeing the same thing on my salt-master. Thinking this might be contributing to the higher CPU utilization I am seeing compared to a 2015.8 server.
# salt-master --versions-report
Salt Version:
Salt: 2016.11.2
Dependency Versions:
cffi: Not Installed
cherrypy: 3.5.0
dateutil: 2.6.0
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.12 (default, Nov 19 2016, 06:48:10)
python-gnupg: 0.3.8
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
machine: x86_64
release: 4.4.0-66-generic
system: Linux
version: Ubuntu 16.04 xenial
Not sure if this is related, but I have one minion that has filled up its disk due to this log message in /var/log/salt/minion:
2017-03-19 06:47:04,973 [salt.transport.ipc][ERROR ][19408] Exception occurred while Subscriber handling stream: Already reading
That message is hitting the minion log about 10 times per millisecond (yes, per millisecond).
Hi, exactly the same thing as @twellspring: about 6 per millisecond and a crash due to a filled-up disk. I do not really know what could have caused it, as the minion was idle.
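In case it helps others hitting the disk-full side of this before a fix lands: aggressive log rotation is only a band-aid, but it keeps the box alive. A sketch, assuming the default /var/log/salt/minion path (the file name and retention below are just illustrative):

```sh
# Workaround only, not a fix: rotate the minion log so the flood of
# "Already reading" errors cannot fill the disk. At ~10 messages/ms even
# daily rotation may not keep up, so this only buys time until the patch.
cat > /etc/logrotate.d/salt-minion-flood <<'EOF'
/var/log/salt/minion {
    daily
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF
```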
Can you guys give this fix a try: https://github.com/saltstack/salt/pull/41409
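If you want to try it against an installed package rather than building from source, something along these lines should work (rough sketch only: the site-packages path is an assumption and differs per distro, and the PR may touch files under tests/ that are not shipped in the package, so skip those hunks). Back up salt/transport/ipc.py first.

```sh
# Apply the PR diff directly onto the installed salt package (path is an
# assumption; Ubuntu often uses /usr/lib/python2.7/dist-packages instead).
cd /usr/lib/python2.7/site-packages
curl -sL https://github.com/saltstack/salt/pull/41409.diff | patch -p1 --dry-run
curl -sL https://github.com/saltstack/salt/pull/41409.diff | patch -p1

# Restart the daemons so the patched code is loaded.
systemctl restart salt-master   # and/or salt-minion, depending on where you see the error
```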
@Ch3LL Hi, I cannot reproduce the bug, so I cannot evaluate the patch, sorry. @carlpett @twellspring can you?
Ran into this issue on 2016.11.1 minions, and #41409 seems to have resolved the above `Exception occurred while handling stream: [Errno 0] Success` errors for us.
@Ch3LL Is this fix for the minion issue, or for both the minion and the master?
If it is just the minion, I do not have any salt minions experiencing the problem, so I cannot test the fix.
@Ch3LL I've been testing this out on my salt-master v2016.11.5, and after applying the patch I have not seen the error message again:
Exception occurred while handling stream: [Errno 0] Success