For many devices, I see this error appearing continuously in the proxy minion logs:
Exception argument of type 'bool' is not iterable occurred in scheduled job
I am using an Elasticsearch returner for all Salt logs, and it is getting clogged with this same message repeated far too many times.
It happens with both Cisco and Juniper devices.
Any idea what could be the issue?
The pillar file is pretty simple:
proxy:
  proxytype: napalm
  driver: ios
  host: 1.2.4.5
  username: *****
  password: *****
Log:
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[DEBUG ] Sending global request "[email protected]"
[DEBUG ] Global request denied.
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[DEBUG ] Sending global request "[email protected]"
[DEBUG ] Global request denied.
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
[ERROR ] Exception argument of type 'bool' is not iterable occurred in scheduled job
Salt Version:
Salt: 2018.3.0
Dependency Versions:
cffi: 1.11.5
cherrypy: Not Installed
dateutil: 2.4.2
docker-py: Not Installed
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: 2.18
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.12 (default, Dec 4 2017, 14:50:18)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
locale: UTF-8
machine: x86_64
release: 4.4.0-116-generic
system: Linux
version: Ubuntu 16.04 xenial
Can you set log_level: debug in /etc/salt/proxy and restart the proxy minion? With the log level at debug you should see a stack trace.
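For reference, the change would look roughly like this (the restart command is just one way to do it; adjust it to however your salt-proxy process is managed, and <minion-id> is a placeholder for the proxy minion's ID):

# /etc/salt/proxy
log_level: debug

# then restart the proxy, e.g. by running it in the foreground for this device:
salt-proxy --proxyid=<minion-id> -l debug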
Also, any other configs you have set would be helpful in replicating this.
Thanks,
Daniel
Hi Daniel,
I get the same log with the proxy running in debug mode:
2018-05-18 04:13:08,387 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:09,387 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:09,769 [paramiko.transport:1687][DEBUG ][7543] Sending global request "[email protected]"
2018-05-18 04:13:09,991 [paramiko.transport:1687][DEBUG ][7543] Global request denied.
2018-05-18 04:13:10,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:11,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:12,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:13,387 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:14,387 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:14,801 [paramiko.transport:1687][DEBUG ][7543] Sending global request "[email protected]"
2018-05-18 04:13:15,023 [paramiko.transport:1687][DEBUG ][7543] Global request denied.
2018-05-18 04:13:15,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:16,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:17,388 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-18 04:13:18,387 [salt.minion :433 ][ERROR ][7543] Exception argument of type 'bool' is not iterable occurred in scheduled job
I have not added anything else apart from that simple pillar file I gave above.
The device model is a Cisco Catalyst 4510R+E.
Can you switch it to log_level: trace? That should give you a full stack trace somewhere.
This is what I see:
2018-05-20 22:45:57,392 [salt.utils.schedule:35 ][TRACE ][32734] ==== evaluating schedule now None =====
2018-05-20 22:45:57,392 [salt.minion :433 ][ERROR ][32734] Exception argument of type 'bool' is not iterable occurred in scheduled job
2018-05-20 22:45:58,392 [salt.utils.schedule:35 ][TRACE ][32734] ==== evaluating schedule now None =====
2018-05-20 22:45:58,393 [salt.utils.schedule:1375][INFO ][32734] Running scheduled job: __proxy_keepalive
2018-05-20 22:45:58,393 [salt.utils.schedule:1379][DEBUG ][32734] schedule: This job was scheduled with jid_include, adding to cache (jid_include defaults to True)
2018-05-20 22:45:58,394 [salt.utils.schedule:1384][DEBUG ][32734] schedule: This job was scheduled with a max number of 1
2018-05-20 22:45:58,394 [salt.minion :433 ][ERROR ][32734] Exception argument of type 'bool' is not iterable occurred in scheduled job
I cannot find it right now, but I have the feeling that I saw a PR fixing this a couple of weeks ago, so it should be solved in 2018.3.1. If I'm not completely wrong, @garethgreenaway might have submitted that PR to fix this.
Hey @mirceaulinic
Is there any fix for this issue that I can apply before 2018.3.1 is released?
I have the same problem with version 2018.3.2.
I also see the problem in 2018.3.2. I can provide further information if needed.
Same problem with 2018.3.2. I'll try to hack the code a bit and let you know if I discover something relevant.
The problem seems to come from a job called "enabled" that gets installed. It can be seen by running the schedule.list execution module against an affected proxy minion.
mah_router01:
  schedule:
    __mine_interval:
      enabled: true
      function: mine.update
      jid_include: true
      maxrunning: 2
      minutes: 60
      return_job: false
    __proxy_keepalive:
      enabled: true
      function: status.proxy_reconnect
      jid_include: true
      kwargs:
        proxy_name: napalm
      maxrunning: 1
      minutes: 1
      return_job: false
    enabled: true
It's possible to remove the job using schedule.delete, which causes the messages to stop. I just can't work out where it's coming from; it's not in the pillar or on the filesystem, so I assume it's managed internally.
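In case it helps anyone else hitting this, the delete would look something like the following, run from the master (using mah_router01, the affected proxy minion from the output above, as the target):

salt mah_router01 schedule.delete enabled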
Yes, it's the same for me:
root@master:~# salt XXX schedule.list
XXX:
  schedule:
    enabled: true
The workaround of deleting it actually works.
@garethgreenaway would you happen to know where this enabled: True comes from?
Thanks,
Daniel
I know it's kind of a weird way to debug, but I'm not a coder, unfortunately. Here it is:
The code is in utils/schedule.py, starting at line 440, in Schedule.enable_schedule:
def enable_schedule(self):
    '''
    Enable the scheduler.
    '''
    print('*** DEBUG1 ***' + str(self.opts['schedule']))
    self.opts['schedule']['enabled'] = True
    print('*** DEBUG2 ***' + str(self.opts['schedule']))
Output:
*** DEBUG1 ***{u'__proxy_keepalive': {u'function': u'status.proxy_reconnect', u'enabled': True, u'return_job': False, u'jid_include': True, u'maxrunning': 1, u'kwargs': {u'proxy_name': 'napalm'}, u'minutes': 1}, u'__mine_interval': {u'function': u'mine.update', u'enabled': True, u'return_job': False, u'jid_include': True, u'maxrunning': 2, u'minutes': 10}}
*** DEBUG2 ***{u'enabled': True, u'__proxy_keepalive': {u'function': u'status.proxy_reconnect', u'enabled': True, u'return_job': False, u'jid_include': True, u'maxrunning': 1, u'kwargs': {u'proxy_name': 'napalm'}, u'minutes': 1}, u'__mine_interval': {u'function': u'mine.update', u'enabled': True, u'return_job': False, u'jid_include': True, u'maxrunning': 2, u'minutes': 10}}
That is how I debug :)
Ok, yes, this needs to be fixed.
The "enabled" job isn't actually a job, but determines whether or not the Salt scheduler is enabled or disabled just like individual jobs but for the whole thing. Normally it defaults to be enabled and when processing the schedule that is ignored, we might be evaluating the schedule differently on a proxy minion than on a regular minion which might result in the above error.
Yeah, we can probably get away with just popping the enabled key from the ~list~ dict before processing the schedule.
That code is in place here, https://github.com/saltstack/salt/blob/2018.3/salt/utils/schedule.py#L852
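For illustration only, a rough sketch of that idea; this is not the actual patch, and _iter_schedule_jobs is a made-up helper name:

def _iter_schedule_jobs(schedule):
    # hypothetical helper: yield only real job definitions, skipping the
    # scheduler-wide 'enabled' bool so the per-job checks never see a bool
    for name, data in schedule.items():
        if name == 'enabled':
            continue
        yield name, data


schedule = {
    'enabled': True,
    '__proxy_keepalive': {'function': 'status.proxy_reconnect', 'minutes': 1},
    '__mine_interval': {'function': 'mine.update', 'minutes': 60},
}
for name, data in _iter_schedule_jobs(schedule):
    print(name, data['function'])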
Cool! Thank you.
This is fixed in #49019
Any idea when 2018.3.3 will be released? I am waiting for this fix.
We're in the final testing phase of 2018.3.3 right now.